Total Evidence, Uncertainty and A Priori Beliefs
Bewersdorf, Benjamin; Felline, Laura; Ledda, Antonio; Paoli, Francesco; Rossanese, Emanuele
2016-01-01
Defining the rational belief state of an agent in terms of her initial or a priori belief state as well as her total evidence can help to address a number of important philosophical problems. In this paper, I discuss how this strategy can be applied to cases in which evidence is uncertain. I argue
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
Total error vs. measurement uncertainty: revolution or evolution?
Oosterhuis, Wytze P; Theodorsson, Elvar
2016-02-01
The first strategic EFLM conference "Defining analytical performance goals, 15 years after the Stockholm Conference" was held in the autumn of 2014 in Milan. It maintained the Stockholm 1999 hierarchy of performance goals but rearranged them and established five task and finish groups to work on topics related to analytical performance goals including one on the "total error" theory. Jim Westgard recently wrote a comprehensive overview of performance goals and of the total error theory critical of the results and intentions of the Milan 2014 conference. The "total error" theory originated by Jim Westgard and co-workers has a dominating influence on the theory and practice of clinical chemistry but is not accepted in other fields of metrology. The generally accepted uncertainty theory, however, suffers from complex mathematics and conceived impracticability in clinical chemistry. The pros and cons of the total error theory need to be debated, making way for methods that can incorporate all relevant causes of uncertainty when making medical diagnoses and monitoring treatment effects. This development should preferably proceed not as a revolution but as an evolution.
Code development for eigenvalue total sensitivity analysis and total uncertainty analysis
International Nuclear Information System (INIS)
Wan, Chenghui; Cao, Liangzhi; Wu, Hongchun; Zu, Tiejun; Shen, Wei
2015-01-01
Highlights: • We develop a new code for total sensitivity and uncertainty analysis. • The implicit effects of cross sections can be considered. • The results of our code agree well with TSUNAMI-1D. • Detailed analysis for origins of implicit effects is performed. - Abstract: The uncertainties of multigroup cross sections notably impact eigenvalue of neutron-transport equation. We report on a total sensitivity analysis and total uncertainty analysis code named UNICORN that has been developed by applying the direct numerical perturbation method and statistical sampling method. In order to consider the contributions of various basic cross sections and the implicit effects which are indirect results of multigroup cross sections through resonance self-shielding calculation, an improved multigroup cross-section perturbation model is developed. The DRAGON 4.0 code, with application of WIMSD-4 format library, is used by UNICORN to carry out the resonance self-shielding and neutron-transport calculations. In addition, the bootstrap technique has been applied to the statistical sampling method in UNICORN to obtain much steadier and more reliable uncertainty results. The UNICORN code has been verified against TSUNAMI-1D by analyzing the case of TMI-1 pin-cell. The numerical results show that the total uncertainty of eigenvalue caused by cross sections can reach up to be about 0.72%. Therefore the contributions of the basic cross sections and their implicit effects are not negligible
Uncertainty Model for Total Solar Irradiance Estimation on Australian Rooftops
Al-Saadi, Hassan; Zivanovic, Rastko; Al-Sarawi, Said
2017-11-01
The installations of solar panels on Australian rooftops have been in rise for the last few years, especially in the urban areas. This motivates academic researchers, distribution network operators and engineers to accurately address the level of uncertainty resulting from grid-connected solar panels. The main source of uncertainty is the intermittent nature of radiation, therefore, this paper presents a new model to estimate the total radiation incident on a tilted solar panel. Where a probability distribution factorizes clearness index, the model is driven upon clearness index with special attention being paid for Australia with the utilization of best-fit-correlation for diffuse fraction. The assessment of the model validity is achieved with the adoption of four goodness-of-fit techniques. In addition, the Quasi Monte Carlo and sparse grid methods are used as sampling and uncertainty computation tools, respectively. High resolution data resolution of solar irradiations for Adelaide city were used for this assessment, with an outcome indicating a satisfactory agreement between actual data variation and model.
International Nuclear Information System (INIS)
Monni, S.; Savolainen, I.; Peltoniemi, M.; Lehtonen, A.; Makipaa, R.; Palosuo, T.
2007-01-01
Uncertainty analysis facilitates identification of the most important categories affecting greenhouse gas (GHG) inventory uncertainty and helps in prioritisation of the efforts needed for development of the inventory. This paper presents an uncertainty analysis of GHG emissions of all Kyoto sectors and gases for Finland consolidated with estimates of emissions/removals from LULUCF categories. In Finland, net GHG emissions in 2003 were around 69 Tg (±15 Tg) CO2 equivalents. The uncertainties in forest carbon sink estimates in 2003 were larger than in most other emission categories, but of the same order of magnitude as in carbon stock change estimates in other land use, land-use change and forestry (LULUCF) categories, and in N2O emissions from agricultural soils. Uncertainties in sink estimates of 1990 were lower, due to better availability of data. Results of this study indicate that inclusion of the forest carbon sink to GHG inventories reported to the UNFCCC increases uncertainties in net emissions notably. However, the decrease in precision is accompanied by an increase in the accuracy of the overall net GHG emissions due to improved completeness of the inventory. The results of this study can be utilised when planning future GHG mitigation protocols and emission trading schemes and when analysing environmental benefits of climate conventions
International Nuclear Information System (INIS)
Moutsatsos, A.; Karaiskos, P.; Pantelis, E.; Georgiou, E.; Petrokokkinos, L.; Sakelliou, L.; Torrens, M.; Seimenis, I.
2013-01-01
Purpose: This work proposes and implements an experimental methodology, based on polymer gels, for assessing the total geometric uncertainty and characterizing its contributors in Gamma Knife (GK) radiosurgery. Methods: A treatment plan consisting of 26, 4-mm GK single shot dose distributions, covering an extended region of the Leksell stereotactic space, was prepared and delivered to a polymer gel filled polymethyl methacrylate (PMMA) head phantom (16 cm diameter) used to accurately reproduce every link in the GK treatment chain. The center of each shot served as a “control point” in the assessment of the GK total geometric uncertainty, which depends on (a) the spatial dose delivery uncertainty of the PERFEXION GK unit used in this work, (b) the spatial distortions inherent in MR images commonly used for target delineation, and (c) the geometric uncertainty contributor associated with the image registration procedure performed by the Leksell GammaPlan (LGP) treatment planning system (TPS), in the case that registration is directly based on the apparent fiducial locations depicted in each MR image by the N-shaped rods on the Leksell localization box. The irradiated phantom was MR imaged at 1.5 T employing a T2-weighted pulse sequence. Four image series were acquired by alternating the frequency encoding axis and reversing the read gradient polarity, thus allowing the characterization of the MR-related spatial distortions. Results: MR spatial distortions stemming from main field (B 0 ) inhomogeneity as well as from susceptibility and chemical shift phenomena (also known as sequence dependent distortions) were found to be of the order of 0.5 mm, while those owing to gradient nonlinearities (also known as sequence independent distortions) were found to increase with distance from the MR scanner isocenter extending up to 0.47 mm at an Euclidean distance of 69.6 mm. Regarding the LGP image registration procedure, the corresponding average contribution to the total
Moutsatsos, A; Karaiskos, P; Petrokokkinos, L; Sakelliou, L; Pantelis, E; Georgiou, E; Torrens, M; Seimenis, I
2013-03-01
This work proposes and implements an experimental methodology, based on polymer gels, for assessing the total geometric uncertainty and characterizing its contributors in Gamma Knife (GK) radiosurgery. A treatment plan consisting of 26, 4-mm GK single shot dose distributions, covering an extended region of the Leksell stereotactic space, was prepared and delivered to a polymer gel filled polymethyl methacrylate (PMMA) head phantom (16 cm diameter) used to accurately reproduce every link in the GK treatment chain. The center of each shot served as a "control point" in the assessment of the GK total geometric uncertainty, which depends on (a) the spatial dose delivery uncertainty of the PERFEXION GK unit used in this work, (b) the spatial distortions inherent in MR images commonly used for target delineation, and (c) the geometric uncertainty contributor associated with the image registration procedure performed by the Leksell GammaPlan (LGP) treatment planning system (TPS), in the case that registration is directly based on the apparent fiducial locations depicted in each MR image by the N-shaped rods on the Leksell localization box. The irradiated phantom was MR imaged at 1.5 T employing a T2-weighted pulse sequence. Four image series were acquired by alternating the frequency encoding axis and reversing the read gradient polarity, thus allowing the characterization of the MR-related spatial distortions. MR spatial distortions stemming from main field (B0) inhomogeneity as well as from susceptibility and chemical shift phenomena (also known as sequence dependent distortions) were found to be of the order of 0.5 mm, while those owing to gradient nonlinearities (also known as sequence independent distortions) were found to increase with distance from the MR scanner isocenter extending up to 0.47 mm at an Euclidean distance of 69.6 mm. Regarding the LGP image registration procedure, the corresponding average contribution to the total geometric uncertainty ranged from
Directory of Open Access Journals (Sweden)
Vicari Kristin J
2012-04-01
Full Text Available Abstract Background Cost-effective production of lignocellulosic biofuels remains a major financial and technical challenge at the industrial scale. A critical tool in biofuels process development is the techno-economic (TE model, which calculates biofuel production costs using a process model and an economic model. The process model solves mass and energy balances for each unit, and the economic model estimates capital and operating costs from the process model based on economic assumptions. The process model inputs include experimental data on the feedstock composition and intermediate product yields for each unit. These experimental yield data are calculated from primary measurements. Uncertainty in these primary measurements is propagated to the calculated yields, to the process model, and ultimately to the economic model. Thus, outputs of the TE model have a minimum uncertainty associated with the uncertainty in the primary measurements. Results We calculate the uncertainty in the Minimum Ethanol Selling Price (MESP estimate for lignocellulosic ethanol production via a biochemical conversion process: dilute sulfuric acid pretreatment of corn stover followed by enzymatic hydrolysis and co-fermentation of the resulting sugars to ethanol. We perform a sensitivity analysis on the TE model and identify the feedstock composition and conversion yields from three unit operations (xylose from pretreatment, glucose from enzymatic hydrolysis, and ethanol from fermentation as the most important variables. The uncertainty in the pretreatment xylose yield arises from multiple measurements, whereas the glucose and ethanol yields from enzymatic hydrolysis and fermentation, respectively, are dominated by a single measurement: the fraction of insoluble solids (fIS in the biomass slurries. Conclusions We calculate a $0.15/gal uncertainty in MESP from the TE model due to uncertainties in primary measurements. This result sets a lower bound on the error bars of
Fazzari, D M
2001-01-01
This report presents the results of an evaluation of the Total Measurement Uncertainty (TMU) for the Canberra manufactured Segmented Gamma Scanner Assay System (SGSAS) as employed at the Hanford Plutonium Finishing Plant (PFP). In this document, TMU embodies the combined uncertainties due to all of the individual random and systematic sources of measurement uncertainty. It includes uncertainties arising from corrections and factors applied to the analysis of transuranic waste to compensate for inhomogeneities and interferences from the waste matrix and radioactive components. These include uncertainty components for any assumptions contained in the calibration of the system or computation of the data. Uncertainties are propagated at 1 sigma. The final total measurement uncertainty value is reported at the 95% confidence level. The SGSAS is a gamma assay system that is used to assay plutonium and uranium waste. The SGSAS system can be used in a stand-alone mode to perform the NDA characterization of a containe...
Statistical approach for uncertainty quantification of experimental modal model parameters
DEFF Research Database (Denmark)
Luczak, M.; Peeters, B.; Kahsin, M.
2014-01-01
Composite materials are widely used in manufacture of aerospace and wind energy structural components. These load carrying structures are subjected to dynamic time-varying loading conditions. Robust structural dynamics identification procedure impose tight constraints on the quality of modal models...... represent different complexity levels ranging from coupon, through sub-component up to fully assembled aerospace and wind energy structural components made of composite materials. The proposed method is demonstrated on two application cases of a small and large wind turbine blade........ This paper aims at a systematic approach for uncertainty quantification of the parameters of the modal models estimated from experimentally obtained data. Statistical analysis of modal parameters is implemented to derive an assessment of the entire modal model uncertainty measure. Investigated structures...
Uncertainties of predictions from parton distributions 1, experimental errors
Martin, A D; Stirling, William James; Thorne, R S; CERN. Geneva
2003-01-01
We determine the uncertainties on observables arising from the errors on the experimental data that are fitted in the global MRST2001 parton analysis. By diagonalizing the error matrix we produce sets of partons suitable for use within the framework of linear propagation of errors, which is the most convenient method for calculating the uncertainties. Despite the potential limitations of this approach we find that it can be made to work well in practice. This is confirmed by our alternative approach of using the more rigorous Lagrange multiplier method to determine the errors on physical quantities directly. As particular examples we determine the uncertainties on the predictions of the charged-current deep-inelastic structure functions, on the cross-sections for W production and for Higgs boson production via gluon--gluon fusion at the Tevatron and the LHC, on the ratio of W-minus to W-plus production at the LHC and on the moments of the non-singlet quark distributions. We discuss the corresponding uncertain...
Experimental Realization of Popper's Experiment: Violation of Uncertainty Principle?
Kim, Yoon-Ho; Yu, Rong; Shih, Yanhua
An entangled pair of photon 1 and 2 are emitted in opposite directions along the positive and negative x-axis. A narrow slit is placed in the path of photon 1 which provides precise knowledge about its position along the y-axis and because of the quantum entanglement this in turn provides precise knowledge of the position y of its twin, photon 2. Does photon 2 experience a greater uncertainty in its momentum, i.e., a greater Δpy, due to the precise knowledge of its position y? This is the historical thought experiment of Sir Karl Popper which was aimed to undermine the Copenhagen interpretation in favor of a realistic viewpoint of quantum mechanics. Thispaper reports an experimental realization of the Popper's experiment. One may not agree with Popper's position on quantum mechanics; however, it calls for a correct understanding and interpretation of the experimental results.
SU-E-J-159: Analysis of Total Imaging Uncertainty in Respiratory-Gated Radiotherapy
International Nuclear Information System (INIS)
Suzuki, J; Okuda, T; Sakaino, S; Yokota, N
2015-01-01
Purpose: In respiratory-gated radiotherapy, the gating phase during treatment delivery needs to coincide with the corresponding phase determined during the treatment plan. However, because radiotherapy is performed based on the image obtained for the treatment plan, the time delay, motion artifact, volume effect, and resolution in the images are uncertain. Thus, imaging uncertainty is the most basic factor that affects the localization accuracy. Therefore, these uncertainties should be analyzed. This study aims to analyze the total imaging uncertainty in respiratory-gated radiotherapy. Methods: Two factors of imaging uncertainties related to respiratory-gated radiotherapy were analyzed. First, CT image was used to determine the target volume and 4D treatment planning for the Varian Realtime Position Management (RPM) system. Second, an X-ray image was acquired for image-guided radiotherapy (IGRT) for the BrainLAB ExacTrac system. These factors were measured using a respiratory gating phantom. The conditions applied during phantom operation were as follows: respiratory wave form, sine curve; respiratory cycle, 4 s; phantom target motion amplitude, 10, 20, and 29 mm (which is maximum phantom longitudinal motion). The target and cylindrical marker implanted in the phantom coverage of the CT images was measured and compared with the theoretically calculated coverage from the phantom motion. The theoretical position of the cylindrical marker implanted in the phantom was compared with that acquired from the X-ray image. The total imaging uncertainty was analyzed from these two factors. Results: In the CT image, the uncertainty between the target and cylindrical marker’s actual coverage and the coverage of CT images was 1.19 mm and 2.50mm, respectively. In the Xray image, the uncertainty was 0.39 mm. The total imaging uncertainty from the two factors was 1.62mm. Conclusion: The total imaging uncertainty in respiratory-gated radiotherapy was clinically acceptable. However
SU-E-J-159: Analysis of Total Imaging Uncertainty in Respiratory-Gated Radiotherapy
Energy Technology Data Exchange (ETDEWEB)
Suzuki, J; Okuda, T [Toyota memorial hospital, Toyota, Aichi (Japan); Sakaino, S; Yokota, N [Suzukake central hospital, Hamamatsu, Shizuoka (Japan)
2015-06-15
Purpose: In respiratory-gated radiotherapy, the gating phase during treatment delivery needs to coincide with the corresponding phase determined during the treatment plan. However, because radiotherapy is performed based on the image obtained for the treatment plan, the time delay, motion artifact, volume effect, and resolution in the images are uncertain. Thus, imaging uncertainty is the most basic factor that affects the localization accuracy. Therefore, these uncertainties should be analyzed. This study aims to analyze the total imaging uncertainty in respiratory-gated radiotherapy. Methods: Two factors of imaging uncertainties related to respiratory-gated radiotherapy were analyzed. First, CT image was used to determine the target volume and 4D treatment planning for the Varian Realtime Position Management (RPM) system. Second, an X-ray image was acquired for image-guided radiotherapy (IGRT) for the BrainLAB ExacTrac system. These factors were measured using a respiratory gating phantom. The conditions applied during phantom operation were as follows: respiratory wave form, sine curve; respiratory cycle, 4 s; phantom target motion amplitude, 10, 20, and 29 mm (which is maximum phantom longitudinal motion). The target and cylindrical marker implanted in the phantom coverage of the CT images was measured and compared with the theoretically calculated coverage from the phantom motion. The theoretical position of the cylindrical marker implanted in the phantom was compared with that acquired from the X-ray image. The total imaging uncertainty was analyzed from these two factors. Results: In the CT image, the uncertainty between the target and cylindrical marker’s actual coverage and the coverage of CT images was 1.19 mm and 2.50mm, respectively. In the Xray image, the uncertainty was 0.39 mm. The total imaging uncertainty from the two factors was 1.62mm. Conclusion: The total imaging uncertainty in respiratory-gated radiotherapy was clinically acceptable. However
Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty
Mather, Janice L.; Taylor, Shawn C.
2015-01-01
In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
Total sensitivity and uncertainty analysis for LWR pin-cells with improved UNICORN code
International Nuclear Information System (INIS)
Wan, Chenghui; Cao, Liangzhi; Wu, Hongchun; Shen, Wei
2017-01-01
Highlights: • A new model is established for the total sensitivity and uncertainty analysis. • The NR approximation applied in S&U analysis can be avoided by the new model. • Sensitivity and uncertainty analysis is performed to PWR pin-cells by the new model. • The effects of the NR approximation for the PWR pin-cells are quantified. - Abstract: In this paper, improvements to the multigroup cross-section perturbation model have been proposed and applied in the self-developed UNICORN code, which is capable of performing the total sensitivity and total uncertainty analysis for the neutron-physics calculations by applying the direct numerical perturbation method and the statistical sampling method respectively. The narrow resonance (NR) approximation was applied in the multigroup cross-section perturbation model, implemented in UNICORN. As improvements to the NR approximation to refine the multigroup cross-section perturbation model, an ultrafine-group cross-section perturbation model has been established, in which the actual perturbations are applied to the ultrafine-group cross-section library and the reconstructions of the resonance cross sections are performed by solving the neutron slowing-down equation. The total sensitivity and total uncertainty analysis were then applied to the LWR pin-cells, using both the multigroup and the ultrafine-group cross-section perturbation models. The numerical results show that the NR approximation overestimates the relative sensitivity coefficients and the corresponding uncertainty results for the LWR pin-cells, and the effects of the NR approximation are significant for σ_(_n_,_γ_) and σ_(_n_,_e_l_a_s_) of "2"3"8U. Therefore, the effects of the NR approximation applied in the total sensitivity and total uncertainty analysis for the neutron-physics calculations of LWR should be taken into account.
Quantification of tomographic PIV uncertainty using controlled experimental measurements.
Liu, Ning; Wu, Yue; Ma, Lin
2018-01-20
The goal of this work was to experimentally quantify the uncertainty of three-dimensional (3D) and three-component (3C) velocity measurements using tomographic particle image velocimetry (tomo-PIV). Controlled measurements were designed using tracer particles embedded in a solid sample, and tomo-PIV measurements were performed on the sample while it was moved both translationally and rotationally to simulate various known displacement fields, so the 3D3C displacements measured by tomo-PIV can be directly compared to the known displacements created by the sample. The results illustrated that (1) the tomo-PIV technique was able to reconstruct the 3D3C velocity with an averaged error of 0.8-1.4 voxels in terms of magnitude and 1.7°-1.9° in terms of orientation for the velocity fields tested; (2) view registration (VR) plays a significant role in tomo-PIV, and by reducing VR error from 0.6° to 0.1°, the 3D3C measurement accuracy can be improved by at least 2.5 times in terms of both magnitude and orientation; and (3) the use of additional cameras in tomo-PIV can extend the 3D3C velocity measurement to a larger volume, while maintaining acceptable accuracy. These results obtained from controlled tests are expected to aid the error analysis and the design of tomo-PIV measurements.
Total uncertainty of low velocity thermal anemometers for measurement of indoor air movements
DEFF Research Database (Denmark)
Jørgensen, F.; Popiolek, Z.; Melikov, Arsen Krikor
2004-01-01
For a specific thermal anemometer with omnidirectional velocity sensor the expanded total uncertainty in measured mean velocity Û(Vmean) and the expanded total uncertainty in measured turbulence intensity Û(Tu) due to different error sources are estimated. The values are based on a previously...... developed mathematical model of the anemometer in combination with a large database of representative room flows measured with a 3-D Laser Doppler anemometer (LDA). A direct comparison between measurements with a thermal anemometer and a 3-D LDA in flows of varying velocity and turbulence intensity shows...... good agreement not only between the two instruments but also between the thermal anemometer and its mathematical model. The differences in the measurements performed with the two instruments are all well within the measurement uncertainty of both anemometers....
Code development of total sensitivity and uncertainty analysis for reactor physics calculations
International Nuclear Information System (INIS)
Wan, C.; Cao, L.; Wu, H.; Zu, T.; Shen, W.
2015-01-01
Sensitivity and uncertainty analysis are essential parts for reactor system to perform risk and policy analysis. In this study, total sensitivity and corresponding uncertainty analysis for responses of neutronics calculations have been accomplished and developed the S&U analysis code named UNICORN. The UNICORN code can consider the implicit effects of multigroup cross sections on the responses. The UNICORN code has been applied to typical pin-cell case in this paper, and can be proved correct by comparison the results with those of the TSUNAMI-1D code. (author)
Code development of total sensitivity and uncertainty analysis for reactor physics calculations
Energy Technology Data Exchange (ETDEWEB)
Wan, C.; Cao, L.; Wu, H.; Zu, T., E-mail: chenghuiwan@stu.xjtu.edu.cn, E-mail: caolz@mail.xjtu.edu.cn, E-mail: hongchun@mail.xjtu.edu.cn, E-mail: tiejun@mail.xjtu.edu.cn [Xi' an Jiaotong Univ., School of Nuclear Science and Technology, Xi' an (China); Shen, W., E-mail: Wei.Shen@cnsc-ccsn.gc.ca [Xi' an Jiaotong Univ., School of Nuclear Science and Technology, Xi' an (China); Canadian Nuclear Safety Commission, Ottawa, ON (Canada)
2015-07-01
Sensitivity and uncertainty analysis are essential parts for reactor system to perform risk and policy analysis. In this study, total sensitivity and corresponding uncertainty analysis for responses of neutronics calculations have been accomplished and developed the S&U analysis code named UNICORN. The UNICORN code can consider the implicit effects of multigroup cross sections on the responses. The UNICORN code has been applied to typical pin-cell case in this paper, and can be proved correct by comparison the results with those of the TSUNAMI-1D code. (author)
Frey, H Christopher; Bammi, Sachin
2002-04-01
Variability refers to real differences in emissions among multiple emission sources at any given time or over time for any individual emission source. Variability in emissions can be attributed to variation in fuel or feedstock composition, ambient temperature, design, maintenance, or operation. Uncertainty refers to lack of knowledge regarding the true value of emissions. Sources of uncertainty include small sample sizes, bias or imprecision in measurements, nonrepresentativeness, or lack of data. Quantitative methods for characterizing both variability and uncertainty are demonstrated and applied to case studies of emission factors for lawn and garden (L&G) equipment engines. Variability was quantified using empirical and parametric distributions. Bootstrap simulation was used to characterize confidence intervals for the fitted distributions. The 95% confidence intervals for the mean grams per brake horsepower/hour (g/hp-hr) emission factors for two-stroke engine total hydrocarbon (THC) and NOx emissions were from -30 to +41% and from -45 to +75%, respectively. The confidence intervals for four-stroke engines were from -33 to +46% for THCs and from -27 to +35% for NOx. These quantitative measures of uncertainty convey information regarding the quality of the emission factors and serve as a basis for calculation of uncertainty in emission inventories (EIs).
Towards minimizing measurement uncertainty in total petroleum hydrocarbon determination by GC-FID
Energy Technology Data Exchange (ETDEWEB)
Saari, E.
2009-07-01
Despite tightened environmental legislation, spillages of petroleum products remain a serious problem worldwide. The environmental impacts of these spillages are always severe and reliable methods for the identification and quantitative determination of petroleum hydrocarbons in environmental samples are therefore needed. Great improvements in the definition and analysis of total petroleum hydrocarbons (TPH) were finally introduced by international organizations for standardization in 2004. This brought some coherence to the determination and, nowadays, most laboratories seem to employ ISO/DIS 16703:2004, ISO 9377-2:2000 and CEN prEN 14039:2004:E draft international standards for analysing TPH in soil. The implementation of these methods, however, usually fails because the reliability of petroleum hydrocarbon determination has proved to be poor.This thesis describes the assessment of measurement uncertainty for TPH determination in soil. Chemometric methods were used to both estimate the main uncertainty sources and identify the most significant factors affecting these uncertainty sources. The method used for the determinations was based on gas chromatography utilizing flame ionization detection (GC-FID).Chemometric methodology applied in estimating measurement uncertainty for TPH determination showed that the measurement uncertainty is in actual fact dominated by the analytical uncertainty. Within the specific concentration range studied, the analytical uncertainty accounted for as much as 68-80% of the measurement uncertainty. The robustness of the analytical method used for petroleum hydrocarbon determination was then studied in more detail. A two-level Plackett-Burman design and a D-optimal design were utilized to assess the main analytical uncertainty sources of the sample treatment and GC determination procedures. It was also found that the matrix-induced systematic error may also significantly reduce the reliability of petroleum hydrocarbon determination
International Nuclear Information System (INIS)
Keele, B.D.
2005-01-01
A collimated portable gamma-ray detector will be used to quantify the plutonium content of items that can be approximated as a point, line, or area geometry with respect to the detector. These items can include ducts, piping, glove boxes, isolated equipment inside of gloveboxes, and HEPA filters. The Generalized Geometry Holdup (GGH) model is used for the reduction of counting data. This document specifies the calculations to reduce counting data into contained plutonium and the associated total measurement uncertainty.
Collaborative framework for PIV uncertainty quantification: the experimental database
International Nuclear Information System (INIS)
Neal, Douglas R; Sciacchitano, Andrea; Scarano, Fulvio; Smith, Barton L
2015-01-01
The uncertainty quantification of particle image velocimetry (PIV) measurements has recently become a topic of great interest as shown by the recent appearance of several different methods within the past few years. These approaches have different working principles, merits and limitations, which have been speculated upon in subsequent studies. This paper reports a unique experiment that has been performed specifically to test the efficacy of PIV uncertainty methods. The case of a rectangular jet, as previously studied by Timmins et al (2012) and Wilson and Smith (2013b), is used. The novel aspect of the experiment is simultaneous velocity measurements using two different time-resolved PIV systems and a hot-wire anemometry (HWA) system. The first PIV system, called the PIV measurement system (‘PIV-MS’), is intended for nominal measurements of which the uncertainty is to be evaluated. It is based on a single camera and features a dynamic velocity range (DVR) representative of typical PIV experiments. The second PIV system, called the ‘PIV-HDR’ (high dynamic range) system, features a significantly higher DVR obtained with a higher digital imaging resolution. The hot-wire is placed in close proximity to the PIV measurement domain. The three measurement systems were carefully set to simultaneously measure the flow velocity at the same time and location. The comparison between the PIV-HDR system and the HWA provides an estimate of the measurement precision of the reference velocity for evaluation of the instantaneous error in the measurement system. The discrepancy between the PIV-MS and the reference data provides the measurement error, which is later used to assess the different uncertainty quantification methods proposed in the literature. A detailed comparison of the uncertainty estimation methods based on the present datasets is presented in a second paper from Sciacchitano et al (2015). Furthermore, this database offers the potential to be used for
Iso-uncertainty control in an experimental fluoroscopy system
International Nuclear Information System (INIS)
Siddique, S.; Fiume, E.; Jaffray, D. A.
2014-01-01
Purpose: X-ray fluoroscopy remains an important imaging modality in a number of image-guided procedures due to its real-time nature and excellent spatial detail. However, the radiation dose delivered raises concerns about its use particularly in lengthy treatment procedures (>0.5 h). The authors have previously presented an algorithm that employs feedback of geometric uncertainty to control dose while maintaining a desired targeting uncertainty during fluoroscopic tracking of fiducials. The method was tested using simulations of motion against controlled noise fields. In this paper, the authors embody the previously reported method in a physical prototype and present changes to the controller required to function in a practical setting. Methods: The metric for feedback used in this study is based on the trace of the covariance of the state of the system, tr(C). The state is defined here as the 2D location of a fiducial on a plane parallel to the detector. A relationship between this metric and the tube current is first developed empirically. This relationship is extended to create a manifold that incorporates a latent variable representing the estimated background attenuation. The manifold is then used within the controller to dynamically adjust the tube current and maintain a specified targeting uncertainty. To evaluate the performance of the proposed method, an acrylic sphere (1.6 mm in diameter) was tracked at tube currents ranging from 0.5 to 0.9 mA (0.033 s) at a fixed energy of 80 kVp. The images were acquired on a Varian Paxscan 4030A (2048 × 1536 pixels, ∼100 cm source-to-axis distance, ∼160 cm source-to-detector distance). The sphere was tracked using a particle filter under two background conditions: (1) uniform sheets of acrylic and (2) an acrylic wedge. The measured tr(C) was used in conjunction with a learned manifold to modulate the tube current in order to maintain a specified uncertainty as the sphere traversed regions of varying thickness
Directory of Open Access Journals (Sweden)
Fröhlich Claus
2016-01-01
Full Text Available Aims. The existing records of total solar irradiance (TSI since 1978 differ not only in absolute values, but also show different trends. For the study of TSI variability these records need to be combined and three composites have been devised; however, the results depend on the choice of the records and the way they are combined. A new composite should be based on all existing records with an individual qualification. It is proposed to use a time-dependent uncertainty for weighting of the individual records. Methods. The determination of the time-dependent deviation of the TSI records is performed by comparison with the square root of the sunspot number (SSN. However, this correlation is only valid for timescales of the order of a year or more because TSI and SSN react quite differently to solar activity changes on shorter timescales. Hence the results concern only periods longer than the one-year-low-pass filter used in the analysis. Results. Besides the main objective to determine an investigator-independent uncertainty, the comparison of TSI with √SSN turns out to be a powerful tool for the study of the TSI long-term changes. The correlation of √SSN with TSI replicates very well the TSI minima, especially the very low value of the recent minimum. The results of the uncertainty determination confirm not only the need for adequate corrections for degradation, but also show that a rather detailed analysis is needed. The daily average of all TSI values available on that day, weighted with the correspondingly determined uncertainty, is used to construct a “new” composite, which, overall, compares well with the Physikalisch-Meteorologisches Observatorium Davos (PMOD composite. Finally, the TSI − √SSN comparison proves to be an important diagnostic tool not only for estimating uncertainties of observations, but also for a better understanding of the long-term variability of TSI.
Impact of measurement uncertainty from experimental load distribution factors on bridge load rating
Gangone, Michael V.; Whelan, Matthew J.
2018-03-01
Load rating and testing of highway bridges is important in determining the capacity of the structure. Experimental load rating utilizes strain transducers placed at critical locations of the superstructure to measure normal strains. These strains are then used in computing diagnostic performance measures (neutral axis of bending, load distribution factor) and ultimately a load rating. However, it has been shown that experimentally obtained strain measurements contain uncertainties associated with the accuracy and precision of the sensor and sensing system. These uncertainties propagate through to the diagnostic indicators that in turn transmit into the load rating calculation. This paper will analyze the effect that measurement uncertainties have on the experimental load rating results of a 3 span multi-girder/stringer steel and concrete bridge. The focus of this paper will be limited to the uncertainty associated with the experimental distribution factor estimate. For the testing discussed, strain readings were gathered at the midspan of each span of both exterior girders and the center girder. Test vehicles of known weight were positioned at specified locations on each span to generate maximum strain response for each of the five girders. The strain uncertainties were used in conjunction with a propagation formula developed by the authors to determine the standard uncertainty in the distribution factor estimates. This distribution factor uncertainty is then introduced into the load rating computation to determine the possible range of the load rating. The results show the importance of understanding measurement uncertainty in experimental load testing.
Impatience and uncertainty : Experimental decisions predict adolescents' field behavior
Sutter, M.; Kocher, M.G.; Rützler, D.; Trautmann, S.T.
2013-01-01
We study risk attitudes, ambiguity attitudes, and time preferences of 661 children and adolescents, aged ten to eighteen years, in an incentivized experiment and relate experimental choices to field behavior. Experimental measures of impatience are found to be significant predictors of
Energy Technology Data Exchange (ETDEWEB)
Turinsky, Paul J [North Carolina State Univ., Raleigh, NC (United States); Abdel-Khalik, Hany S [North Carolina State Univ., Raleigh, NC (United States); Stover, Tracy E [North Carolina State Univ., Raleigh, NC (United States)
2011-03-01
An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment is assimilated to generate posteriori uncertainties on the reactor concept’s core attributes from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found to maximize the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, are being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in basic nuclear data which are input to the reactor design. These data uncertainty propagate to design responses which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore basic nuclear data needs to be improved and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desired to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself because with improved data the reactor concept can be re-optimized itself. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. The representativity of the experiment
Indian Academy of Sciences (India)
To reflect this uncertainty in the climate scenarios, the use of AOGCMs that explicitly simulate the carbon cycle and chemistry of all the substances are needed. The Hadley Centre has developed a version of the climate model that allows the effect of climate change on the carbon cycle and its feedback into climate, to be ...
International Nuclear Information System (INIS)
Silva, T.A. da
1988-01-01
The comparison between the uncertainty method recommended by International Atomic Energy Agency (IAEA) and the and the International Weight and Measure Commitee (CIPM) are showed, for the calibration of clinical dosimeters in the secondary standard Dosimetry Laboratory (SSDL). (C.G.C.) [pt
Experimental Test of Entropic Noise-Disturbance Uncertainty Relations for Spin-1/2 Measurements.
Sulyok, Georg; Sponar, Stephan; Demirel, Bülent; Buscemi, Francesco; Hall, Michael J W; Ozawa, Masanao; Hasegawa, Yuji
2015-07-17
Information-theoretic definitions for noise and disturbance in quantum measurements were given in [Phys. Rev. Lett. 112, 050401 (2014)] and a state-independent noise-disturbance uncertainty relation was obtained. Here, we derive a tight noise-disturbance uncertainty relation for complementary qubit observables and carry out an experimental test. Successive projective measurements on the neutron's spin-1/2 system, together with a correction procedure which reduces the disturbance, are performed. Our experimental results saturate the tight noise-disturbance uncertainty relation for qubits when an optimal correction procedure is applied.
International Nuclear Information System (INIS)
Amendola, A.; Astolfi, M.; Lisanti, B.
1983-01-01
The report describes the how-to-use of the codes: MUP (Monte Carlo Uncertainty Propagation) for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification and study of selected ranges of the variable space; CEC-DES (Central Composite Design) for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and, STRADE (Stratified Random Design) for experimental designs based on the Latin Hypercube Sampling Techniques. Application fields, of the codes are probabilistic risk assessment, experimental design, sensitivity analysis and system identification problems
Uncertainty Analysis of RBMK-Related Experimental Data
International Nuclear Information System (INIS)
Urbonas, Rolandas; Kaliatka, Algirdas; Liaukonis, Mindaugas
2002-01-01
An attempt to validate state-of-the-art thermal hydraulic code ATHLET (GRS, Germany) on the basis of E-108 test facility was made. Originally this code was developed and validated for different type reactors than RBMK. Since state-of-art thermal hydraulic codes are widely used for simulation of RBMK reactors, further codes' implementation and validation is required. The phenomena associated with channel type flow instabilities and CHF were found to be an important step in the frame of the overall effort of state-of-the-art validation and application for RBMK reactors. In the paper one-channel approach analysis is presented. Thus, the oscillatory behaviour of the system was not detected. The results show dependence on the nodalization used in the heated channels, initial and boundary conditions and code selected models. It is shown that the code is able to predict a sudden heat structure temperature excursion, when critical heat flux is approached. GRS developed uncertainty and sensitivity methodology was employed in the analysis. (authors)
Pulsed total dose damage effect experimental study on EPROM
International Nuclear Information System (INIS)
Luo Yinhong; Yao Zhibin; Zhang Fengqi; Guo Hongxia; Zhang Keying; Wang Yuanming; He Baoping
2011-01-01
Nowadays, memory radiation effect study mainly focus on functionality measurement. Measurable parameters is few in china. According to the present situation, threshold voltage testing method was presented on floating gate EPROM memory. Experimental study of pulsed total dose effect on EPROM threshold voltage was carried out. Damage mechanism was analysed The experiment results showed that memory cell threshold voltage negative shift was caused by pulsed total dose, memory cell threshold voltage shift is basically coincident under steady bias supply and no bias supply. (authors)
International Nuclear Information System (INIS)
Santana, L V; Sarkis, J E S; Ulrich, J C; Hortellani, M A
2015-01-01
We provide an uncertainty estimates for homogeneity and stability studies of reference material used in proficiency test for determination of total mercury in fish fresh muscle tissue. Stability was estimated by linear regression and homogeneity by ANOVA. The results indicate that the reference material is both homogeneous and chemically stable over the short term. Total mercury concentration of the muscle tissue, with expanded uncertainty, was 0.294 ± 0.089 μg g −1
An experimental comparison of triggered and random pulse train uncertainties
International Nuclear Information System (INIS)
Henzlova, Daniela; Menlove, Howard O.; Swinhoe, Martyn T.
2010-01-01
In this paper we present an experimental comparison of signal-triggered and randomly triggered based analysis algorithms of neutron multiplicity data. Traditional shift register type signal-triggered multiplicity analysis of singles, doubles and triples rates is compared with analysis using randomly triggered gates. Two methods of random gate generation are explored - non-overlapping gates (Feyrunan approach) and periodic overlapping gates (fast accidentals). Using californium sources with low, medium and high rate in combination with AmLi sources (as a surrogate for plutonium) we investigate relative standard deviation (RSD) of data in order to determine if there are parameter spaces in which one of the measurement methods should be preferred. Neutron correlation analysis is a commonly used NDA technique to assay plutonium mass. The data can be collected in two distinct ways: using signal-triggered or randomly triggered counting gates. Analysis algorithms were developed for both approaches to determine singles (S), doubles (D) and triples (7) rates from the measured sample. Currently the most commonly implemented technique to collect neutron coincidence data utilizes shift register based electronics. Shift register uses signal-triggered counting gates to generate foreground multiplicity distribution of correlated+accidental events and a random gate (opened after a predefined long delay following the signal trigger) to generate background multiplicity distribution of accidental events. Modern shift registers include fast accidental option to sample data with a fixed clock frequency. This way a set of overlapping gates is used to generate background multiplicity distributions in order to improve the measurement precision. In parallel to shift register approach the Feynman variance technique is frequently used, which utilizes set of consecutive non-overlapping gates. In general, different user communities (e.g. safeguards, nuclear material accountancy, emergency
International Nuclear Information System (INIS)
Kalinich, D. A.; Wilson, M. L.
2001-01-01
Seepage into the repository drifts is an important factor in total-system performance. Uncertainty and spatial variability are considered in the seepage calculations. The base-case results show 13.6% of the waste packages (WPs) have seepage. For 5th percentile uncertainty, 4.5% of the WPs have seepage and the seepage flow decreased by a factor of 2. For 95th percentile uncertainty, 21.5% of the WPs have seepage and the seepage flow increased by a factor of 2. Ignoring spatial variability resulted in seepage on 100% of the WPs, with a factor of 3 increase in the seepage flow
Treatment of experimental myasthenia gravis with total lymphoid irradiation
International Nuclear Information System (INIS)
de Silva, S.; Blum, J.E.; McIntosh, K.R.; Order, S.; Drachman, D.B.
1988-01-01
Total lymphoid irradiation (TLI) has been reported to be effective in the immunosuppressive treatment of certain human and experimental autoimmune disorders. We have investigated the effects of TLI in Lewis rats with experimental autoimmune myasthenia gravis (EAMG) produced by immunization with purified torpedo acetylcholine receptor (AChR). The radiation is given in 17 divided fractions of 200 rad each, and nonlymphoid tissues are protected by lead shielding. This technique suppresses the immune system, while minimizing side effects, and permits the repopulation of the immune system by the patient's own bone marrow cells. Our results show that TLI treatment completely prevented the primary antibody response to immunization with torpedo AChR, it rapidly abolished the ongoing antibody response in established EAMG, and it suppressed the secondary (anamnestic) response to a boost of AChR. No EAMG animals died during TLI treatment, compared with six control animals that died of EAMG. TLI produces powerful and prompt immunosuppression and may eventually prove useful in the treatment of refractory human myasthenia gravis
Treatment of experimental myasthenia gravis with total lymphoid irradiation
Energy Technology Data Exchange (ETDEWEB)
de Silva, S.; Blum, J.E.; McIntosh, K.R.; Order, S.; Drachman, D.B.
1988-07-01
Total lymphoid irradiation (TLI) has been reported to be effective in the immunosuppressive treatment of certain human and experimental autoimmune disorders. We have investigated the effects of TLI in Lewis rats with experimental autoimmune myasthenia gravis (EAMG) produced by immunization with purified torpedo acetylcholine receptor (AChR). The radiation is given in 17 divided fractions of 200 rad each, and nonlymphoid tissues are protected by lead shielding. This technique suppresses the immune system, while minimizing side effects, and permits the repopulation of the immune system by the patient's own bone marrow cells. Our results show that TLI treatment completely prevented the primary antibody response to immunization with torpedo AChR, it rapidly abolished the ongoing antibody response in established EAMG, and it suppressed the secondary (anamnestic) response to a boost of AChR. No EAMG animals died during TLI treatment, compared with six control animals that died of EAMG. TLI produces powerful and prompt immunosuppression and may eventually prove useful in the treatment of refractory human myasthenia gravis.
Pun, Betty Kong-Ling
1998-12-01
Uncertainty is endemic in modeling. This thesis is a two- phase program to understand the uncertainties in urban air pollution model predictions and in field data used to validate them. Part I demonstrates how to improve atmospheric models by analyzing the uncertainties in these models and using the results to guide new experimentation endeavors. Part II presents an experiment designed to characterize atmospheric fluctuations, which have significant implications towards the model validation process. A systematic study was undertaken to investigate the effects of uncertainties in the SAPRC mechanism for gas- phase chemistry in polluted atmospheres. The uncertainties of more than 500 parameters were compiled, including reaction rate constants, product coefficients, organic composition, and initial conditions. Uncertainty propagation using the Deterministic Equivalent Modeling Method (DEMM) revealed that the uncertainties in ozone predictions can be up to 45% based on these parametric uncertainties. The key parameters found to dominate the uncertainties of the predictions include photolysis rates of NO2, O3, and formaldehyde; the rate constant for nitric acid formation; and initial amounts of NOx and VOC. Similar uncertainty analysis procedures applied to two other mechanisms used in regional air quality models led to the conclusion that in the presence of parametric uncertainties, the mechanisms cannot be discriminated. Research efforts should focus on reducing parametric uncertainties in photolysis rates, reaction rate constants, and source terms. A new tunable diode laser (TDL) infrared spectrometer was designed and constructed to measure multiple pollutants simultaneously in the same ambient air parcels. The sensitivities of the one hertz measurements were 2 ppb for ozone, 1 ppb for NO, and 0.5 ppb for NO2. Meteorological data were also collected for wind, temperature, and UV intensity. The field data showed clear correlations between ozone, NO, and NO2 in the one
Witte, Jacquelyn C.; Thompson, Anne M.; Smit, Herman G. J.; Vömel, Holger; Posny, Françoise; Stübi, Rene
2018-03-01
Reprocessed ozonesonde data from eight SHADOZ (Southern Hemisphere ADditional OZonesondes) sites have been used to derive the first analysis of uncertainty estimates for both profile and total column ozone (TCO). The ozone uncertainty is a composite of the uncertainties of the individual terms in the ozone partial pressure (PO3) equation, those being the ozone sensor current, background current, internal pump temperature, pump efficiency factors, conversion efficiency, and flow rate. Overall, PO3 uncertainties (ΔPO3) are within 15% and peak around the tropopause (15 ± 3 km) where ozone is a minimum and ΔPO3 approaches the measured signal. The uncertainty in the background and sensor currents dominates the overall ΔPO3 in the troposphere including the tropopause region, while the uncertainties in the conversion efficiency and flow rate dominate in the stratosphere. Seasonally, ΔPO3 is generally a maximum in the March-May, with the exception of SHADOZ sites in Asia, for which the highest ΔPO3 occurs in September-February. As a first approach, we calculate sonde TCO uncertainty (ΔTCO) by integrating the profile ΔPO3 and adding the ozone residual uncertainty, derived from the McPeters and Labow (2012, doi:10.1029/2011JD017006) 1σ ozone mixing ratios. Overall, ΔTCO are within ±15 Dobson units (DU), representing 5-6% of the TCO. Total Ozone Mapping Spectrometer and Ozone Monitoring Instrument (TOMS and OMI) satellite overpasses are generally within the sonde ΔTCO. However, there is a discontinuity between TOMS v8.6 (1998 to September 2004) and OMI (October 2004-2016) TCO on the order of 10 DU that accounts for the significant 16 DU overall difference observed between sonde and TOMS. By comparison, the sonde-OMI absolute difference for the eight stations is only 4 DU.
A comparative experimental evaluation of uncertainty estimation methods for two-component PIV
Boomsma, Aaron; Bhattacharya, Sayantan; Troolin, Dan; Pothos, Stamatios; Vlachos, Pavlos
2016-09-01
Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have been recently introduced generating interest about their applicability and utility. The present study compares and contrasts current methods, across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods, primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high and low-resolution measurements and a laser doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that qualitatively, each method responded to spatially varying error (i.e. higher error regions resulted in higher uncertainty predictions in that region). However, the PPR and MI methods demonstrated reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges. The standard coverages (68% confidence interval) ranged from
A comparative experimental evaluation of uncertainty estimation methods for two-component PIV
International Nuclear Information System (INIS)
Boomsma, Aaron; Troolin, Dan; Pothos, Stamatios; Bhattacharya, Sayantan; Vlachos, Pavlos
2016-01-01
Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have been recently introduced generating interest about their applicability and utility. The present study compares and contrasts current methods, across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods, primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high and low-resolution measurements and a laser doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that qualitatively, each method responded to spatially varying error (i.e. higher error regions resulted in higher uncertainty predictions in that region). However, the PPR and MI methods demonstrated reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges. The standard coverages (68% confidence interval) ranged from
International Nuclear Information System (INIS)
Jin Danqing; Andrec, Michael; Montelione, Gaetano T.; Levy, Ronald M.
1998-01-01
In this paper we make use of the graphical procedure previously described [Jin, D. et al. (1997) J. Am. Chem. Soc., 119, 6923-6924] to analyze NMR relaxation data using the Lipari-Szabo model-free formalism. The graphical approach is advantageous in that it allows the direct visualization of the experimental uncertainties in the motional parameter space. Some general 'rules' describing the relationship between the precision of the relaxation measurements and the precision of the model-free parameters and how this relationship changes with the overall tumbling time (τm) are summarized. The effect of the precision in the relaxation measurements on the detection of internal motions not close to the extreme narrowing limit is analyzed. We also show that multiple timescale internal motions may be obscured by experimental uncertainty, and that the collection of relaxation data at very high field strength can improve the ability to detect such deviations from the simple Lipari-Szabo model
International Nuclear Information System (INIS)
WILLS, C.E.
1999-01-01
This report examines the contributing factors to NDA measurement uncertainty at WRAP. The significance of each factor on the TMU is analyzed, and a final method is given for determining the TMU for NDA measurements at WRAP. As more data becomes available, and WRAP gains in operational experience, this report will be reviewed semi-annually and updated as necessary
Study of Monte Carlo approach to experimental uncertainty propagation with MSTW 2008 PDFs
Watt, G.
2012-01-01
We investigate the Monte Carlo approach to propagation of experimental uncertainties within the context of the established 'MSTW 2008' global analysis of parton distribution functions (PDFs) of the proton at next-to-leading order in the strong coupling. We show that the Monte Carlo approach using replicas of the original data gives PDF uncertainties in good agreement with the usual Hessian approach using the standard Delta(chi^2) = 1 criterion, then we explore potential parameterisation bias by increasing the number of free parameters, concluding that any parameterisation bias is likely to be small, with the exception of the valence-quark distributions at low momentum fractions x. We motivate the need for a larger tolerance, Delta(chi^2) > 1, by making fits to restricted data sets and idealised consistent or inconsistent pseudodata. Instead of using data replicas, we alternatively produce PDF sets randomly distributed according to the covariance matrix of fit parameters including appropriate tolerance values,...
Deng, Yue
2014-01-01
Describes solar energy inputs contributing to ionospheric and thermospheric weather processes, including total energy amounts, distributions and the correlation between particle precipitation and Poynting flux.
Directory of Open Access Journals (Sweden)
Robson L. Franklin
2012-01-01
Full Text Available The purpose of the present study was to validate a method for organic Hg determination in sediment. The procedure for organic Hg was adapted from literature, where the organomercurial compounds were extracted with dichloromethane in acid medium and subsequent destruction of organic compounds by bromine chloride. Total Hg was performed according to 3051A USEPA methodology. Mercury quantification for both methodologies was then performed by CVAAS. Methodology validation was verified by analyzing certified reference materials for total Hg and methylmercury. The uncertainties for both methodologies were calculated. The quantification limit of 3.3 µg kg-1 was found for organic Hg by CVAAS.
DEFF Research Database (Denmark)
Luczak, Marcin; Peeters, Bart; Kahsin, Maciej
2014-01-01
for uncertainty evaluation in experimentally estimated models. Investigated structures are plates, fuselage panels and helicopter main rotor blades as they represent different complexity levels ranging from coupon, through sub-component up to fully assembled structures made of composite materials. To evaluate......Aerospace and wind energy structures are extensively using components made of composite materials. Since these structures are subjected to dynamic environments with time-varying loading conditions, it is important to model their dynamic behavior and validate these models by means of vibration...
Serum total proteins and creatinine levels in experimental gambian ...
African Journals Online (AJOL)
Attempt was therefore made to evaluate the effect of two strains of Trypanosoma brucei gambiense on total proteins and other serum biochemical parameters using vervet monkeys as a model. The outcome of both strains in vervet monkeys was traumatic as the monkeys died from infection 12 – 15 weeks post infection while ...
Energy Technology Data Exchange (ETDEWEB)
Cruz, D.F. da; Rochman, D.; Koning, A.J. [Nuclear Research and Consultancy Group NRG, Petten (Netherlands)
2014-07-01
The Total Monte-Carlo (TMC) method has been applied extensively since 2008 to propagate the uncertainties in nuclear data for reactor parameters and fuel inventory, and for several types of advanced nuclear systems. The analyses have been performed considering different levels of complexity, ranging from a single fuel rod to a full 3-D reactor core at steady-state. The current work applies the TMC method for a full 3-D pressurized water reactor core model under steady-state and transient conditions, considering thermal-hydraulic feedback. As a transient scenario the study focused on a reactivity-initiated accident, namely a control rod ejection accident initiated by a mechanical failure of the control rod drive mechanism. The uncertainties on the main reactor parameters due to variations in nuclear data for the isotopes {sup 235},{sup 238}U, {sup 239}Pu and thermal scattering data for {sup 1}H in water were quantified. (author)
Haffner, D. P.; Bhartia, P. K.; Li, J. Y.
2012-12-01
With the launch of the BUV instrument on NASA's Nimbus-4 satellite in April 1970, ozone became one of the first atmospheric variables to be measured from space with high accuracy. By 1980, the quality of total column ozone measured from the TOMS instrument on the Nimbus-7 satellite had improved to the point that it started to be used to identify poorly calibrated instruments in the venerable Dobson ground-based network. Now we have a total ozone record spanning 42 years created by more than a dozen instruments. We will discuss the issues and challenges that we have faced in creating a consistent long-term record and in providing uncertainty estimates. This work is not yet finished. We are currently developing a new algorithm (Version 9) that will be used to reprocess the entire record. The main motivation for developing this algorithm is not so much to improve the quality of the data, which is quite high already, but to provide better estimates of uncertainties when errors are spatially and temporally correlated, and to develop better techniques to catch "Black Swan" events (BSE). These are events that occur infrequently but cause errors larger than expected by Gaussian probability distribution. For example, the eruption of El Chichón revealed that our ozone algorithm had unexpected sensitivity to volcanic SO2, and evidence of the ozone hole was initially interpreted as a problem with the TOMS instrument. We also provide mathematical operators that can be applied by sophisticated users to compute their own uncertainties for their particular applications. This is necessary because uncertainties change in complex ways when the data are smoothed or averaged. The modern data archival system should be designed to accommodate such operators and provide software for using them.
Experimental data bases useful for quantification of model uncertainties in best estimate codes
International Nuclear Information System (INIS)
Wilson, G.E.; Katsma, K.R.; Jacobson, J.L.; Boodry, K.S.
1988-01-01
A data base is necessary for assessment of thermal hydraulic codes within the context of the new NRC ECCS Rule. Separate effect tests examine particular phenomena that may be used to develop and/or verify models and constitutive relationships in the code. Integral tests are used to demonstrate the capability of codes to model global characteristics and sequence of events for real or hypothetical transients. The nuclear industry has developed a large experimental data base of fundamental nuclear, thermal-hydraulic phenomena for code validation. Given a particular scenario, and recognizing the scenario's important phenomena, selected information from this data base may be used to demonstrate applicability of a particular code to simulate the scenario and to determine code model uncertainties. LBLOCA experimental data bases useful to this objective are identified in this paper. 2 tabs
Directory of Open Access Journals (Sweden)
Douglas Domingues Bueno
2008-01-01
Full Text Available This paper deals with the study of algorithms for robust active vibration control in flexible structures considering uncertainties in system parameters. It became an area of enormous interest, mainly due to the countless demands of optimal performance in mechanical systems as aircraft, aerospace, and automotive structures. An important and difficult problem for designing active vibration control is to get a representative dynamic model. Generally, this model can be obtained using finite element method (FEM or an identification method using experimental data. Actuators and sensors may affect the dynamics properties of the structure, for instance, electromechanical coupling of piezoelectric material must be considered in FEM formulation for flexible and lightly damping structure. The nonlinearities and uncertainties involved in these structures make it a difficult task, mainly for complex structures as spatial truss structures. On the other hand, by using an identification method, it is possible to obtain the dynamic model represented through a state space realization considering this coupling. This paper proposes an experimental methodology for vibration control in a 3D truss structure using PZT wafer stacks and a robust control algorithm solved by linear matrix inequalities.
Experimental study on total dissolved gas supersaturation in water
Directory of Open Access Journals (Sweden)
Lu Qu
2011-12-01
Full Text Available More and more high dams have been constructed and operated in China. The total dissolved gas (TDG supersaturation caused by dam discharge leads to gas bubble disease or even death of fish. Through a series of experiments, the conditions and requirements of supersaturated TDG generation were examined in this study. The results show that pressure (water depth, aeration, and bubble dissolution time are required for supersaturated TDG generation, and the air-water contact area and turbulence intensity are the main factors that affect the generation rate of supersaturated TDG. The TDG supersaturation levels can be reduced by discharging water to shallow shoals downstream of the dam or using negative pressure pipelines. Furthermore, the TDG supersaturation levels in stilling basins have no direct relationship with those in reservoirs. These results are of great importance for further research on the prediction of supersaturated TDG generation caused by dam discharge and aquatic protection.
Nugraha, W. C.; Elishian, C.; Ketrin, R.
2017-03-01
Fish containing arsenic compound is one of the important indicators of arsenic contamination in water monitoring. The high level of arsenic in fish is due to absorption through food chain and accumulated in their habitat. Hydride generation (HG) coupled with atomic absorption spectrometric (AAS) detection is one of the most popular techniques employed for arsenic determination in a variety of matrices including fish. This study aimed to develop a method for the determination of total arsenic in fish by HG-AAS. The method for sample preparation from American of Analytical Chemistry (AOAC) Method 999.10-2005 was adopted for acid digestion using microwave digestion system and AOAC Method 986.15 - 2005 for dry ashing. The method was developed and validated using Certified Reference Material DORM 3 Fish Protein for trace metals for ensuring the accuracy and the traceability of the results. The sources of uncertainty of the method were also evaluated. By using the method, it was found that the total arsenic concentration in the fish was 45.6 ± 1.22 mg.Kg-1 with a coverage factor of equal to 2 at 95% of confidence level. Evaluation of uncertainty was highly influenced by the calibration curve. This result was also traceable to International Standard System through analysis of Certified Reference Material DORM 3 with 97.5% of recovery. In summary, it showed that method of preparation and HG-AAS technique for total arsenic determination in fish were valid and reliable.
Pu239 Cross-Section Variations Based on Experimental Uncertainties and Covariances
Energy Technology Data Exchange (ETDEWEB)
Sigeti, David Edward [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, D. Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-10-18
Algorithms and software have been developed for producing variations in plutonium-239 neutron cross sections based on experimental uncertainties and covariances. The varied cross-section sets may be produced as random samples from the multi-variate normal distribution defined by an experimental mean vector and covariance matrix, or they may be produced as Latin-Hypercube/Orthogonal-Array samples (based on the same means and covariances) for use in parametrized studies. The variations obey two classes of constraints that are obligatory for cross-section sets and which put related constraints on the mean vector and covariance matrix that detemine the sampling. Because the experimental means and covariances do not obey some of these constraints to sufficient precision, imposing the constraints requires modifying the experimental mean vector and covariance matrix. Modification is done with an algorithm based on linear algebra that minimizes changes to the means and covariances while insuring that the operations that impose the different constraints do not conflict with each other.
Experimental Research Examining how People can Cope with Uncertainty through Soft Haptic Sensations
Van Horen, F.; Mussweiler, T.
2015-01-01
Human beings are constantly surrounded by uncertainty and change. The question arises how people cope with such uncertainty. To date, most research has focused on the cognitive strategies people adopt to deal with uncertainty. However, especially when uncertainty is due to unpredictable societal
Elsworth, D.
2013-12-01
Significant uncertainties remain and influence the recovery of energy from the subsurface. These uncertainties include the fate and transport of long-lived radioactive wastes that result from the generation of nuclear power and have been the focus of an active network of international underground research laboratories dating back at least 35 years. However, other nascent carbon-free energy technologies including conventional and EGS geothermal methods, carbon-neutral methods such as carbon capture and sequestration and the utilization of reduced-carbon resources such as unconventional gas reservoirs offer significant challenges in their effective deployment. We illustrate the important role that in situ experiments may play in resolving behaviors at extended length- and time-scales for issues related to chemical-mechanical interactions. Significantly, these include the evolution of transport and mechanical characteristics of stress-sensitive fractured media and their influence of the long-term behavior of the system. Importantly, these interests typically relate to either creating reservoirs (hydroshearing in EGS reservoirs, artificial fractures in shales and coals) or maintaining seals at depth where the permeating fluids may include mixed brines, CO2, methane and other hydrocarbons. Critical questions relate to the interaction of these various fluid mixtures and compositions with the fractured substrate. Important needs are in understanding the roles of key processes (transmission, dissolution, precipitation, sorption and dynamic stressing) on the modification of effective stresses and their influence on the evolution of permeability, strength and induced seismicity on the resulting development of either wanted or unwanted fluid pathways. In situ experimentation has already contributed to addressing some crucial issues of these complex interactions at field scale. Important contributions are noted in understanding the fate and transport of long-lived wastes
Evaluating uncertainty in 7Be-based soil erosion estimates: an experimental plot approach
Blake, Will; Taylor, Alex; Abdelli, Wahid; Gaspar, Leticia; Barri, Bashar Al; Ryken, Nick; Mabit, Lionel
2014-05-01
Soil erosion remains a major concern for the international community and there is a growing need to improve the sustainability of agriculture to support future food security. High resolution soil erosion data are a fundamental requirement for underpinning soil conservation and management strategies but representative data on soil erosion rates are difficult to achieve by conventional means without interfering with farming practice and hence compromising the representativeness of results. Fallout radionuclide (FRN) tracer technology offers a solution since FRN tracers are delivered to the soil surface by natural processes and, where irreversible binding can be demonstrated, redistributed in association with soil particles. While much work has demonstrated the potential of short-lived 7Be (half-life 53 days), particularly in quantification of short-term inter-rill erosion, less attention has focussed on sources of uncertainty in derived erosion measurements and sampling strategies to minimise these. This poster outlines and discusses potential sources of uncertainty in 7Be-based soil erosion estimates and the experimental design considerations taken to quantify these in the context of a plot-scale validation experiment. Traditionally, gamma counting statistics have been the main element of uncertainty propagated and reported but recent work has shown that other factors may be more important such as: (i) spatial variability in the relaxation mass depth that describes the shape of the 7Be depth distribution for an uneroded point; (ii) spatial variability in fallout (linked to rainfall patterns and shadowing) over both reference site and plot; (iii) particle size sorting effects; (iv) preferential mobility of fallout over active runoff contributing areas. To explore these aspects in more detail, a plot of 4 x 35 m was ploughed and tilled to create a bare, sloped soil surface at the beginning of winter 2013/2014 in southwest UK. The lower edge of the plot was bounded by
International Nuclear Information System (INIS)
Attivissimo, F; Giaquinto, N; Savino, M; Cataldo, A
2012-01-01
This paper deals with the assessment of the uncertainty due to systematic errors, particularly in A/D conversion-based instruments. The problem of defining and assessing systematic errors is briefly discussed, and the conceptual scheme of gauge repeatability and reproducibility is adopted. A practical example regarding the evaluation of the uncertainty caused by the systematic offset error is presented. The experimental results, obtained under various ambient conditions, show that modelling the variability of systematic errors is more problematic than suggested by the ISO 5725 norm. Additionally, the paper demonstrates the substantial difference between the type B uncertainty evaluation, obtained via the maximum entropy principle applied to manufacturer's specifications, and the type A (experimental) uncertainty evaluation, which reflects actually observable reality. Although it is reasonable to assume a uniform distribution of the offset error, experiments demonstrate that the distribution is not centred and that a correction must be applied. In such a context, this work motivates a more pragmatic and experimental approach to uncertainty, with respect to the directions of supplement 1 of GUM. (paper)
International Nuclear Information System (INIS)
Lassahn, G.D.; Taylor, D.J.N.
1982-08-01
Analyses of uncertainty components inherent in pulsed-neutron-activation (PNA) measurements in general and the Loss-of-Fluid-Test (LOFT) system in particular are given. Due to the LOFT system's unique conditions, previously-used techniques were modified to make the volocity measurement. These methods render a useful, cost-effective measurement with an estimated uncertainty of 11% of reading
The Total Cross Section at the LHC: Models and Experimental Consequences
Cudell, J R
2010-01-01
I review the predictions of the total cross section for many models, and point out that some of them lead to the conclusion that the standard experimental analysis may lead to systematic errors much larger than expected.
Uncertainty and sensitivity analysis on probabilistic safety assessment of an experimental facility
International Nuclear Information System (INIS)
Burgazzi, L.
2000-01-01
The aim of this work is to perform an uncertainty and sensitivity analysis on the probabilistic safety assessment of the International Fusion Materials Irradiation Facility (IFMIF), in order to assess the effect on the final risk values of the uncertainties associated with the generic data used for the initiating events and component reliability and to identify the key quantities contributing to this uncertainty. The analysis is conducted on the expected frequency calculated for the accident sequences, defined through the event tree (ET) modeling. This is in order to increment credit to the ET model quantification, to calculate frequency distributions for the occurrence of events and, consequently, to assess if sequences have been correctly selected on the probability standpoint and finally to verify the fulfillment of the safety conditions. Uncertainty and sensitivity analysis are performed using respectively Monte Carlo sampling and an importance parameter technique. (author)
Javed, A.; Kamphues, E.; Hartuc, T.; Pecnik, R.; Van Buijtenen, J.P.
2015-01-01
The compressor impellers for mass-produced turbochargers are generally die-casted and machined to their final configuration. Manufacturing uncertainties are inherently introduced as stochastic dimensional deviations in the impeller geometry. These deviations eventually propagate into the compressor
International Nuclear Information System (INIS)
Monni, S.; Syri, S.; Pipatti, R.; Savolainen, I.
2007-01-01
Emissions trading in the European Union (EU), covering the least uncertain emission sources of greenhouse gas emission inventories (CO 2 from combustion and selected industrial processes in large installations), began in 2005. During the first commitment period of the Kyoto Protocol (2008-2012), the emissions trading between Parties to the Protocol will cover all greenhouse gases (CO 2 , CH 4 , N 2 O, HFCs, PFCs, and SF 6 ) and sectors (energy, industry, agriculture, waste, and selected land-use activities) included in the Protocol. In this paper, we estimate the uncertainties in different emissions trading schemes based on uncertainties in corresponding inventories. According to the results, uncertainty in emissions from the EU15 and the EU25 included in the first phase of the EU emissions trading scheme (2005-2007) is ±3% (at 95% confidence interval relative to the mean value). If the trading were extended to CH 4 and N 2 O, in addition to CO 2 , but no new emissions sectors were included, the tradable amount of emissions would increase by only 2% and the uncertainty in the emissions would range from -4 to +8%. Finally, uncertainty in emissions included in emissions trading under the Kyoto Protocol was estimated to vary from -6 to +21%. Inclusion of removals from forest-related activities under the Kyoto Protocol did not notably affect uncertainty, as the volume of these removals is estimated to be small
A real-time assessment of measurement uncertainty in the experimental characterization of sprays
International Nuclear Information System (INIS)
Panão, M R O; Moreira, A L N
2008-01-01
This work addresses the estimation of the measurement uncertainty of discrete probability distributions used in the characterization of sprays. A real-time assessment of this measurement uncertainty is further investigated, particularly concerning the informative quality of the measured distribution and the influence of acquiring additional information on the knowledge retrieved from statistical analysis. The informative quality is associated with the entropy concept as understood in information theory (Shannon entropy), normalized by the entropy of the most informative experiment. A new empirical correlation is derived between the error accuracy of a discrete cumulative probability distribution and the normalized Shannon entropy. The results include case studies using: (i) spray impingement measurements to study the applicability of the real-time assessment of measurement uncertainty, and (ii) the simulation of discrete probability distributions of unknown shape or function to test the applicability of the new correlation
International Nuclear Information System (INIS)
Chilenski, M.A.; Greenwald, M.; Howard, N.T.; White, A.E.; Rice, J.E.; Walk, J.R.; Marzouk, Y.
2015-01-01
The need to fit smooth temperature and density profiles to discrete observations is ubiquitous in plasma physics, but the prevailing techniques for this have many shortcomings that cast doubt on the statistical validity of the results. This issue is amplified in the context of validation of gyrokinetic transport models (Holland et al 2009 Phys. Plasmas 16 052301), where the strong sensitivity of the code outputs to input gradients means that inadequacies in the profile fitting technique can easily lead to an incorrect assessment of the degree of agreement with experimental measurements. In order to rectify the shortcomings of standard approaches to profile fitting, we have applied Gaussian process regression (GPR), a powerful non-parametric regression technique, to analyse an Alcator C-Mod L-mode discharge used for past gyrokinetic validation work (Howard et al 2012 Nucl. Fusion 52 063002). We show that the GPR techniques can reproduce the previous results while delivering more statistically rigorous fits and uncertainty estimates for both the value and the gradient of plasma profiles with an improved level of automation. We also discuss how the use of GPR can allow for dramatic increases in the rate of convergence of uncertainty propagation for any code that takes experimental profiles as inputs. The new GPR techniques for profile fitting and uncertainty propagation are quite useful and general, and we describe the steps to implementation in detail in this paper. These techniques have the potential to substantially improve the quality of uncertainty estimates on profile fits and the rate of convergence of uncertainty propagation, making them of great interest for wider use in fusion experiments and modelling efforts. (paper)
A linear programming approach to characterizing norm bounded uncertainty from experimental data
Scheid, R. E.; Bayard, D. S.; Yam, Y.
1991-01-01
The linear programming spectral overbounding and factorization (LPSOF) algorithm, an algorithm for finding a minimum phase transfer function of specified order whose magnitude tightly overbounds a specified nonparametric function of frequency, is introduced. This method has direct application to transforming nonparametric uncertainty bounds (available from system identification experiments) into parametric representations required for modern robust control design software (i.e., a minimum-phase transfer function multiplied by a norm-bounded perturbation).
Effects of uncertainties of experimental data in the benchmarking of a computer code
International Nuclear Information System (INIS)
Meulemeester, E. de; Bouffioux, P.; Demeester, J.
1980-01-01
Fuel rod performance modelling is sometimes taken in an academical way. The experience of the COMETHE code development since 1967 has clearly shown that benchmarking was the most important part of modelling development. Unfortunately, it requires well characterized data. Although, the two examples presented here were not intended for benchmarking, as the COMETHE calculations were only performed for an interpretation of the results, they illustrate the effects of a lack of fuel characterization and of the power history uncertainties
An experimental study on total dose effects in SRAM-based FPGAs
International Nuclear Information System (INIS)
Yao Zhibin; He Baoping; Zhang Fengqi; Guo Hongxia; Luo Yinhong; Wang Yuanming; Zhang Keying
2009-01-01
In order to study testing methods and find sensitive parameters in total dose effects on SRAM-based FPGA, XC2S100 chips were irradiated by 60 Co γ-rays and tested with two test circuit designs. By analyzing the experimental results, the test flow of configuration RAM and bock RAM was given, and the most sensitive parameter was obtained. The results will be a solid foundation for establishing test specification and evaluation methods of total dose effects on SRAM-based FPGAs. (authors)
Energy Technology Data Exchange (ETDEWEB)
Yamaji, Bogdan; Aszodi, Attila [Budapest University of Technology and Economics (Hungary). Inst. of Nuclear Techniques
2016-09-15
In the paper measurement results from the experimental modelling of a molten salt reactor concept will be presented along with detailed uncertainty analysis of the experimental system. Non-intrusive flow measurements are carried out on the scaled and segmented mock-up of a homogeneous, single region molten salt fast reactor concept. Uncertainty assessment of the particle image velocimetry (PIV) measurement system applied with the scaled and segmented model is presented in detail. The analysis covers the error sources of the measurement system (laser, recording camera, etc.) and the specific conditions (de-warping of measurement planes) originating in the geometry of the investigated domain. Effect of sample size in the ensemble averaged PIV measurements is discussed as well. An additional two-loop-operation mode is also presented and the analysis of the measurement results confirm that without enhancement nominal and other operation conditions will lead to strong unfavourable separation in the core flow. It implies that use of internal flow distribution structures will be necessary for the optimisation of the core coolant flow. Preliminary CFD calculations are presented to help the design of a perforated plate located above the inlet region. The purpose of the perforated plate is to reduce recirculation near the cylindrical wall and enhance the uniformity of the core flow distribution.
International Nuclear Information System (INIS)
Kugo, Teruhiko; Mori, Takamasa; Kojima, Kensuke; Takeda, Toshikazu
2007-01-01
We have carried out the critical experiments for the MOX fueled tight lattice LWR cores using FCA facility and constructed the XXII-1 series cores. Utilizing the critical experiments carried out at FCA, we have evaluated the reduction of prediction uncertainty in the coolant void reactivity worth of the breeding LWR core based on the bias factor method with focusing on the prediction uncertainty due to cross section errors. In the present study, we have introduced a concept of a virtual experimental value into the conventional bias factor method to overcome a problem caused by the conventional bias factor method in which the prediction uncertainty increases in the case that the experimental core has the opposite reactivity worth and the consequent opposite sensitivity coefficients to the real core. To extend the applicability of the bias factor method, we have adopted an exponentiated experimental value as the virtual experimental value and formulated the prediction uncertainty reduction by the use of the bias factor method extended by the concept of the virtual experimental value. From the numerical evaluation, it has been shown that the prediction uncertainty due to cross section errors has been reduced by the use of the concept of the virtual experimental value. It is concluded that the introduction of virtual experimental value can effectively utilize experimental data and extend applicability of the bias factor method. (author)
Experimental Evaluation of a Total Heat Recovery Unit with Polymer Membrane Foils
DEFF Research Database (Denmark)
Fang, Lei; Yuan, Shu; Nie, Jinzhe
2014-01-01
A laboratory experimental study was conducted to investigate the energy performance of a total heat recovery unit using a polymer membranes heat exchanger. The study was conducted in twin climate chambers. One of the chambers simulated outdoor climate conditions and the other simulated the climate...... condition indoors. The airflows taken from the two chambers were connected into the total heat recovery unit and exchange heat in a polymer membrane foil heat exchanger installed inside the unit. The temperature and humidity of the air upstream and downstream of the heat exchanger were measured. Based...... on the measured temperature and humidity values, the temperature, humidity, and enthalpy efficiencies of the total heat recovery unit were calculated. The experiment was conducted in different combinations of outdoor climate conditions simulating warm and humid outdoor climates and air-conditioned indoor climate...
Influence of conformity on the wear of total knee replacement: An experimental study.
Brockett, Claire L; Carbone, Silvia; Fisher, John; Jennings, Louise M
2018-02-01
Wear of total knee replacement continues to be a significant factor influencing the clinical longevity of implants. Historically, failure due to delamination and fatigue directed design towards more conforming inserts to reduce contact stress. As new generations of more oxidatively stable polyethylene have been developed, more flexibility in bearing design has been introduced. The aim of this study was to investigate the effect of insert conformity on the wear performance of a fixed bearing total knee replacement through experimental simulation. Two geometries of insert were studied under standard gait conditions. There was a significant reduction in wear with reducing implant conformity. This study has demonstrated that bearing conformity has a significant impact on the wear performance of a fixed bearing total knee replacement, providing opportunities to improve clinical performance through enhanced material and design selection.
Directory of Open Access Journals (Sweden)
Kelly C. Chang
2017-11-01
Full Text Available The Comprehensive in vitro Proarrhythmia Assay (CiPA is a global initiative intended to improve drug proarrhythmia risk assessment using a new paradigm of mechanistic assays. Under the CiPA paradigm, the relative risk of drug-induced Torsade de Pointes (TdP is assessed using an in silico model of the human ventricular action potential (AP that integrates in vitro pharmacology data from multiple ion channels. Thus, modeling predictions of cardiac risk liability will depend critically on the variability in pharmacology data, and uncertainty quantification (UQ must comprise an essential component of the in silico assay. This study explores UQ methods that may be incorporated into the CiPA framework. Recently, we proposed a promising in silico TdP risk metric (qNet, which is derived from AP simulations and allows separation of a set of CiPA training compounds into Low, Intermediate, and High TdP risk categories. The purpose of this study was to use UQ to evaluate the robustness of TdP risk separation by qNet. Uncertainty in the model parameters used to describe drug binding and ionic current block was estimated using the non-parametric bootstrap method and a Bayesian inference approach. Uncertainty was then propagated through AP simulations to quantify uncertainty in qNet for each drug. UQ revealed lower uncertainty and more accurate TdP risk stratification by qNet when simulations were run at concentrations below 5× the maximum therapeutic exposure (Cmax. However, when drug effects were extrapolated above 10× Cmax, UQ showed that qNet could no longer clearly separate drugs by TdP risk. This was because for most of the pharmacology data, the amount of current block measured was <60%, preventing reliable estimation of IC50-values. The results of this study demonstrate that the accuracy of TdP risk prediction depends both on the intrinsic variability in ion channel pharmacology data as well as on experimental design considerations that preclude an
Statistical Information and Uncertainty: A Critique of Applications in Experimental Psychology
Directory of Open Access Journals (Sweden)
Donald Laming
2010-04-01
Full Text Available This paper presents, first, a formal exploration of the relationships between information (statistically defined, statistical hypothesis testing, the use of hypothesis testing in reverse as an investigative tool, channel capacity in a communication system, uncertainty, the concept of entropy in thermodynamics, and Bayes’ theorem. This exercise brings out the close mathematical interrelationships between different applications of these ideas in diverse areas of psychology. Subsequent illustrative examples are grouped under (a the human operator as an ideal communications channel, (b the human operator as a purely physical system, and (c Bayes’ theorem as an algorithm for combining information from different sources. Some tentative conclusions are drawn about the usefulness of information theory within these different categories. (a The idea of the human operator as an ideal communications channel has long been abandoned, though it provides some lessons that still need to be absorbed today. (b Treating the human operator as a purely physical system provides a platform for the quantitative exploration of many aspects of human performance by analogy with the analysis of other physical systems. (c The use of Bayes’ theorem to calculate the effects of prior probabilities and stimulus frequencies on human performance is probably misconceived, but it is difficult to obtain results precise enough to resolve this question.
International Nuclear Information System (INIS)
Uosif, M.A.; El-Taher, A.
2005-01-01
A new fit function has been developed to calculate theoretically the absolute gamma ray detection efficiencies (ηTh) of a cylindrical NaI(Tl) crystal, for calculating the absolute efficiency at any interesting gamma energy in the energy range between 10 and 1300 keV and distance between 0 and 8 cm. The total absolute gamma ray detection efficiencies have been calculated for five detectors, four are 2x2 and one is 3x 3 inches NaI(Tl) crystal at different distances. The absolute efficiency of the different detectors was calculated at the specific energy of the standard sources for each measuring distances. In this calculation, experimental (ηExp) and theoretical (ηTh) have been calculated. The uncertainties of efficiency calibration have been calculated also for quality control. Measurements were performed with calibrated point source. Gamma-ray energies under consideration were 0.356, 0.662, 1.17 and 1.33 MeV. The differences between (ηExp) and (ηTh) at these energies are 1.30E-06, 7.99E-05, 2.29E-04 and 2.42E-04 respectively. The results obtained on the basis of (ηExp) and (ηTh) seem to be in very good agreement
Experimental quantification of contact forces with impact, friction and uncertainty analysis
DEFF Research Database (Denmark)
Lahriri, Said; Santos, Ilmar
2013-01-01
and whirl motions in rotor-stator contact investigations. Dry friction coefficient is therefore estimated using two different experimental setups: (a) standard pin-on-disk tests and (b) rotor impact test rig fully instrumented. The findings in both setups indicate that the dry friction coefficient for brass......During rotor-stator contact dry friction plays a significant role in terms of reversing the rotor precession. The frictional force causes an increase in the rotor's tangential velocity in the direction opposite to that of the angular velocity. This effect is crucial for defining ranges of dry whip......-aluminum configuration significantly varies in a range of 0.16-0.83. The rotor enters a full annular contact mode shortly after two impacts with a contact duration of approximately 0.004 s at each location. It is experimentally demonstrated that the friction force is not present when the rotor enters a full annular...
Giardina, G.; Mandaglio, G.; Nasirov, A. K.; Anastasi, A.; Curciarello, F.; Fazio, G.
2018-02-01
Experimental and theoretical results of the PCN fusion probability of reactants in the entrance channel and the Wsur survival probability against fission at deexcitation of the compound nucleus formed in heavy-ion collisions are discussed. The theoretical results for a set of nuclear reactions leading to formation of compound nuclei (CNs) with the charge number Z = 102- 122 reveal a strong sensitivity of PCN to the characteristics of colliding nuclei in the entrance channel, dynamics of the reaction mechanism, and excitation energy of the system. We discuss the validity of assumptions and procedures for analysis of experimental data, and also the limits of validity of theoretical results obtained by the use of phenomenological models. The comparison of results obtained in many investigated reactions reveals serious limits of validity of the data analysis and calculation procedures.
International Nuclear Information System (INIS)
Silva, T.A. da.
1981-06-01
A quantitative comparison of component calibration factors with the corresponding overall calibration factor was used to evaluate the adopted component calibration procedure in regard to parasitic elements. Judgement of significance is based upon the experimental uncertainty of a well established procedure for determination of the overall calibration factor. The experimental results obtained for different ionization chambers and different electrometers demonstrate that for one type of electrometer the parasitic elements have no influence on its sensitivity considering the experimental uncertainty of the calibration procedures. In this case the adopted procedure for determination of component calibration factors is considered to be equivalent to the procedure of determination of the overall calibration factor and thus might be used as a strong quality control measure in routine calibration. (Author) [pt
Energy Technology Data Exchange (ETDEWEB)
Shahnam, Mehrdad [National Energy Technology Lab. (NETL), Morgantown, WV (United States). Research and Innovation Center, Energy Conversion Engineering Directorate; Gel, Aytekin [ALPEMI Consulting, LLC, Phoeniz, AZ (United States); Subramaniyan, Arun K. [GE Global Research Center, Niskayuna, NY (United States); Musser, Jordan [National Energy Technology Lab. (NETL), Morgantown, WV (United States). Research and Innovation Center, Energy Conversion Engineering Directorate; Dietiker, Jean-Francois [West Virginia Univ. Research Corporation, Morgantown, WV (United States)
2017-10-02
Adequate assessment of the uncertainties in modeling and simulation is becoming an integral part of the simulation based engineering design. The goal of this study is to demonstrate the application of non-intrusive Bayesian uncertainty quantification (UQ) methodology in multiphase (gas-solid) flows with experimental and simulation data, as part of our research efforts to determine the most suited approach for UQ of a bench scale fluidized bed gasifier. UQ analysis was first performed on the available experimental data. Global sensitivity analysis performed as part of the UQ analysis shows that among the three operating factors, steam to oxygen ratio has the most influence on syngas composition in the bench-scale gasifier experiments. An analysis for forward propagation of uncertainties was performed and results show that an increase in steam to oxygen ratio leads to an increase in H2 mole fraction and a decrease in CO mole fraction. These findings are in agreement with the ANOVA analysis performed in the reference experimental study. Another contribution in addition to the UQ analysis is the optimization-based approach to guide to identify next best set of additional experimental samples, should the possibility arise for additional experiments. Hence, the surrogate models constructed as part of the UQ analysis is employed to improve the information gain and make incremental recommendation, should the possibility to add more experiments arise. In the second step, series of simulations were carried out with the open-source computational fluid dynamics software MFiX to reproduce the experimental conditions, where three operating factors, i.e., coal flow rate, coal particle diameter, and steam-to-oxygen ratio, were systematically varied to understand their effect on the syngas composition. Bayesian UQ analysis was performed on the numerical results. As part of Bayesian UQ analysis, a global sensitivity analysis was performed based on the simulation results, which shows
International Nuclear Information System (INIS)
Martinelli, T.; Panini, G.C.; Amoroso, A.
1989-11-01
Information about systematic errors are not given In EXFOR, the data base of nuclear experimental measurements: their assessment is committed to the ability of the evaluator. A tool Is needed which performs this task in a fully automatic way or, at least, gives a valuable aid. The expert system ERESYE has been implemented for investigating the feasibility of an automatic evaluation of the systematic errors in the experiments. The features of the project which led to the implementation of the system are presented. (author)
International Nuclear Information System (INIS)
Blanc-De-Lanaute, N.
2012-01-01
The main objectives of this research thesis are the management and reduction of uncertainties associated with measurements performed by means of a fission-chamber type sensor. The author first recalls the role of experimental reactors in nuclear research, presents the various sensors used in nuclear detection (photographic film, scintillation sensor, gas ionization sensor, semiconducting sensor, other types of radiation sensors), and more particularly addresses neutron detection (activation sensor, gas filling sensor). In a second part, the author gives an overview of the state of the art of neutron measurement by fission chamber in a mock-up reactor (signal formation, processing and post-processing, associated measurements and uncertainties, return on experience of measurements by fission chamber on Masurca and Minerve research reactors). In a third part, he reports the optimization of two intrinsic parameters of this sensor: the thickness of fissile material deposit, and the pressure and nature of the filler gas. The fourth part addresses the improvement of measurement electronics and of post-processing methods which are used for result analysis. The fifth part deals with the optimization of spectrum index measurements by means of a fission chamber. The impact of each parameter is quantified. Results explain some inconsistencies noticed in measurements performed on the Minerve reactor in 2004, and allow the improvement of biases with computed values [fr
Steuten, Lotte Maria Gertruda; Vallejo-Torres, Laura; Bastide, Philippe; Buxton, Martin J.
2009-01-01
This paper presents a relatively simple cost model comparing the costs of using a commercial fibrin sealant (QUIXIL®) in addition to conventional haemostatic treatment vs. conventional treatment alone in total knee replacement (TKR) surgery, and demonstrates and discusses how one- and two-way
CERN. Geneva
2014-01-01
Future experiments propose to make precision measurements of parameters in the neutrino mixing matrix, including the possibly maximal mixing angle theta23, and an unknown CP violating phase, dCP, by comparing the event rate of neutrinos and antineutrinos observed close to, and far from the source. Such "near to far" extrapolation methods must achieve percent level understanding of neutrino and antineutrino interactions; the interaction determines the relationship between experimental observables and the oscillation probability which depends on the neutrino energy. However, recent developments over the last 5 years demonstrate that our understanding of neutrino interactions is insufficient. In particular, the interaction of neutrinos on correlated pairs of nucleons has only recently been added to neutrino interaction simulations. The identification of these processes as interactions on a single nucleon results in a significant bias to the measured mixing parameters, even when near detector i...
Elevated lip liner positions improving stability in total hip arthroplasty. An experimental study.
Directory of Open Access Journals (Sweden)
Suleman Qurashi
2018-01-01
Full Text Available Background: The use of elevated lip polyethylene liners with the acetabular component is relatively common in Total Hip Arthroplasty (THA. Elevated lip liners increase stability of the THA by increasing the jump distance in one direction. However, the elevated lip, conversely, also reduces the primary arc in the opposite direction and leads to early impingement of the neck on the elevated lip, potentially causing instability. The aim of the present study is to determine the total range of motion of the femoral head component within the acetabular component with the elevated lip liner in different orientations within the acetabular cup. Methods: We introduce a novel experimental (ex-vivo framework for studying the effects lip liner orientation on the range of motion of the femoral component. For constant acetabular cup orientation, the elevated lip liner was positioned superiorly and inferiorly. The femoral component range of motion in the coronal, sagittal and axial plane was measured. To avoid any confounding influences of out of plane motion, the femoral component was constrained to move in the tested plane. Results: This experimental set up introduces a rigorous framework in which to test the effects of elevated lip liner orientations on the range of motion of the femoral head component in abduction, adduction, flexion, extension and rotation. The movements of this experimental set-up are directly informative of patient’s maximum potential post-operative range of motion. Initial results show that an inferior placement of the elevated lip increases the effective superior lateral range of motion (abduction for the femoral component, whilst the anatomy of the patient (i.e. their other leg prevents the point of femoral component – acetabular lip impingement being reached (in adduction.
International Nuclear Information System (INIS)
Khayat, Omid; Afarideh, Hossein; Mohammadnia, Meisam
2015-01-01
In the solid state nuclear track detectors of chemically etched type, nuclear tracks with center-to-center neighborhood of distance shorter than two times the radius of tracks will emerge as overlapping tracks. Track overlapping in this type of detectors causes tracks count losses and it becomes rather severe in high track densities. Therefore, tracks counting in this condition should include a correction factor for count losses of different tracks overlapping orders since a number of overlapping tracks may be counted as one track. Another aspect of the problem is the cases where imaging the whole area of the detector and counting all tracks are not possible. In these conditions a statistical generalization method is desired to be applicable in counting a segmented area of the detector and the results can be generalized to the whole surface of the detector. Also there is a challenge in counting the tracks in densely overlapped tracks because not sufficient geometrical or contextual information are available. It this paper we present a statistical counting method which gives the user a relation between the tracks overlapping probabilities on a segmented area of the detector surface and the total number of tracks. To apply the proposed method one can estimate the total number of tracks on a solid state detector of arbitrary shape and dimensions by approximating the tracks averaged area, whole detector surface area and some orders of tracks overlapping probabilities. It will be shown that this method is applicable in high and ultra high density tracks images and the count loss error can be enervated using a statistical generalization approach. - Highlights: • A correction factor for count losses of different tracks overlapping orders. • For the cases imaging the whole area of the detector is not possible. • Presenting a statistical generalization method for segmented areas. • Giving a relation between the tracks overlapping probabilities and the total tracks
International Nuclear Information System (INIS)
Amharrak, H.
2012-01-01
This thesis work focuses on the needs for qualification of neutron and photonics calculation schemes in the future Jules Horowitz technological Reactor (RJH) and Pressurized Water Reactors (PWR). It is necessary to establish reliable measurement results with well defined associated uncertainties, for qualification and/or validation. The objective of this thesis is to develop and to improve the nuclear heating measurement methods (especially gamma photons) in MINERVE and EOLE experimental reactors at CEA-Cadarache, using thermo-luminescent detectors (TLD), optically stimulated luminescence detectors (OSLD) and an ionization chamber. It is to identify, prioritize, treat and reduce the various sources of uncertainty and systematic bias associated with the measurement. In a previous study, where nuclear heating was estimated from an integrated radiation dose by TLD in MINERVE and EOLE reactors, it has been shown that dose calculation underestimated the experiment by 25% with a total uncertainty of 15% (2σ). This systematic bias observed has been largely attributed to a lack of nuclear data used to perform the calculations. Therefore, in this work a new series of experiments was set up in the MINERVE reactor to reduce the measurement uncertainties, and better understand the origins of the discrepancies with the modeling. These experiments were carried out in an aluminum or hafnium surrounding (in specifically designed boxes) using a new procedure and analysis methodology. In these experiments, the TLD are calibrated individually, the repeatability of the measurement is experimentally evaluated and the laws of TLD heat are optimized. These improvements are subsequently used for the measurement of nuclear heating in AMMON program (EOLE reactor), dedicated to the qualification of neutron and photonics calculation schemes in the RJH reactor. The measurements of the gamma emitted, with a delay (delayed gamma) after shutdown of the MINERVE reactor, were also carried out
Total glucosides of peony attenuates experimental autoimmune encephalomyelitis in C57BL/6 mice.
Huang, Qiling; Ma, Xiaomeng; Zhu, Dong Liang; Chen, Li; Jiang, Ying; Zhou, Linli; Cen, Lei; Pi, Rongbiao; Chen, Xiaohong
2015-07-15
Total glucosides of peony (TGP), an active compound extracted from the roots of Paeonia lactiflora Pall, has wide pharmacological effects on nervous system. Here we examined the effects of TGP on experimental autoimmune encephalomyelitis (EAE), an established model of multiple sclerosis (MS). The results showed that TGP can reduce the severity and progression of EAE in C57 BL/6 mice. In addition, TGP also down-regulated the Th1/Th17 inflammatory response and prevented the reduced expression of brain-derived neurotrophic factor and 2',3'-cyclic nucleotide 3'-phosphodiesterase of EAE. These findings suggest that TGP could be a potential therapeutic agent for MS. Copyright © 2015 Elsevier B.V. All rights reserved.
Experimental device for obtaining calibration factor for the total count technique
International Nuclear Information System (INIS)
Gonçalves, Eduardo R.; Braz, Delson; Brandão, Luís Eduardo B.
2017-01-01
Nuclear technologies have widely used on industry plants in order to help to solve troubles processes/design or just obtain information of them. The Total Count technique for flow measurement has as main advantages: being an absolute technique, because it is independent of additional devices readings unless the directly used for recording the radioactive cloud, requiring only a single detector to provide the final result; the independence of the internal volume of the transport duct, can be applied in the presence or absence of obstructions; no restriction as to the nature of the product or material to be conveyed; it is a noninvasive technique which allows real-time diagnostics. To use Total Count Technique, knowledge of a geometric calibration factor is required. Called Factor F, it is obtained in the laboratory using an experimental apparatus to faithfully reproduce the geometry of the detection system and the pipeline that being analyzed and using the same radiotracer, therefore, its value is constant for each specific measuring system under survey. This experimental apparatus for obtaining the factor F consisting by a pipe of 2 ″PVC, which simulates a transmission line, where they were deposited 500 ml oil and the use of a specific pipette for use viscous fluids were added sequentially aliquots (50.00 ± 0.01) μl radiotracer (radionuclide photopeak energy of 198 Au 411.8 keV) and analyzing data obtained by three distinct detection systems composed of detectors NaI scintillators 1″ x 1 ″ and a data acquisition system. (author)
Experimental device for obtaining calibration factor for the total count technique
Energy Technology Data Exchange (ETDEWEB)
Gonçalves, Eduardo R.; Braz, Delson [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Brandão, Luís Eduardo B. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Divisao de Reatores
2017-07-01
Nuclear technologies have widely used on industry plants in order to help to solve troubles processes/design or just obtain information of them. The Total Count technique for flow measurement has as main advantages: being an absolute technique, because it is independent of additional devices readings unless the directly used for recording the radioactive cloud, requiring only a single detector to provide the final result; the independence of the internal volume of the transport duct, can be applied in the presence or absence of obstructions; no restriction as to the nature of the product or material to be conveyed; it is a noninvasive technique which allows real-time diagnostics. To use Total Count Technique, knowledge of a geometric calibration factor is required. Called Factor F, it is obtained in the laboratory using an experimental apparatus to faithfully reproduce the geometry of the detection system and the pipeline that being analyzed and using the same radiotracer, therefore, its value is constant for each specific measuring system under survey. This experimental apparatus for obtaining the factor F consisting by a pipe of 2 ″PVC, which simulates a transmission line, where they were deposited 500 ml oil and the use of a specific pipette for use viscous fluids were added sequentially aliquots (50.00 ± 0.01) μl radiotracer (radionuclide photopeak energy of 198 Au 411.8 keV) and analyzing data obtained by three distinct detection systems composed of detectors NaI scintillators 1″ x 1 ″ and a data acquisition system. (author)
Oglesby, Mary E; Schmidt, Norman B
2017-07-01
Intolerance of uncertainty (IU) has been proposed as an important transdiagnostic variable within mood- and anxiety-related disorders. The extant literature has suggested that individuals high in IU interpret uncertainty more negatively. Furthermore, theoretical models of IU posit that those elevated in IU may experience an uncertain threat as more anxiety provoking than a certain threat. However, no research to date has experimentally manipulated the certainty of an impending threat while utilizing an in vivo stressor. In the current study, undergraduate participants (N = 79) were randomized to one of two conditions: certain threat (participants were told that later on in the study they would give a 3-minute speech) or uncertain threat (participants were told that later on in the study they would flip a coin to determine whether or not they would give a 3-minute speech). Participants also completed self-report questionnaires measuring their baseline state anxiety, baseline trait IU, and prespeech state anxiety. Results indicated that trait IU was associated with greater state anticipatory anxiety when the prospect of giving a speech was made uncertain (i.e., uncertain condition). Further, findings indicated no significant difference in anticipatory state anxiety among individuals high in IU when comparing an uncertain versus certain threat (i.e., uncertain and certain threat conditions, respectively). Furthermore, results found no significant interaction between condition and trait IU when predicting state anticipatory anxiety. This investigation is the first to test a crucial component of IU theory while utilizing an ecologically valid paradigm. Results of the present study are discussed in terms of theoretical models of IU and directions for future work. Copyright © 2017. Published by Elsevier Ltd.
Transanal total mesorectal excision: a systematic review of the experimental and clinical evidence.
Araujo, S E; Crawshaw, B; Mendes, C R; Delaney, C P
2015-02-01
Achieving a clear distal or circumferential resection margins with laparoscopic total mesorectal excision (TME) may be laborious, especially in obese males and when operating on advanced distal rectal tumors with a poor response to neoadjuvant treatment. Transanal (TaTME) is a new natural orifice translumenal endoscopic surgery modality in which the rectum is mobilized transanally using endoscopic techniques with or without laparoscopic assistance. We conducted a comprehensive systematic review of publications on this new technique in PubMed and Embase databases from January, 2008, to July, 2014. Experimental and clinical studies written in English were included. Experimental research with TaTME was done on pigs with and without survival models and on human cadavers. In these studies, laparoscopic or transgastric assistance was frequently used resulting in an easier upper rectal dissection and in a longer rectal specimen. To date, 150 patients in 16 clinical studies have undergone TaTME. In all but 15 cases, transabdominal assistance was used. A rigid transanal endoscopic operations/transanal endoscopic microsurgery (TEO/TEM) platform was used in 37 patients. Rectal adenocarcinoma was the indication in all except for nine cases of benign diseases. Operative times ranged from 90 to 460 min. TME quality was deemed intact, satisfactory, or complete. Involvement in circumferential resection margins was detected in 16 (11.8 %) patients. The mean lymph node harvest was equal or greater than 12 in all studies. Regarding morbidity, pneumoretroperitoneum, damage to the urethra, and air embolism were reported intraoperatively. Mean hospital stay varied from 4 to 14 days. Postoperative complications occurred in 34 (22.7 %) patients. TaTME with TEM is feasible in selected cases. Oncologic safety parameters seem to be adequate although the evidence relies on small retrospective series conducted by highly trained surgeons. Further studies are expected.
Justification for recommended uncertainties
International Nuclear Information System (INIS)
Pronyaev, V.G.; Badikov, S.A.; Carlson, A.D.
2007-01-01
The uncertainties obtained in an earlier standards evaluation were considered to be unrealistically low by experts of the US Cross Section Evaluation Working Group (CSEWG). Therefore, the CSEWG Standards Subcommittee replaced the covariance matrices of evaluated uncertainties by expanded percentage errors that were assigned to the data over wide energy groups. There are a number of reasons that might lead to low uncertainties of the evaluated data: Underestimation of the correlations existing between the results of different measurements; The presence of unrecognized systematic uncertainties in the experimental data can lead to biases in the evaluated data as well as to underestimations of the resulting uncertainties; Uncertainties for correlated data cannot only be characterized by percentage uncertainties or variances. Covariances between evaluated value at 0.2 MeV and other points obtained in model (RAC R matrix and PADE2 analytical expansion) and non-model (GMA) fits of the 6 Li(n,t) TEST1 data and the correlation coefficients are presented and covariances between the evaluated value at 0.045 MeV and other points (along the line or column of the matrix) as obtained in EDA and RAC R matrix fits of the data available for reactions that pass through the formation of the 7 Li system are discussed. The GMA fit with the GMA database is shown for comparison. The following diagrams are discussed: Percentage uncertainties of the evaluated cross section for the 6 Li(n,t) reaction and the for the 235 U(n,f) reaction; estimation given by CSEWG experts; GMA result with full GMA database, including experimental data for the 6 Li(n,t), 6 Li(n,n) and 6 Li(n,total) reactions; uncertainties in the GMA combined fit for the standards; EDA and RAC R matrix results, respectively. Uncertainties of absolute and 252 Cf fission spectrum averaged cross section measurements, and deviations between measured and evaluated values for 235 U(n,f) cross-sections in the neutron energy range 1
Wu, Yonggui; Ren, Kejun; Liang, Chao; Yuan, Liang; Qi, Xiangming; Dong, Jing; Shen, Jijia; Lin, Shanyan
2009-01-01
Total glucosides of paeony (TGP), extracted from the root of Paeonia lactiflora pall, has been shown to have ant-inflammatory and antioxidative actions. The aims of this study were to elucidate the renoprotective effect of TGP and its mechanism in experimental diabetes. Streptozotocin-induced diabetic rats were treated with TGP for 8 weeks. Treatment with TGP at 50, 100, and 200 mg/kg significantly lowered 24-h urinary albumin excretion rate in diabetic rats. TGP treatment in all doses markedly attenuated glomerular volume, and treatment with TGP at 100 and 200 mg/kg markedly reduced indices for tubulointerstitial injury in diabetic rats. Western blot analysis showed that the expressions of 1 alpha (IV) collagen, intercellular adhesion molecule (ICAM)-1, interleukin (IL)-1, tumor necrosis factor (TNF)-alpha, NF-kappaB p65, and 3-nitrotyrosine (3-NT) protein were increased in the kidneys of diabetic rats; the increases in these proteins were all dose-dependently and significantly inhibited by TGP treatment. The expression of nephrin protein was significantly reduced in the kidneys from diabetic rats and markedly increased by TGP treatment. The expression of transforming growth factor (TGF)-beta1 protein in the kidney was also significantly increased in diabetic rats, which was significantly inhibited by treatment with TGP at all doses. Our data suggest that TGP treatment ameliorates early renal injury via the inhibition of expression of ICAM-1, IL-1, TNF-alpha, and 3-NT in the kidneys of diabetic rats.
Instrument uncertainty predictions
International Nuclear Information System (INIS)
Coutts, D.A.
1991-07-01
The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation. The uncertainty term reflects a combination of instrument errors, modeling limitations, and phenomena understanding deficiencies. This report provides several methodologies to estimate an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pretest and post-test uncertainty
Vesselinov, V. V.
2017-12-01
Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical species. Numerous geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. As a result, these types of model analyses are typically extremely challenging. Here, we demonstrate a new contaminant source identification approach that performs decomposition of the observation mixtures based on Nonnegative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. We also demonstrate how NMFk can be extended to perform uncertainty quantification and experimental design related to real-world site characterization. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios). The NMFk algorithm has been extensively tested on synthetic datasets; NMFk analyses have been actively performed on real-world data collected at the Los Alamos National
Sunwoo, Y.; Park, J.; Kim, S.; Ma, Y.; Chang, I.
2010-12-01
Northeast Asia hosts more than one third of world population and the emission of pollutants trends to increase rapidly, because of economic growth and the increase of the consumption in high energy intensity. In case of air pollutants, especially, its characteristics of emissions and transportation become issued nationally, in terms of not only environmental aspects, but also long-range transboundary transportation. In meteorological characteristics, westerlies area means what air pollutants that emitted from China can be delivered to South Korea. Therefore, considering meteorological factors can be important to understand air pollution phenomena. In this study, we used MM5(Fifth-Generation Mesoscale Model) and WRF(Weather Research and Forecasting Model) to produce the meteorological fields. We analyzed the feature of physics option in each model and the difference due to characteristic of WRF and MM5. We are trying to analyze the uncertainty of source-receptor relationships for total nitrate according to meteorological fields in the Northeast Asia. We produced the each meteorological fields that apply the same domain, same initial and boundary conditions, the best similar physics option. S-R relationships in terms of amount and fractional number for total nitrate (sum of N from HNO3, nitrate and PAN) were calculated by EMEP method 3.
Medalie, Laura
2016-12-20
The U.S. Geological Survey, in cooperation with the New England Interstate Water Pollution Control Commission and the Vermont Department of Environmental Conservation, estimated daily and 9-month concentrations and fluxes of total and dissolved phosphorus, total nitrogen, chloride, and total suspended solids from 1990 (or first available date) through 2014 for 18 tributaries of Lake Champlain. Estimates of concentration and flux, provided separately in Medalie (2016), were made by using the Weighted Regressions on Time, Discharge, and Season (WRTDS) regression model and update previously published WRTDS model results with recent data. Assessment of progress towards meeting phosphorus-reduction goals outlined in the Lake Champlain management plan relies on annual estimates of phosphorus flux. The percent change in annual concentration and flux is provided for two time periods. The R package EGRETci was used to estimate the uncertainty of the trend estimate. Differences in model specification and function between this study and previous studies that used WRTDS to estimate concentration and flux using data from Lake Champlain tributaries are described. Winter data were too sparse and nonrepresentative to use for estimates of concentration and flux but were sufficient for estimating the percentage of total annual flux over the period of record. Median winter-to-annual fractions ranged between 21 percent for total suspended solids and 27 percent for dissolved phosphorus. The winter contribution was largest for all constituents from the Mettawee River and smallest from the Ausable River. For the full record (1991 through 2014 for total and dissolved phosphorus and chloride and 1993 through 2014 for nitrogen and total suspended solids), 6 tributaries had decreasing trends in concentrations of total phosphorus, and 12 had increasing trends; concentrations of dissolved phosphorus decreased in 6 and increased in 8 tributaries; fluxes of total phosphorus decreased in 5 and
An experimental study of the effect of total lymphoid irradiation on the survival of skin allografts
International Nuclear Information System (INIS)
Park, Charn Il; Han, Man Chung
1981-01-01
The study was undertaken to determine the effect of fractionated high-dose total lymphoid irradiation (TLI) on the survival of skin allograft despite major histocompatibility difference. Total lymphoid irradiation is a relatively safe form of radiotherapy, has been used extensively to treat lymphoid malignancies in humans with few side effects. A total of 90 rats, Sprague-Dawley rat as recipient and Wistar rat as donor, were used for the experiment, of which 10 rats were used to determine mixed lymphocyte response (MLR) for antigenic difference and skin allografts was performed in 30 rats given total lymphoid irradiation to assess the immunosuppressive effect of total lymphoid irradiation despite major histocompatibility difference. In addition, the peripheral white blood cell counts and the proportion of lymphocytes was studied in 10 rats given total lymphoid irradiation but no skin graft to determine the effects of bone marrow suppression. The results obtained are summarized as follows. 1. The optimum dose of total lymphoid irradiation was between 1800 rads to 2400 rads. 2. The survival of skin graft on rats given total lymphoid irradiation (23.2 ± 6.0 days) was prolonged about three folds as compared to unirradiated control (8.7 ± 1.3 days). 3. Total lymphoid irradiation resulted in a severe leukopenia with marked lymphopenia, but the count was normal by the end of 3rd week. 4. The study suggests that total lymphoid irradiation is a nonlethal procedure that could be used successfully in animals to transplant allograft across major histocompatibility barriers
An experimental study of the effect of total lymphoid irradiation on the survival of skin allografts
Energy Technology Data Exchange (ETDEWEB)
Park, Charn Il; Han, Man Chung [College of Medicine, Seoul National University, Seoul (Korea, Republic of)
1981-06-15
The study was undertaken to determine the effect of fractionated high-dose total lymphoid irradiation (TLI) on the survival of skin allograft despite major histocompatibility difference. Total lymphoid irradiation is a relatively safe form of radiotherapy, has been used extensively to treat lymphoid malignancies in humans with few side effects. A total of 90 rats, Sprague-Dawley rat as recipient and Wistar rat as donor, were used for the experiment, of which 10 rats were used to determine mixed lymphocyte response (MLR) for antigenic difference and skin allografts was performed in 30 rats given total lymphoid irradiation to assess the immunosuppressive effect of total lymphoid irradiation despite major histocompatibility difference. In addition, the peripheral white blood cell counts and the proportion of lymphocytes was studied in 10 rats given total lymphoid irradiation but no skin graft to determine the effects of bone marrow suppression. The results obtained are summarized as follows. 1. The optimum dose of total lymphoid irradiation was between 1800 rads to 2400 rads. 2. The survival of skin graft on rats given total lymphoid irradiation (23.2 {+-} 6.0 days) was prolonged about three folds as compared to unirradiated control (8.7 {+-} 1.3 days). 3. Total lymphoid irradiation resulted in a severe leukopenia with marked lymphopenia, but the count was normal by the end of 3rd week. 4. The study suggests that total lymphoid irradiation is a nonlethal procedure that could be used successfully in animals to transplant allograft across major histocompatibility barriers.
Resolving uncertainty in chemical speciation determinations
Smith, D. Scott; Adams, Nicholas W. H.; Kramer, James R.
1999-10-01
Speciation determinations involve uncertainty in system definition and experimentation. Identification of appropriate metals and ligands from basic chemical principles, analytical window considerations, types of species and checking for consistency in equilibrium calculations are considered in system definition uncertainty. A systematic approach to system definition limits uncertainty in speciation investigations. Experimental uncertainty is discussed with an example of proton interactions with Suwannee River fulvic acid (SRFA). A Monte Carlo approach was used to estimate uncertainty in experimental data, resulting from the propagation of uncertainties in electrode calibration parameters and experimental data points. Monte Carlo simulations revealed large uncertainties present at high (>9-10) and low (monoprotic ligands. Least-squares fit the data with 21 sites, whereas linear programming fit the data equally well with 9 sites. Multiresponse fitting, involving simultaneous fluorescence and pH measurements, improved model discrimination. Deconvolution of the excitation versus emission fluorescence surface for SRFA establishes a minimum of five sites. Diprotic sites are also required for the five fluorescent sites, and one non-fluorescent monoprotic site was added to accommodate the pH data. Consistent with greater complexity, the multiresponse method had broader confidence limits than the uniresponse methods, but corresponded better with the accepted total carboxylic content for SRFA. Overall there was a 40% standard deviation in total carboxylic content for the multiresponse fitting, versus 10% and 1% for least-squares and linear programming, respectively.
Rambaud, Jérôme; Lidouren, Fanny; Sage, Michaël; Kohlhauer, Matthias; Nadeau, Mathieu; Fortin-Pellerin, Étienne; Micheau, Philippe; Zilberstein, Luca; Mongardon, Nicolas; Ricard, Jean-Damien; Terada, Megumi; Bruneval, Patrick; Berdeaux, Alain; Ghaleh, Bijan; Walti, Hervé; Tissier, Renaud
2018-05-02
Ultrafast cooling by total liquid ventilation (TLV) provides potent cardio- and neuroprotection after experimental cardiac arrest. However, this was evaluated in animals with no initial lung injury, whereas out-of-hospital cardiac arrest is frequently associated with early-onset pneumonia, which may lead to acute respiratory distress syndrome (ARDS). Here, our objective was to determine whether hypothermic TLV could be safe or even beneficial in an aspiration-associated ARDS animal model. ARDS was induced in anesthetized rabbits through a two-hits model including the intra-tracheal administration of a pH = 1 solution mimicking gastric content and subsequent gaseous non-protective ventilation during 90 min (tidal volume [Vt] = 10 ml/kg with positive end-expiration pressure [PEEP] = 0 cmH 2 O). After this initial period, animals either received lung protective gas ventilation (LPV; Vt = 8 ml/kg and PEEP = 5 cmH 2 O) under normothermic conditions, or hypothermic TLV (TLV; Vt = 8 ml/kg and end-expiratory volume = 15 ml/kg). Both strategies were applied for 120 min with a continuous monitoring of respiratory and cardiovascular parameters. Animals were then euthanized for pulmonary histological analyses. Eight rabbits were included in each group. Before randomization, all animals elicited ARDS with arterial oxygen partial pressure over inhaled oxygen fraction ratios (PaO 2 /FiO 2 ) below 100 mmHg, as well as decreased lung compliance. After randomization, body temperature rapidly decreased in TLV versus LPV group (32.6 ± 0.6 vs. 38.2 ± 0.4 °C after 15 min). Static lung compliance and gas exchanges were not significantly different in the TLV versus LPV group (PaO 2 /FiO 2 = 62 ± 4 vs. 52 ± 8 mmHg at the end of the procedure, respectively). Mean arterial pressure and arterial bicarbonates levels were significantly higher in TLV versus LPV. Histological analysis also showed significantly lower inflammation in
Bitew, M. M.; Goodrich, D. C.; Demaria, E.; Heilman, P.; Kautz, M. A.
2017-12-01
Walnut Gulch is a semi-arid environment experimental watershed and Long Term Agro-ecosystem Research (LTAR) site managed by USDA-ARS Southwest Watershed Research Center for which high-resolution long-term hydro-climatic data are available across its 150 km2 drainage area. In this study, we present the analysis of 50 years of continuous hourly rainfall data to evaluate runoff control and generation processes for improving the QA-QC plans of Walnut Gulch to create high-quality data set that is critical for reducing water balance uncertainties. Multiple linear regression models were developed to relate rainfall properties, runoff characteristics and watershed properties. The rainfall properties were summarized to event based total depth, maximum intensity, duration, the location of the storm center with respect to the outlet, and storm size normalized to watershed area. We evaluated the interaction between the runoff and rainfall and runoff as antecedent moisture condition (AMC), antecedent runoff condition (ARC) and, runoff depth and duration for each rainfall events. We summarized each of the watershed properties such as contributing area, slope, shape, channel length, stream density, channel flow area, and percent of the area of retention stock ponds for each of the nested catchments in Walnut Gulch. The evaluation of the model using basic and categorical statistics showed good predictive skill throughout the watersheds. The model produced correlation coefficients ranging from 0.4-0.94, Nash efficiency coefficients up to 0.77, and Kling-Gupta coefficients ranging from 0.4 to 0.98. The model predicted 92% of all runoff generations and 98% of no-runoff across all sub-watersheds in Walnut Gulch. The regression model also indicated good potential to complement the QA-QC procedures in place for Walnut Gulch dataset publications developed over the years since the 1960s through identification of inconsistencies in rainfall and runoff relations.
Yanez Rausell, L.; Malenovsky, Z.; Clevers, J.G.P.W.; Schaepman, M.E.
2014-01-01
We present uncertainties associated with the measurement of coniferous needle-leaf optical properties (OPs) with an integrating sphere using an optimized gap-fraction (GF) correction method, where GF refers to the air gaps appearing between the needles of a measured sample. We used an optically
Energy Technology Data Exchange (ETDEWEB)
Muhrer, G., E-mail: muhrer@lanl.gov [Los Alamos National Laboratory, Los Alamos, 87545 NM (United States); Hartl, M.; Mocko, M.; Tovesson, F.; Daemen, L. [Los Alamos National Laboratory, Los Alamos, 87545 NM (United States)
2012-07-21
In the search for moderator materials encapsulated materials have been discussed, but very little is known regarding the effect of encapsulation on neutron moderation properties. As a first step toward a better understanding, we present the measured total neutron cross-section of water confined in silica microspheres and compare the measured data to the predicted theoretical cross-section.
Total glucosides of paeony prevents juxta-articular bone loss in experimental arthritis
Wei, Chen Chao; You, Fan Tian; Mei, Li Yu; Jian, Sun; Qiang, Chen Yong
2013-01-01
Background Total glucosides of paeony (TGP) is a biologically active compound extracted from Paeony root. TGP has been used in rheumatoid arthritis therapy for many years. However, the mechanism by which TGP prevents bone loss has been less explored. Methods TGP was orally administered for 3?months to New Zealand rabbits with antigen-induced arthritis (AIA). Digital x-ray knee images and bone mineral density (BMD) measurements of the subchondral knee bone were performed before sacrifice. Chon...
LENUS (Irish Health Repository)
McCullagh, Laura
2012-06-05
Background: The National Centre for Pharmacoeconomics, in collaboration with the Health Services Executive, considers the cost effectiveness of all new medicines introduced into Ireland. Health Technology Assessments (HTAs) are conducted in accordance with the existing agreed Irish HTA guidelines. These guidelines do not specify a formal analysis of value of information (VOI). Objective: The aim of this study was to demonstrate the benefits of using VOI analysis in decreasing decision uncertainty and to examine the viability of applying these techniques as part of the formal HTA process for reimbursement purposes within the Irish healthcare system. Method: The evaluation was conducted from the Irish health payer perspective. A lifetime model evaluated the cost effectiveness of rivaroxaban, dabigatran etexilate and enoxaparin sodium for the prophylaxis of venous thromboembolism after total hip replacement. The expected value of perfect information (EVPI) was determined directly from the probabilistic analysis (PSA). Population-level EVPI (PEVPI) was determined by scaling up the EVPI according to the decision incidence. The expected value of perfect parameter information (EVPPI) was calculated for the three model parameter subsets: probabilities, preference weights and direct medical costs. Results: In the base-case analysis, rivaroxaban dominated both dabigatran etexilate and enoxaparin sodium. PSA indicated that rivaroxaban had the highest probability of being the most cost-effective strategy over a threshold range of &U20AC;0-&U20AC;100 000 per QALY. At a threshold of &U20AC;45 000 per QALY, the probability that rivaroxaban was the most cost-effective strategy was 67%. At a threshold of &U20AC;45 000 per QALY, assuming a 10-year decision time horizon, the PEVPI was &U20AC;11.96 million and the direct medical costs subset had the highest EVPPI value (&U20AC;9.00 million at a population level). In order to decrease uncertainty, a more detailed costing
Li, Jing; Chen, Chang Xun; Shen, Yun Hui
2011-05-17
Total glucosides of paeony (TGP), compounds extracted from the roots of Paeonia lactiflora Pall, have been used as an anti-inflammatory drug for the treatment of rheumatoid arthritis (RA) in China. Inflammation plays a critical role in the development of atherosclerotic vascular disease. Risk of cardiovascular diseases is significantly higher in patients with RA than in normal population. It has a great significance to study the effects of TGP on atherosclerosis. To investigate the effects of TGP on atherosclerosis induced by excessive administration of vitamin D and cholesterol in rats and study the mechanisms involved. Atherosclerosis was induced by excessive administration of vitamin D and cholesterol in rats. TGP was intragastrically administered for 15 weeks. The serum concentrations of total cholesterol (TC), triglyceride (TG), low density lipoprotein-cholesterol (LDL-C) and high density lipoprotein-cholesterol (HDL-C) were measured by automatic biochemistry analyzer. Apolipoprotein A1 (ApoA1) and apolipoprotein B (ApoB) were determined by immunoturbidimetry method, tumor necrosis factor-alpha (TNF-alpha), interleukin-6 (IL-6) and C-reactive protein (CRP) were measured by enzyme-linked immunosorbent assay (ELISA) method. The morphological changes of aorta were observed with optical microscopy. Compared to controls, TGP significantly lowered the serum level of TC, TG, LDL-C, ApoB, TNF-alpha, IL-6 and CRP, increased the ratios of HDL-C/LDL-C and ApoA1/ApoB, decreased the intima-media thickness (IMT) of abdominal aortal wall and improved the morphological change of the aorta. TGP may attenuate the development of atherosclerotic disease. The beneficial effects are associated with its lowering blood lipids and inhibiting the expression of inflammatory cytokines. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Abdelgaied, A; Fisher, J; Jennings, L M
2018-02-01
A more robust pre-clinical wear simulation framework is required in order to simulate wider and higher ranges of activities, observed in different patient populations such as younger more active patients. Such a framework will help to understand and address the reported higher failure rates for younger and more active patients (National_Joint_Registry, 2016). The current study has developed and validated a comprehensive combined experimental and computational framework for pre-clinical wear simulation of total knee replacements (TKR). The input mechanical (elastic modulus and Poisson's ratio) and wear parameters of the moderately cross-linked ultra-high molecular weight polyethylene (UHMWPE) bearing material were independently measured from experimental studies under realistic test conditions, similar to the loading conditions found in the total knee replacements. The wear predictions from the computational wear simulation were validated against the direct experimental wear measurements for size 3 Sigma curved total knee replacements (DePuy, UK) in an independent experimental wear simulation study under three different daily activities; walking, deep squat, and stairs ascending kinematic conditions. The measured compressive mechanical properties of the moderately cross-linked UHMWPE material were more than 20% lower than that reported in the literature under tensile test conditions. The pin-on-plate wear coefficient of moderately cross-linked UHMWPE was significantly dependant of the contact stress and the degree of cross-shear at the articulating surfaces. The computational wear predictions for the TKR from the current framework were consistent and in a good agreement with the independent full TKR experimental wear simulation measurements, with 0.94 coefficient of determination of the framework. In addition, the comprehensive combined experimental and computational framework was able to explain the complex experimental wear trends from the three different daily
Directory of Open Access Journals (Sweden)
V. M. Mashkov
2012-01-01
Full Text Available Objective: to research the specific features of regenerative processes of bone tissue around implants after one-stage bilateral total hip replacement in experiment. Material and methods: 27 total hip replacement operations have been performed in 18 rabbits of breed "chinchilla" to which bipolar femoral endoprosthesis made of titanic alloy PT-38, one type-size, with friction pair metal-on-metal and neck-shaft angle 165 degrees have been implanted: total unilateral hip replacement operations have been performed in 9 animals (control group, one-stage bilateral total hip replacement operations have been performed in 9 animals (experimental group. During research they have been on radiological and clinical checking-up. After the experiment the animals had histological tests of the tissues around endoprosthesis components. Results and conclusions: After one-stage bilateral total hip replacement in early terms of research more expressed changes of bone tissue in the form of its thinning and decompaction were found around implants. One-stage bilateral total hip replacement did not essentially influence on the speed of osteogenesis around endoprothesis components in comparison with unilateral total hip replacement, so in late terms of observation in both groups the fixing of endoprothesis components did not differ.
Energy Technology Data Exchange (ETDEWEB)
Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)
2016-08-31
Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in the support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computional expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale {\\em sequential} data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.
Total glucosides of paeony prevents juxta-articular bone loss in experimental arthritis.
Wei, Chen Chao; You, Fan Tian; Mei, Li Yu; Jian, Sun; Qiang, Chen Yong
2013-07-21
Total glucosides of paeony (TGP) is a biologically active compound extracted from Paeony root. TGP has been used in rheumatoid arthritis therapy for many years. However, the mechanism by which TGP prevents bone loss has been less explored. TGP was orally administered for 3 months to New Zealand rabbits with antigen-induced arthritis (AIA). Digital x-ray knee images and bone mineral density (BMD) measurements of the subchondral knee bone were performed before sacrifice. Chondrocytes were observed using transmission electron microscopy (TEM). Histological analysis and mRNA expression of receptor activator of nuclear factor-B ligand (RANKL) and osteoprotegerin (OPG) were evaluated in joint tissues. The BMD value in TGP rabbits was significantly higher compared with that seen in the AIA model rabbits. In addition, the subchondral bone plate was almost completely preserved by TGP treatment, while there was a decrease in bone plate integrity in AIA rabbits. There was less damage to the chondrocytes of the TGP treated group. Immunohistochemical examination of the TGP group showed that a higher percentage of TGP treated chondrocytes expressed OPG as compared to the chondrocytes isolated from AIA treated animals. In contrast, RANKL expression was significantly decreased in the TGP treated group compared to the AIA group. In support of the immunohistochemistry data, the expression of RANKL mRNA was decreased and OPG mRNA expression was enhanced in the TGP group when compared to that of the AIA model group. These results reveal that TGP suppresses juxta-articular osteoporosis and prevents subchondral bone loss. The decreased RANKL and increased OPG expression seen in TGP treated animals could explain how administration of TGP maintains higher BMD.
Arechvo, Irina; Bornitz, Matthias; Lasurashvili, Nikoloz; Zahnert, Thomas; Beleites, Thomas
2012-01-01
New flexible total ossicular prostheses with an integrated microjoint can compensate for large static displacements in the reconstructed ossicular chain. When properly designed, they can mimic the function of the joints of the intact chain and ensure good vibration transfer in both straight and bent conditions. Prosthesis dislocations and extrusions are frequently observed after middle ear surgery. They are mainly related to the altered distance between the coupling points because of large static eardrum displacements. The new prostheses consist of 2 titanium shafts, which are incorporated into a silicone body. The sound transfer function and stapes footplate displacement at static loads were evaluated in human temporal bones after ossicular reconstruction using prostheses with 2 different silicones with different hardness values. The stiffness and bending characteristics of the prostheses were investigated with a quasi-static load. The sound transfer properties of the middle ears with the prostheses inserted under uncompressed conditions were comparable with those of ears with intact ossicular chains. The implant with the soft silicone had improved acoustic transfer characteristics over the implant with the hard silicone in a compressed state. In the quasi-static experiments, the minimum medial footplate displacement was found with the same implant. The bending characteristics depended on the silicone stiffness and correlated closely with the point and angle of the load incidence. The titanium prostheses with a resilient joint that were investigated in this study had good sound transfer characteristics under optimal conditions as well as in a compressed state. As a result of joint bending, the implants compensate for the small changes in length of the ossicular chain that occur under varying middle ear pressure. The implants require a stable support at the stapes footplate to function properly.
Energy Technology Data Exchange (ETDEWEB)
Santana, L.V.; Sarkis, J.E.S.; Ulrich, J.C.; Hortellani, M.A., E-mail: santana-luciana@ig.com.br, E-mail: jesarkis@ipen.br, E-mail: jculrich@ipen.br, E-mail: mahortel@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2013-07-01
This study presents the uncertainty estimate for characterization, study of homogeneity and stability study obtained in the preparation of a reference material for the determination of total mercury in fish fresh muscle tissue for proficiency testing. The test results for stability were obtained by linear regression and to homogeneity study was obtained by ANOVA-one way showed that the material is homogeneous and stable. The value of total mercury concentration with expanded uncertainty for the material was 0,294 ± 0,089 μg g{sup -}. (author)
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-01-01
Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig
International Nuclear Information System (INIS)
Thomas, R.E.
1982-03-01
An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software
Verification of uncertainty budgets
DEFF Research Database (Denmark)
Heydorn, Kaj; Madsen, B.S.
2005-01-01
, and therefore it is essential that the applicability of the overall uncertainty budget to actual measurement results be verified on the basis of current experimental data. This should be carried out by replicate analysis of samples taken in accordance with the definition of the measurand, but representing...... the full range of matrices and concentrations for which the budget is assumed to be valid. In this way the assumptions made in the uncertainty budget can be experimentally verified, both as regards sources of variability that are assumed negligible, and dominant uncertainty components. Agreement between...
DEFF Research Database (Denmark)
Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A.
2017-01-01
The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems......’ traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined...... experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such CMS would still hold all the typical limitations of optical and tactile...
Qin, Ying; Tian, Ya-ping
2011-01-01
Introduction We explored the protective effects of total glucosides of paeony (TGP) and the underlying mechanisms in carbon tetrachloride (CCl4)-induced experimental liver injury in mice. Material and methods Chronic liver damage was induced by intraperitoneal injection of CCl4 (0.5 µl/g) three times per week for 8 weeks. Mice also received 25, 50 or 100 mg/kg TGP. Liver sections were stained with haematoxylin/eosin. Serum amino transferases, lipid peroxidation and tumour necrosis factor-α (TNF-α) levels were determined using commercial assays. Quantitative real-time polymerase chain reaction was used to determine the changes in hepatic TNF-α, COX-2, iNOS and HO-1 expression. Protein levels of nitric oxide synthase, cyclooxygenase-2, haem oxygenase-1 and cytochrome P450 2E1 were determined by western blotting. Results Histological results showed that TGP improved the CCl4-induced changes in liver structure and alleviated lobular necrosis. The increases in serum protein and hepatic mRNA expression of TNF-α induced by CCl4 treatment were suppressed by TGP. Total glucosides of paeony also attenuated the increase the expression in iNOS and CYP2E1 but augmented the increase in HO-1.The mRNA and protein expression levels of inducible HO-1 increased significantly after CCl4 treatment. Conclusions Total glucosides of paeony protects hepatocytes from oxidative damage induced by CCl4. Total glucosides of paeony may achieve these effects by enhancing HO-1 expression and inhibiting the expression of proinflammatory mediators. PMID:22291795
Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A; Ontiveros, Sinué; Tosello, Guido
2017-05-16
The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems' traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such CMS would still hold all the typical limitations of optical and tactile techniques, particularly when measuring miniaturized components with complex 3D geometries and their inability to measure inner parts. To validate the presented method, the most accepted standard currently available for CT sensors, the Verein Deutscher Ingenieure/Verband Deutscher Elektrotechniker (VDI/VDE) guideline 2630-2.1 is applied. Considering the high number of influence factors in CT and their impact on the measuring result, two different techniques for surface extraction are also considered to obtain a realistic determination of the influence of data processing on uncertainty. The uncertainty assessment of a workpiece used for micro mechanical material testing is firstly used to confirm the method, due to its feasible calibration by an optical CMS. Secondly, the measurement of a miniaturized dental file with 3D complex geometry is carried out. The estimated uncertainties are eventually compared with the component's calibration and the micro manufacturing tolerances to demonstrate the suitability of the presented CT calibration procedure. The 2U/T ratios resulting from the
Energy Technology Data Exchange (ETDEWEB)
Giust, Flavio [Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland); Nordostschweizerische Kraftwerke AG, Parkstrasse 23, CH-5401 Baden (Switzerland); Grimm, Peter; Jatuff, Fabian [Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Chawla, Rakesh [Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland)
2008-07-01
Total fission rate measurements have been performed on full size BWR fuel assemblies of type SVEA-96+ in the zero power reactor PROTEUS at the Paul Scherrer Institute. This work presents comparisons of reconstructed 2D pin fission rates in two configurations, I-1A and I-2A. Both configurations contain, in the central test zone, an array of 3x3 SVEA-96+ fuel elements moderated with light water at 20 deg. C. In configuration I-2A, an L-shaped hafnium control blade (half of a real cruciform blade) is inserted adjacent to the NW corner of the central fuel element. To minimize the impact of the surroundings, all measurements were done in fuel pins belonging to the central assembly. The 3x3 experimental configuration was modeled using the core monitoring and design tools that are applied at the Leibstadt Nuclear Power Plant (KKL). These are the 2D transport code HELIOS, used for the cross-section generation, and the 3D, 2-group nodal diffusion code PRESTO-2. The exterior is represented, in the axial and radial directions, by 2-group albedos calculated at the test zone boundary using a full-core 3D MCNPX model. The calculated-to-experimental (C/E) ratios of the total fission rates have a standard deviation of 1.3% in configuration I-1A (uncontrolled) and 3.2% in configuration I-2A (controlled). Sensitivity cases are analyzed to show the impact of certain parameters on the calculated fission rate distribution and reactivity. It is shown that the relative pin fission rate is only weakly dependent on these parameters. In cases without a control blade, the pin power reconstruction methodology delivers the same level of accuracy as 2D transport calculations. On the other hand, significant deviations, that are inherent to the use of reflected geometry in the lattice calculations, are observed in cases when the control blade is inserted. (authors)
International Nuclear Information System (INIS)
Leray, O.; Hudelot, J.P.; Doederlein, C.; Vaglio-Gaudard, C.; Antony, M.; Santamarina, A.; Bernard, D.
2012-01-01
The new European material testing Jules Horowitz Reactor (JHR), currently under construction in Cadarache center (CEA France), will use LEU (20% enrichment in 235 U) fuels (U 3 Si 2 for the start up and UMoAl in the future) which are quite different from the industrial oxide fuel, for which an extensive neutronics experimental validation database has been established. The HORUS3D/N neutronics calculation scheme, used for the design and safety studies of the JHR, is being developed within the framework of a rigorous verification-numerical validation-experimental validation methodology. In this framework, the experimental VALMONT (Validation of Aluminium Molybdenum uranium fuel for Neutronics) program has been performed in the MINERVE facility of CEA Cadarache (France), in order to qualify the capability of HORUS3D/N to accurately calculate the reactivity of the JHR reactor. The MINERVE facility using the oscillation technique provides accurate measurements of reactivity effect of samples. The VALMONT program includes oscillations of samples of UAl ∞ /Al and UMo/Al with enrichments ranging from 0.2% to 20% and Uranium densities from 2.2 to 8 g/cm 3 . The geometry of the samples and the pitch of the experimental lattice ensure maximum representativeness with the neutron spectrum expected for JHR. By comparing the effect of the sample with the one of a known fuel specimen, the reactivity effect can be measured in absolute terms and be compared to computational results. Special attention was paid to the rigorous determination and reduction of the experimental uncertainties. The calculational analysis of the VALMONT results was performed with the French deterministic code APOLLO2. A comparison of the impact of the different calculation methods, data libraries and energy meshes that were tested is presented. The interpretation of the VALMONT experimental program allowed the experimental validation of JHR fuel UMoAl8 (with an enrichment of 19.75% 235 U) by the Minerve
Lindley, Dennis V
2013-01-01
Praise for the First Edition ""...a reference for everyone who is interested in knowing and handling uncertainty.""-Journal of Applied Statistics The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.
Directory of Open Access Journals (Sweden)
Laura E. Dennis
2016-12-01
Full Text Available Experimental studies have shown that sleep restriction (SR and total sleep deprivation (TSD produce increased caloric intake, greater fat consumption, and increased late-night eating. However, whether individuals show similar energy intake responses to both SR and TSD remains unknown. A total of N = 66 healthy adults (aged 21–50 years, 48.5% women, 72.7% African American participated in a within-subjects laboratory protocol to compare daily and late-night intake between one night of SR (4 h time in bed, 04:00–08:00 and one night of TSD (0 h time in bed conditions. We also examined intake responses during subsequent recovery from SR or TSD and investigated gender differences. Caloric and macronutrient intake during the day following SR and TSD were moderately to substantially consistent within individuals (Intraclass Correlation Coefficients: 0.34–0.75. During the late-night period of SR (22:00–04:00 and TSD (22:00–06:00, such consistency was slight to moderate, and participants consumed a greater percentage of calories from protein (p = 0.01 and saturated fat (p = 0.02 during SR, despite comparable caloric intake (p = 0.12. Similarly, participants consumed a greater percentage of calories from saturated fat during the day following SR than TSD (p = 0.03. Participants also consumed a greater percentage of calories from protein during recovery after TSD (p < 0.001. Caloric intake was greater in men during late-night hours and the day following sleep loss. This is the first evidence of phenotypic trait-like stability and differential vulnerability of energy balance responses to two commonly experienced types of sleep loss: our findings open the door for biomarker discovery and countermeasure development to predict and mitigate this critical health-related vulnerability.
Dennis, Laura E; Spaeth, Andrea M; Goel, Namni
2016-12-19
Experimental studies have shown that sleep restriction (SR) and total sleep deprivation (TSD) produce increased caloric intake, greater fat consumption, and increased late-night eating. However, whether individuals show similar energy intake responses to both SR and TSD remains unknown. A total of N = 66 healthy adults (aged 21-50 years, 48.5% women, 72.7% African American) participated in a within-subjects laboratory protocol to compare daily and late-night intake between one night of SR (4 h time in bed, 04:00-08:00) and one night of TSD (0 h time in bed) conditions. We also examined intake responses during subsequent recovery from SR or TSD and investigated gender differences. Caloric and macronutrient intake during the day following SR and TSD were moderately to substantially consistent within individuals (Intraclass Correlation Coefficients: 0.34-0.75). During the late-night period of SR (22:00-04:00) and TSD (22:00-06:00), such consistency was slight to moderate, and participants consumed a greater percentage of calories from protein ( p = 0.01) and saturated fat ( p = 0.02) during SR, despite comparable caloric intake ( p = 0.12). Similarly, participants consumed a greater percentage of calories from saturated fat during the day following SR than TSD ( p = 0.03). Participants also consumed a greater percentage of calories from protein during recovery after TSD ( p sleep loss. This is the first evidence of phenotypic trait-like stability and differential vulnerability of energy balance responses to two commonly experienced types of sleep loss: our findings open the door for biomarker discovery and countermeasure development to predict and mitigate this critical health-related vulnerability.
Lo Porto, A.; De Girolamo, A. M.; Santese, G.
2012-04-01
In this presentation, the experience gained in the first experimental use in the UE (as far as we know) of the concept and methodology of the "Total Maximum Daily Load" (TMDL) is reported. The TMDL is an instrument required in the Clean Water Act in U.S.A for the management of water bodies classified impaired. The TMDL calculates the maximum amount of a pollutant that a waterbody can receive and still safely meet water quality standards. It permits to establish a scientifically-based strategy on the regulation of the emission loads control according to the characteristic of the watershed/basin. The implementation of the TMDL is a process analogous to the Programmes of Measures required by the WFD, the main difference being the analysis of the linkage between loads of different sources and the water quality of water bodies. The TMDL calculation was used in this study for the Candelaro River, a temporary Italian river, classified impaired in the first steps of the implementation of the WFD. A specific approach based on the "Load Duration Curves" was adopted for the calculation of nutrient TMDLs due to the more robust approach specific for rivers featuring large changes in river flow compared to the classic approach based on average long term flow conditions. This methodology permits to establish the maximum allowable loads across to the different flow conditions of a river. This methodology enabled: to evaluate the allowable loading of a water body; to identify the sources and estimate their loads; to estimate the total loading that the water bodies can receives meeting the water quality standards established; to link the effects of point and diffuse sources on the water quality status and finally to individuate the reduction necessary for each type of sources. The loads reductions were calculated for nitrate, total phosphorus and ammonia. The simulated measures showed a remarkable ability to reduce the pollutants for the Candelaro River. The use of the Soil and
Koch, Michael
Measurement uncertainty is one of the key issues in quality assurance. It became increasingly important for analytical chemistry laboratories with the accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for the decision whether a measurement result is fit for purpose. It also delivers help for the decision whether a specification limit is exceeded or not. Estimation of measurement uncertainty often is not trivial. Several strategies have been developed for this purpose that will shortly be described in this chapter. In addition the different possibilities to take into account the uncertainty in compliance assessment are explained.
International Nuclear Information System (INIS)
Cho, Soo Yong; Park, Chan Woo
2004-01-01
Uncertainties generated from the individual measured variables have an influence on the uncertainty of the experimental result through a data reduction equation. In this study, a performance test of a single stage axial type turbine is conducted, and total-to-total efficiencies are measured at the various off-design points in the low pressure and cold state. Based on an experimental apparatus, a data reduction equation for turbine efficiency is formulated and six measured variables are selected. Codes are written to calculate the efficiency, the uncertainty of the efficiency, and the sensitivity of the efficiency uncertainty by each of the measured quantities. The influence of each measured variable on the experimental result is figured out. Results show that the largest Uncertainty Magnification Factor (UMF) value is obtained by the inlet total pressure among the six measured variables, and its value is always greater than one. The UMF values of the inlet total temperature, the torque, and the RPM are always one. The Uncertainty Percentage Contribution (UPC) of the RPM shows the lowest influence on the uncertainty of the turbine efficiency, but the UPC of the torque has the largest influence to the result among the measured variables. These results are applied to find the correct direction for meeting an uncertainty requirement of the experimental result in the planning or development phase of experiment, and also to offer ideas for preparing a measurement system in the planning phase
Uncertainty in hydrological signatures
McMillan, Hilary; Westerberg, Ida
2015-04-01
magnitude and bias, and to test how uncertainty depended on the density of the raingauge network and flow gauging station characteristics. The uncertainties were sometimes large (i.e. typical intervals of ±10-40% relative uncertainty) and highly variable between signatures. Uncertainty in the mean discharge was around ±10% for both catchments, while signatures describing the flow variability had much higher uncertainties in the Mahurangi where there was a fast rainfall-runoff response and greater high-flow rating uncertainty. Event and total runoff ratios had uncertainties from ±10% to ±15% depending on the number of rain gauges used; precipitation uncertainty was related to interpolation rather than point uncertainty. Uncertainty distributions in these signatures were skewed, and meant that differences in signature values between these catchments were often not significant. We hope that this study encourages others to use signatures in a way that is robust to data uncertainty.
DEFF Research Database (Denmark)
Diky, Vladimir; Chirico, Robert D.; Muzny, Chris
ThermoData Engine (TDE, NIST Standard Reference Databases 103a and 103b) is the first product that implements the concept of Dynamic Data Evaluation in the fields of thermophysics and thermochemistry, which includes maintaining the comprehensive and up-to-date database of experimentally measured ...... uncertainties, curve deviations, and inadequacies of the models. Uncertainty analysis shows relative contributions to the total uncertainty from each component and pair of components....
Liu, Baoding
2015-01-01
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...
Directory of Open Access Journals (Sweden)
Ruy J. Cruz Junior
2009-04-01
ômago, duodeno, pancreas, fígado, baço, intestino delgado, e colon. Uma prótese vascular foi interposta entre a veia cava infra e supra-hepática. Efeitos hemodinâmicos foram avaliados por meio de um cateter de Swan-Ganz catheter e Doppler ultrassônico. As variáveis dependentes de oxigênio, e o metabolismo da glicose e lactato foram avaliados durante todo o experimento. RESULTADOS: A evisceração abdominal esteve associada a uma redução significativa do débito cardiaco e da pressão arterial média (57% and 14%, respectivamente. Duas horas após a reconstrução vascular da veia cava inferior observou-se uma redução significativa do pH e da glicose arterial. O consumo de oxigênio se manteve inalterado nas primeiras duas horas do experimento, com um significativo aumento dos níveis séricos de lactato (1.4±0.3 vs. 7.6±0.4, p<0.05. Três animais morreram antes de 180 minutos após a reperfusão. CONCLUSÃO: A evisceração abdominal total esteve associada com graves repercussões hemodinâmicas e metabólicas sistêmicas. Estas graves alterações hemodinâmicas estam associadas, provavelmente, a combinação de vários fatores incluindo: acidose metabólica, hiperlactemia, hipoglicemia e redução do volume de sangue circulante. A cuidadosa monitorização hemodinâmica e metabólica deve ser realizada durante o MVTx experimental com a finalidade de promover um aumento das taxas de sobrevida deste complexo procedimento.
Embracing uncertainty in applied ecology.
Milner-Gulland, E J; Shea, K
2017-12-01
Applied ecologists often face uncertainty that hinders effective decision-making.Common traps that may catch the unwary are: ignoring uncertainty, acknowledging uncertainty but ploughing on, focussing on trivial uncertainties, believing your models, and unclear objectives.We integrate research insights and examples from a wide range of applied ecological fields to illustrate advances that are generally underused, but could facilitate ecologists' ability to plan and execute research to support management.Recommended approaches to avoid uncertainty traps are: embracing models, using decision theory, using models more effectively, thinking experimentally, and being realistic about uncertainty. Synthesis and applications . Applied ecologists can become more effective at informing management by using approaches that explicitly take account of uncertainty.
Directory of Open Access Journals (Sweden)
Raquel Dalla Costa
2007-07-01
Full Text Available The dental wastewater can contribute to the total daily mercury load on the environment. Factorial design of experiments is useful to analyze factors that influence this solubility. The aim of the present study was to design experiments to examine the effects ofoperational variables, humic acid, temperature, pH and contact time that may affect the solubility of total mercury as dental amalgam residue in reduction process. Based on the factorial design of experiments, the humic acid concentration was the most significant factor in this process, followed by other factors. The parameters affecting the solubility of total mercury showed that when the [HA], T and CT increases and pH decreases there is an important increase of total mercury concentration in process. For the tested conditions, thehigh total mercury concentration was obtained using the humic acid concentration = 1.0 g L-1, temperature = 35oC, pH = 4.0 and contact time = 10 days.O esgoto odontológico pode contribuir na carga total de mercúrio noambiente. O estudo do planejamento experimental é útil para analisar os fatores que influenciam nesta solubilidade. O objetivo deste trabalho foi realizar um planejamento experimental para analisar os efeitos das variáveis operacionais, ácido húmico, temperatura,pH e tempo de contato, que podem afetar a solubilidade do mercúrio total como amálgama odontológico em um processo de redução. Baseado no planejamento experimental, a concentração de ácido húmico foi o fator mais significativo no processo, seguido dos demais fatores. Os parâmetros que afetam a solubilidade do mercúrio total mostram que quando a [AH], T e TC aumentam e o pH diminui há um aumento significativo na concentração de mercúrio total no processo. A maior concentração de mercúrio total foi obtido nas condições de concentração de ácido húmico = 1,0 g L-1, temperatura = 35oC, pH = 4,0 e tempo de contato = 10 dias.
The Uncertainty of Measurement Results
Energy Technology Data Exchange (ETDEWEB)
Ambrus, A. [Hungarian Food Safety Office, Budapest (Hungary)
2009-07-15
Factors affecting the uncertainty of measurement are explained, basic statistical formulae given, and the theoretical concept explained in the context of pesticide formulation analysis. Practical guidance is provided on how to determine individual uncertainty components within an analytical procedure. An extended and comprehensive table containing the relevant mathematical/statistical expressions elucidates the relevant underlying principles. Appendix I provides a practical elaborated example on measurement uncertainty estimation, above all utilizing experimental repeatability and reproducibility laboratory data. (author)
Directory of Open Access Journals (Sweden)
Bruno Watanabe Minto
2008-02-01
Full Text Available A prótese total da articulação coxofemoral representa uma das técnicas mais aceitas, nos Estados Unidos e na Europa, para o tratamento cirúrgico da displasia coxofemoral severa em cães. Entretanto, ainda é pouco difundida e estudada no Brasil. No presente trabalho, foi utilizada prótese confeccionada no Brasil, com o objetivo de avaliar a sua aplicação, evolução pós-operatória e complicações associadas. Foram utilizados dez cães, sadios e adultos. Todos foram submetidos ao procedimento de prótese total cimentada da articulação esquerda utilizando-se um componente femoral de cromo-cobalto, cabeça fixa, e uma cúpula acetabular de polietileno de alta densidade. As avaliações clínicas e radiográficas foram realizadas no pré-operatório e aos 30, 60, 90, 120 e 150 dias após a intervenção cirúrgica. A prótese total utilizada proporcionou um bom resultado funcional ao membro operado, em 80% dos animais. As principais complicações relacionadas foram a luxação protética e a soltura do componente acetabular.Total hip prosthesis is one of the most accepted methods used in the United States and Europe for the treatment of severe hip dysplasia in dogs. However, there are few studies with the technique and the procedure is still not well established in Brazil. A prosthesis made in Brazil was used. The purpose of this study was to evaluate its application and determine the postoperative finds and complications. Ten mature healthy dogs were used. A fixed-head cemented total hip prosthesis was applied on the left leg of the dogs. A chrome-cobalt femoral stem and high density polyetylene acetabular cup was used. Clinical and radiographic evaluations were performed before surgery and again at 30, 60, 90, 120 e 150 days post operation. Proper weight bearing was noticed on the operated limb in the majority of the dogs. The main complications were the dislocation and loosening of the acetabular cup.
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
DEFF Research Database (Denmark)
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...
Bitter, T.; Khan, I.; Marriott, T.; Schreurs, B.W.; Verdonschot, Nicolaas Jacobus Joseph; Janssen, D.
2016-01-01
The modular taper junction in total hip replacements has been implicated as a possible source of wear. The finite-element (FE) method can be used to study the wear potential at the taper junction. For such simulations it is important to implement representative contact parameters, in order to
Schilling, Chris; Petrie, Dennis; Dowsey, Michelle M; Choong, Peter F; Clarke, Philip
2017-12-01
Many treatments are evaluated using quasi-experimental pre-post studies susceptible to regression to the mean (RTM). Ignoring RTM could bias the economic evaluation. We investigated this issue using the contemporary example of total knee replacement (TKR), a common treatment for end-stage osteoarthritis of the knee. Data (n = 4796) were obtained from the Osteoarthritis Initiative database, a longitudinal observational study of osteoarthritis. TKR patients (n = 184) were matched to non-TKR patients, using propensity score matching on the predicted hazard of TKR and exact matching on osteoarthritis severity and health-related quality of life (HrQoL). The economic evaluation using the matched control group was compared to the standard method of using the pre-surgery score as the control. Matched controls were identified for 56% of the primary TKRs. The matched control HrQoL trajectory showed evidence of RTM accounting for a third of the estimated QALY gains from surgery using the pre-surgery HrQoL as the control. Incorporating RTM into the economic evaluation significantly reduced the estimated cost effectiveness of TKR and increased the uncertainty. A generalized ICER bias correction factor was derived to account for RTM in cost-effectiveness analysis. RTM should be considered in economic evaluations based on quasi-experimental pre-post studies. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Spriggs, Gregory D.; Nelson, George W.
1976-01-01
An experiment was performed to measure the total isothermal (or bath) feedback coefficient of reactivity for the University of Arizona TRIGA Research Reactor (UARR). It was found that the bath coefficient was temperature-dependent and may be represented by the expression α iso .2634 x 10 -2 + .3428 x 10 -3 T - 2.471 x 10 -5 T 2 + 3.476 x 10 -7 T 3 for the temperature range of 7 C to 43 C. (author)
International Nuclear Information System (INIS)
Gonzalez del Pino, J.; Benito, M.; Randolph, M.A.; Weiland, A.J.
1990-01-01
At the present time, the toxic side effects of recipient immunosuppression cannot be justified for human non-vital organ transplantation. Total body irradiation has proven effective in ablating various bone-marrow-derived and endothelial immunocompetent cellular populations, which are responsible for immune rejection against donor tissues. Irradiation at a dose of 10 Gy was given to donor rats six days prior to heterotopic transplantation of vascularized bone allografts to host animals. Another group of recipient rats also received a short-term (sixth to fourteenth day after grafting), low dose of cyclosporine. Total body irradiation was able merely to delay rejection of grafts across a strong histocompatibility barrier for one to two weeks, when compared to nonirradiated allografts. The combination of donor irradiation plus cyclosporine did not delay the immune response, and the rejection score was similar to that observed for control allografts. Consequently, allograft viability was quickly impaired, leading to irreversible bone damage. This study suggest that 10 Gy of donor total body irradiation delivered six days prior to grafting cannot circumvent the immune rejection in a vascularized allograft of bone across a strong histocompatibility barrier
Uncertainty in relative energy resolution measurements
International Nuclear Information System (INIS)
Volkovitsky, P.; Yen, J.; Cumberland, L.
2007-01-01
We suggest a new method for the determination of the detector relative energy resolution and its uncertainty based on spline approximation of experimental spectra and a statistical bootstrapping procedure. The proposed method is applied to the spectra obtained with NaI(Tl) scintillating detectors and 137 Cs sources. The spectrum histogram with background subtracted channel-by-channel is modeled by cubic spline approximation. The relative energy resolution (which is also known as pulse height resolution and energy resolution), defined as the full-width at half-maximum (FWHM) divided by the value of peak centroid, is calculated using the intercepts of the spline curve with the line of the half peak height. The value of the peak height is determined as the point where the value of the derivative goes to zero. The residuals, which are normalized over the square root of counts in a given bin (y-coordinate), obey the standard Gaussian distribution. The values of these residuals are randomly re-assigned to a different set of y-coordinates where a new 'pseudo-experimental' data set is obtained after 'de-normalization' of the old values. For this new data set a new spline approximation is found and the whole procedure is repeated several hundred times, until the standard deviation of relative energy resolution becomes stabilized. The standard deviation of relative energy resolutions calculated for each 'pseudo-experimental' data set (bootstrap uncertainty) is considered to be an estimate for relative energy resolution uncertainty. It is also shown that the relative bootstrap uncertainty is proportional to, and generally only two to three times bigger than, 1/√(N tot ), which is the relative statistical count uncertainty (N tot is the total number of counts under the peak). The newly suggested method is also applicable to other radiation and particle detectors, not only for relative energy resolution, but also for any of the other parameters in a measured spectrum, like
DEFF Research Database (Denmark)
Nguyen, Daniel Xuyen
This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models....... This retooling addresses several shortcomings. First, the imperfect correlation of demands reconciles the sales variation observed in and across destinations. Second, since demands for the firm's output are correlated across destinations, a firm can use previously realized demands to forecast unknown demands...... in untested destinations. The option to forecast demands causes firms to delay exporting in order to gather more information about foreign demand. Third, since uncertainty is resolved after entry, many firms enter a destination and then exit after learning that they cannot profit. This prediction reconciles...
Zhang, Kexia; Zhang, Yuanyuan; Zhang, Meiyu; Gu, Liqiang; Liu, Ziying; Jia, Jingming; Chen, Xiaohui
2016-09-15
The total flavonoids from Persimmon leaves (PLF), extracted from the leaves of Diospyros kaki L. Dispryosl and Ebenaceae, is reported to possess many beneficial health effects. However, the oral bioavailability of PLF is relatively low due to its poor solubility. In the present study, the phospholipid complexes of total flavonoids from Persimmon leaves (PLF-PC) was prepared to enhance the oral bioavailability of PLF and to evaluate its antiatherosclerotic properties in atherosclerosis rats in comparison to PLF. A HPLC-MS method was developed and validated for the determination of quercetin and kaempferol in rats plasma to assess the oral bioavailability of PLF-PC. The effect of PLF (50mg/kg/d) and PLF-PC (equivalent to PLF 50mg/kg/d) on atherosclerosis rats induced by excessive administration of vitamin D (600,000IU/kg) and cholesterol (0.5g/kg/d) was assessed after orally administered for 4 weeks. The relative bioavailabilities of quercetin and kaempferol in PLF-PC relative to PLF were 242% and 337%, respectively. The levels of total cholesterol (TC), triglyceride (TG), low-density lipoprotein-cholesterol (LDL-C), high-density lipoprotein-cholesterol (HDL-C), apolipoprotein A1 (ApoA1) and apolipoprotein B (ApoB) in serum were measured by an automatic biochemistry analyzer. The morphological changes of aorta were observed with optical microscopy. According to the levels of biochemical parameters in serum and the morphological changes of aorta, PLF-PC showed better therapeutic efficacy compared to PLF. Thus, PLF-PC holds a promising potential for increasing the oral bioavailability of PLF. Moreover, PLF-PC exerts better therapeutic potential in the treatment of atherosclerotic disease than PLF. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Uncertainty quantification in resonance absorption
International Nuclear Information System (INIS)
Williams, M.M.R.
2012-01-01
We assess the uncertainty in the resonance escape probability due to uncertainty in the neutron and radiation line widths for the first 21 resonances in 232 Th as given by . Simulation, quadrature and polynomial chaos methods are used and the resonance data are assumed to obey a beta distribution. We find the uncertainty in the total resonance escape probability to be the equivalent, in reactivity, of 75–130 pcm. Also shown are pdfs of the resonance escape probability for each resonance and the variation of the uncertainty with temperature. The viability of the polynomial chaos expansion method is clearly demonstrated.
Uncertainty, joint uncertainty, and the quantum uncertainty principle
International Nuclear Information System (INIS)
Narasimhachar, Varun; Poostindouz, Alireza; Gour, Gilad
2016-01-01
Historically, the element of uncertainty in quantum mechanics has been expressed through mathematical identities called uncertainty relations, a great many of which continue to be discovered. These relations use diverse measures to quantify uncertainty (and joint uncertainty). In this paper we use operational information-theoretic principles to identify the common essence of all such measures, thereby defining measure-independent notions of uncertainty and joint uncertainty. We find that most existing entropic uncertainty relations use measures of joint uncertainty that yield themselves to a small class of operational interpretations. Our notion relaxes this restriction, revealing previously unexplored joint uncertainty measures. To illustrate the utility of our formalism, we derive an uncertainty relation based on one such new measure. We also use our formalism to gain insight into the conditions under which measure-independent uncertainty relations can be found. (paper)
International Nuclear Information System (INIS)
Elay, A.G.
1978-01-01
A method to compare calculated and experimental neutron attenuation coefficients (chi) when samples are o, different geometries but the same material is proposed. The best Σ (total removal cross section) is determined by using the fact that the logarithm of the attenuation coefficient varies linearly with respect to Σ i.e. lg chi = + asub(s) Σ, where asub(s) is a parameter that characterises all the geometrical experimental conditions of the neutron source, the sample and the relative source-to-sample geometry. In order to increase the precision, samples of different geometries but the same material were used. Values of chi are determined experimentally and asub(s) calculated for these geometries. The graph of lg chi as a function of asub(s) together with a simple fit to a straight line is sufficient to determine Σ (the slope of the line). (T.G.)
Zou, Xiao-Duan; Li, Jian-Yang; Clark, Beth Ellen; Golish, Dathon
2018-01-01
The OSIRIS-REx spacecraft, launched in September, 2016, will study the asteroid Bennu and return a sample from its surface to Earth in 2023. Bennu is a near-Earth carbonaceous asteroid which will provide insight into the formation and evolution of the solar system. OSIRIS-REx will first approach Bennu in August 2018 and will study the asteroid for approximately two years before sampling. OSIRIS-REx will develop its photometric model (including Lommel-Seelinger, ROLO, McEwen, Minnaert and Akimov) of Bennu with OCAM and OVIRS during the Detailed Survey mission phase. The model developed during this phase will be used to photometrically correct the OCAM and OVIRS data.Here we present the analysis of the error for the photometric corrections. Based on our testing data sets, we find:1. The model uncertainties is only correct when we use the covariance matrix to calculate, because the parameters are highly correlated.2. No evidence of domination of any parameter in each model.3. And both model error and the data error contribute to the final correction error comparably.4. We tested the uncertainty module on fake and real data sets, and find that model performance depends on the data coverage and data quality. These tests gave us a better understanding of how different model behave in different case.5. L-S model is more reliable than others. Maybe because the simulated data are based on L-S model. However, the test on real data (SPDIF) does show slight advantage of L-S, too. ROLO is not reliable to use when calculating bond albedo. The uncertainty of McEwen model is big in most cases. Akimov performs unphysical on SOPIE 1 data.6. Better use L-S as our default choice, this conclusion is based mainly on our test on SOPIE data and IPDIF.
Bitter, T; Khan, I; Marriott, T; Schreurs, B W; Verdonschot, N; Janssen, D
2016-03-01
The modular taper junction in total hip replacements has been implicated as a possible source of wear. The finite-element (FE) method can be used to study the wear potential at the taper junction. For such simulations it is important to implement representative contact parameters, in order to achieve accurate results. One of the main parameters in FE simulations is the coefficient of friction. However, in current literature, there is quite a wide spread in coefficient of friction values (0.15 - 0.8), which has a significant effect on the outcome of the FE simulations. Therefore, to obtain more accurate results, one should use a coefficient of friction that is determined for the specific material couple being analyzed. In this study, the static coefficient of friction was determined for two types of titanium-on-titanium stem-adaptor couples, using actual cut-outs of the final implants, to ensure that the coefficient of friction was determined consistently for the actual implant material and surface finish characteristics. Two types of tapers were examined, Biomet type-1 and 12/14, where type-1 has a polished surface finish and the 12/14 is a microgrooved system. We found static coefficients of friction of 0.19 and 0.29 for the 12/14 and type-1 stem-adaptor couples, respectively.
Shahbazi, Sara; Zamanian, Ali; Pazouki, Mohammad; Jafari, Yaser
2018-05-01
A new total biomimetic technique based on both the water uptake and degradation processes is introduced in this study to provide an interesting procedure to fabricate a bioactive and biodegradable synthetic scaffold, which has a good mechanical and structural properties. The optimization of effective parameters to scaffold fabrication was done by response surface methodology/central composite design (CCD). With this method, a synthetic scaffold was fabricated which has a uniform and open-interconnected porous structure with the largest pore size of 100-200μm. The obtained compressive ultimate strength of ~35MPa and compression modulus of 58MPa are similar to some of the trabecular bone. The pore morphology, size, and distribution of the scaffold were characterized using a scanning electron microscope and mercury porosimeter. Fourier transform infrared spectroscopy, EDAX and X-ray diffraction analyses were used to determine the chemical composition, Ca/P element ratio of mineralized microparticles, and the crystal structure of the scaffolds, respectively. The optimum biodegradable synthetic scaffold based on its raw materials of polypropylene fumarate, hydroxyethyl methacrylate and nano bioactive glass (PPF/HEMA/nanoBG) as 70/30wt/wt%, 20wt%, and 1.5wt/wt% (PHB.732/1.5) with desired porosity, pore size, and geometry were created by 4weeks immersion in SBF. This scaffold showed considerable biocompatibility in the ranging from 86 to 101% for the indirect and direct contact tests and good osteoblast cell attachment when studied with the bone-like cells. Copyright © 2018 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Balaji Zacharia
2017-07-01
Full Text Available ABSTRACT OBJECTIVES Anterior knee pain is a common problem in patients who have undergone TKR which causes dissatisfaction among them. There are Various methods for prevention of anterior knee pain following TKR .The objective of this study is to determine the effect of circumpatellar electrocautery on anterior knee pain following TKR and to compare the results with that of those patients who have undergone TKR without circumpatellar denervation. METHODS This is a cohort study conducted in Dept. of Orthopedics, Govt. Medical College, Kozhikode,kerala, 2014. Total sample size was 90.out of which 2 patients died during the study period. We lost follow up of 7 patients. Among the remaining 81 patients 42 had undergone TKR with circumpatellar denervation using electocautery and 39 without circumpatellar denervation. They were kept under follow up. Patients were followed up postoperatively at 1 month, 3 months, 6 months and at one year. At all postoperative visits, a clinical score was determined using the Knee Society score and the clinical anterior knee pain rating system described by Waters and Bentley RESULTS There is no statistically significant difference in AKP score between both groups.There is a statistically significant difference in the knee society score at 1st month(p value <.001. But there is no difference on further follow up visits . CONCLUSION There is no statistically significant difference between final outcome of patients who underwent patella denervation using circumpatellar electrocauterisation and those without denervation with respect to anterior knee pain among patients who have undergone TKR.
Energy Technology Data Exchange (ETDEWEB)
Freixa, Jordi, E-mail: jordi.freixa-terradas@upc.edu; Alfonso, Elsa de, E-mail: elsa.de.alfonso@upc.edu; Reventós, Francesc, E-mail: francesc.reventos@upc.edu
2016-08-15
Highlights: • Uncertainty of physical models are a key issue in Best estimate plus uncertainty analysis. • Estimation of uncertainties of physical models of thermal hydraulics system codes. • Comparison of CIRCÉ and FFTBM methodologies. • Simulation of reflood experiments in order to evaluate uncertainty of physical models related to the reflood scenario. - Abstract: The increasing importance of Best-Estimate Plus Uncertainty (BEPU) analyses in nuclear safety and licensing processes have lead to several international activities. The latest findings highlighted the uncertainties of physical models as one of the most controversial aspects of BEPU. This type of uncertainties is an important contributor to the total uncertainty of NPP BE calculations. Due to the complexity of estimating this uncertainty, it is often assessed solely by engineering judgment. The present study comprises a comparison of two different state-of-the-art methodologies CIRCÉ and IPREM (FFTBM) capable of quantifying the uncertainty of physical models. Similarities and differences of their results are discussed through the observation of probability distribution functions and envelope calculations. In particular, the analyzed scenario is core reflood. Experimental data from the FEBA and PERICLES test facilities is employed while the thermal hydraulic simulations are carried out with RELAP5/mod3.3. This work is undertaken under the framework of PREMIUM (Post-BEMUSE Reflood Model Input Uncertainty Methods) benchmark.
Stereo-particle image velocimetry uncertainty quantification
International Nuclear Information System (INIS)
Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J
2017-01-01
Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric
Citizen Candidates Under Uncertainty
Eguia, Jon X.
2005-01-01
In this paper we make two contributions to the growing literature on "citizen-candidate" models of representative democracy. First, we add uncertainty about the total vote count. We show that in a society with a large electorate, where the outcome of the election is uncertain and where winning candidates receive a large reward from holding office, there will be a two-candidate equilibrium and no equilibria with a single candidate. Second, we introduce a new concept of equilibrium, which we te...
Simplified propagation of standard uncertainties
International Nuclear Information System (INIS)
Shull, A.H.
1997-01-01
An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards'' uncertainties can one determine if the standard is adequate for its intended use or can one calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined or determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. A system of simplifying the calculations by dividing into subgroups of absolute and relative uncertainties is utilized. These methods also incorporate the International Standards Organization (ISO) concepts for combining systematic and random uncertainties as published in their Guide to the Expression of Measurement Uncertainty. Details of the simplified methods and examples of their use are included in the paper
Calibration Under Uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the %22true%22 deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
Uncertainty Analyses and Strategy
International Nuclear Information System (INIS)
Kevin Coppersmith
2001-01-01
The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called ''conservative'' assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the ''reasonable assurance'' approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository
Directory of Open Access Journals (Sweden)
Kátia Cylene Guimarães
2001-06-01
Full Text Available Os objetivos deste trabalho foram avaliar os efeitos de dois períodos experimentais e dois níveis de volumosos na dieta sobre a digestibilidade total e parcial da matéria seca (MS, matéria orgânica (MO, proteína bruta (PB, fibra em detergente ácido (FDA, fibra em detergente neutro (FDN, energia bruta (EB e amido. Foram utilizados quatro novilhos da raça Holandês Preto e Branco, com dois anos de idade e 340 kg de peso vivo e canulados no rúmen e duodeno. O delineamento experimental utilizado foi o quadrado latino 4 x 4, em que os animais receberam quatro tratamentos, que consistiram de dois níveis de volumosos (30 e 70% e dois períodos experimentais (14 e 21 dias. Houve efeito significativo do nível de volumoso sobre a digestibilidade ruminal de FDA e FDN, a digestibilidade intestinal e total da MS, MO, PB e amido e a digestibilidade total da EB. Não houve efeito do período experimental sobre os coeficientes de digestibilidade dos nutrientes avaliados. Conclui-se que a utilização de período experimental de 14 dias, em experimentos de digestão, é viável, quando se utiliza feno como fonte de volumoso.The objective of this research was to evaluate the effects of two experimental periods and two forage levels in the diet on the total and parcial apparent digestibility of dry matter (DM, organic matter (OM, crude protein (CP, acid detergent fiber (ADF, neutral detergent fiber (NDF, gross energy (GE and starch. Four Holstein steers, averaging two years old and 340 kg of body weight, ruminally and duodenally cannulated, were used. The experimental design was a 4 x 4 latin square and the animals received four treatments as following: two forage levels (30 and 70% and two experimental periods (14 and 21 days. There was effect of forage level on intestinal and total digestibility of DM, OM, CP and starch and on total digestibility of GE. There was effect of forage level on ruminal digestibility of ADF and NDF. There was no effect of
Evaluation of the 238U neutron total cross section
International Nuclear Information System (INIS)
Smith, A.; Poenitz, W.P.; Howerton, R.J.
1982-12-01
Experimental energy-averaged neutron total cross sections of 238 U were evaluated from 0.044 to 20.0 MeV using regorous numerical methods. The evaluated results are presented together with the associated uncertainties and correlation matrix. They indicate that this energy-averaged neutron total cross section is known to better than 1% over wide energy regions. There are somwewhat larger uncertainties at low energies (e.g., less than or equal to 0.2 MeV), near 8 MeV and above 15 MeV. The present evaluation is compard with values given in ENDF/B-V
Model uncertainty in safety assessment
International Nuclear Information System (INIS)
Pulkkinen, U.; Huovinen, T.
1996-01-01
The uncertainty analyses are an essential part of any risk assessment. Usually the uncertainties of reliability model parameter values are described by probability distributions and the uncertainty is propagated through the whole risk model. In addition to the parameter uncertainties, the assumptions behind the risk models may be based on insufficient experimental observations and the models themselves may not be exact descriptions of the phenomena under analysis. The description and quantification of this type of uncertainty, model uncertainty, is the topic of this report. The model uncertainty is characterized and some approaches to model and quantify it are discussed. The emphasis is on so called mixture models, which have been applied in PSAs. Some of the possible disadvantages of the mixture model are addressed. In addition to quantitative analyses, also qualitative analysis is discussed shortly. To illustrate the models, two simple case studies on failure intensity and human error modeling are described. In both examples, the analysis is based on simple mixture models, which are observed to apply in PSA analyses. (orig.) (36 refs., 6 figs., 2 tabs.)
Model uncertainty in safety assessment
Energy Technology Data Exchange (ETDEWEB)
Pulkkinen, U; Huovinen, T [VTT Automation, Espoo (Finland). Industrial Automation
1996-01-01
The uncertainty analyses are an essential part of any risk assessment. Usually the uncertainties of reliability model parameter values are described by probability distributions and the uncertainty is propagated through the whole risk model. In addition to the parameter uncertainties, the assumptions behind the risk models may be based on insufficient experimental observations and the models themselves may not be exact descriptions of the phenomena under analysis. The description and quantification of this type of uncertainty, model uncertainty, is the topic of this report. The model uncertainty is characterized and some approaches to model and quantify it are discussed. The emphasis is on so called mixture models, which have been applied in PSAs. Some of the possible disadvantages of the mixture model are addressed. In addition to quantitative analyses, also qualitative analysis is discussed shortly. To illustrate the models, two simple case studies on failure intensity and human error modeling are described. In both examples, the analysis is based on simple mixture models, which are observed to apply in PSA analyses. (orig.) (36 refs., 6 figs., 2 tabs.).
Bragdon, Charles R; Malchau, Henrik; Yuan, Xunhua; Perinchief, Rebecca; Kärrholm, Johan; Börlin, Niclas; Estok, Daniel M; Harris, William H
2002-07-01
The purpose of this study was to develop and test a phantom model based on actual total hip replacement (THR) components to simulate the true penetration of the femoral head resulting from polyethylene wear. This model was used to study both the accuracy and the precision of radiostereometric analysis, RSA, in measuring wear. We also used this model to evaluate optimum tantalum bead configuration for this particular cup design when used in a clinical setting. A physical model of a total hip replacement (a phantom) was constructed which could simulate progressive, three-dimensional (3-D) penetration of the femoral head into the polyethylene component of a THR. Using a coordinate measuring machine (CMM) the positioning of the femoral head using the phantom was measured to be accurate to within 7 microm. The accuracy and precision of an RSA analysis system was determined from five repeat examinations of the phantom using various experimental set-ups of the phantom. The accuracy of the radiostereometric analysis, in this optimal experimental set-up studied was 33 microm for the medial direction, 22 microm for the superior direction, 86 microm for the posterior direction and 55 microm for the resultant 3-D vector length. The corresponding precision at the 95% confidence interval of the test results for repositioning the phantom five times, measured 8.4 microm for the medial direction, 5.5 microm for the superior direction, 16.0 microm for the posterior direction, and 13.5 microm for the resultant 3-D vector length. This in vitro model is proposed as a useful tool for developing a standard for the evaluation of radiostereometric and other radiographic methods used to measure in vivo wear.
Energy Technology Data Exchange (ETDEWEB)
Martinelli, T; Panini, G C [ENEA - Dipartimento Tecnologie Intersettoriali di Base, Centro Ricerche Energia, Casaccia (Italy); Amoroso, A [Ricercatore Ospite (Italy)
1989-11-15
Information about systematic errors are not given In EXFOR, the data base of nuclear experimental measurements: their assessment is committed to the ability of the evaluator. A tool Is needed which performs this task in a fully automatic way or, at least, gives a valuable aid. The expert system ERESYE has been implemented for investigating the feasibility of an automatic evaluation of the systematic errors in the experiments. The features of the project which led to the implementation of the system are presented. (author)
International Nuclear Information System (INIS)
Landsberg, P.T.
1990-01-01
This paper explores how the quantum mechanics uncertainty relation can be considered to result from measurements. A distinction is drawn between the uncertainties obtained by scrutinising experiments and the standard deviation type of uncertainty definition used in quantum formalism. (UK)
Spence, Suzanne; Delve, Jennifer; Stamp, Elaine; Matthews, John N S; White, Martin; Adamson, Ashley J
2013-01-01
In 2005, the nutritional content of children's school lunches in England was widely criticised, leading to a major policy change in 2006. Food and nutrient-based standards were reintroduced requiring primary schools to comply by September 2008. We aimed to determine the effect of the policy on the nutritional content at lunchtime and in children's total diet. We undertook a natural experimental evaluation, analysing data from cross-sectional surveys in 12 primary schools in North East England, pre and post policy. Dietary data were collected on four consecutive days from children aged 4-7 years (n = 385 in 2003-4; n = 632 in 2008-9). We used linear mixed effect models to analyse the effects of gender, year, and lunch type on children's mean total daily intake. Both pre- and post-implementation, children who ate a school lunch consumed less sodium (mean change -128 mg, 95% CI: -183 to -73 mg) in their total diet than children eating home-packed lunches. Post-implementation, children eating school lunches consumed a lower % energy from fat (-1.8%, -2.8 to -0.9) and saturated fat (-1.0%; -1.6 to -0.5) than children eating packed lunches. Children eating school lunches post implementation consumed significantly more carbohydrate (16.4 g, 5.3 to 27.6), protein (3.6 g, 1.1 to 6.0), non-starch polysaccharides (1.5 g, 0.5 to 1.9), vitamin C (0.7 mg, 0.6 to 0.8), and folate (12.3 µg, 9.7 to 20.4) in their total diet than children eating packed lunches. Implementation of school food policy standards was associated with significant improvements in the nutritional content of school lunches; this was reflected in children's total diet. School food- and nutrient-based standards can play an important role in promoting dietary health and may contribute to tackling childhood obesity. Similar policy measures should be considered for other environments influencing children's diet.
Directory of Open Access Journals (Sweden)
Suzanne Spence
Full Text Available In 2005, the nutritional content of children's school lunches in England was widely criticised, leading to a major policy change in 2006. Food and nutrient-based standards were reintroduced requiring primary schools to comply by September 2008. We aimed to determine the effect of the policy on the nutritional content at lunchtime and in children's total diet. We undertook a natural experimental evaluation, analysing data from cross-sectional surveys in 12 primary schools in North East England, pre and post policy. Dietary data were collected on four consecutive days from children aged 4-7 years (n = 385 in 2003-4; n = 632 in 2008-9. We used linear mixed effect models to analyse the effects of gender, year, and lunch type on children's mean total daily intake. Both pre- and post-implementation, children who ate a school lunch consumed less sodium (mean change -128 mg, 95% CI: -183 to -73 mg in their total diet than children eating home-packed lunches. Post-implementation, children eating school lunches consumed a lower % energy from fat (-1.8%, -2.8 to -0.9 and saturated fat (-1.0%; -1.6 to -0.5 than children eating packed lunches. Children eating school lunches post implementation consumed significantly more carbohydrate (16.4 g, 5.3 to 27.6, protein (3.6 g, 1.1 to 6.0, non-starch polysaccharides (1.5 g, 0.5 to 1.9, vitamin C (0.7 mg, 0.6 to 0.8, and folate (12.3 µg, 9.7 to 20.4 in their total diet than children eating packed lunches. Implementation of school food policy standards was associated with significant improvements in the nutritional content of school lunches; this was reflected in children's total diet. School food- and nutrient-based standards can play an important role in promoting dietary health and may contribute to tackling childhood obesity. Similar policy measures should be considered for other environments influencing children's diet.
Pagliano, Enea; Mester, Zoltán; Meija, Juris
2013-03-01
Since its introduction a century ago, isotope dilution analysis has played a central role in developments of analytical chemistry. This method has witnessed many elaborations and developments over the years. To date, we have single, double, and even triple isotope dilution methods. In this manuscript, we summarize the conceptual aspects of isotope dilution methods and introduce the quadruple dilution and the concept of exact matching triple and quadruple dilutions. The comparison of isotope dilution methods is performed by determination of bromide ions in groundwater using novel ethyl-derivatization chemistry in conjunction with GC/MS. We show that the benefits of higher-order isotope dilution methods are countered with a greater need for careful experimental design of the isotopic blends. Just as for ID(2)MS, ID(3)MS and ID(4)MS perform best when the isotope ratio of one sample/spike blend is matched with that of a standard/spike blend (exact matching).
Report on the uncertainty methods study
International Nuclear Information System (INIS)
1998-06-01
The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes: the Pisa method (based on extrapolation from integral experiments) and four methods identifying and combining input uncertainties. Three of these, the GRS, IPSN and ENUSA methods, use subjective probability distributions, and one, the AEAT method, performs a bounding analysis. Each method has been used to calculate the uncertainty in specified parameters for the LSTF SB-CL-18 5% cold leg small break LOCA experiment in the ROSA-IV Large Scale Test Facility (LSTF). The uncertainty analysis was conducted essentially blind and the participants did not use experimental measurements from the test as input apart from initial and boundary conditions. Participants calculated uncertainty ranges for experimental parameters including pressurizer pressure, primary circuit inventory and clad temperature (at a specified position) as functions of time
Uncertainty analysis of a nondestructive radioassay system for transuranic waste
International Nuclear Information System (INIS)
Harker, Y.D.; Blackwood, L.G.; Meachum, T.R.; Yoon, W.Y.
1996-01-01
Radioassay of transuranic waste in 207 liter drums currently stored at the Idaho National Engineering Laboratory is achieved using a Passive Active Neutron (PAN) nondestructive assay system. In order to meet data quality assurance requirements for shipping and eventual permanent storage of these drums at the Waste Isolation Pilot Plant in Carlsbad, New Mexico, the total uncertainty of the PAN system measurements must be assessed. In particular, the uncertainty calculations are required to include the effects of variations in waste matrix parameters and related variables on the final measurement results. Because of the complexities involved in introducing waste matrix parameter effects into the uncertainty calculations, standard methods of analysis (e.g., experimentation followed by propagation of errors) could not be implemented. Instead, a modified statistical sampling and verification approach was developed. In this modified approach the total performance of the PAN system is simulated using computer models of the assay system and the resultant output is compared with the known input to assess the total uncertainty. This paper describes the simulation process and illustrates its application to waste comprised of weapons grade plutonium-contaminated graphite molds
International Nuclear Information System (INIS)
Lamproglou, Ioannis; Magdelenat, Henri; Boisserie, Gilbert; Baillet, Francois; Mayo, Willy; Fessi, Hatem; Puisieux, Francis; Perderau, Bernard; Colas-Linhart, Nicole; Delattre, Jean-Yves
1998-01-01
Purpose: To develop an experimental model of acute encephalopathy following total body irradiation in rats and to define the therapeutic effect of liposome-entrapped Cu/Zn superoxide dismutase. Methods and Materials: A total of 120 4-month-old rats received 4.5 Gy total body irradiation (TBI) while 120 rats received sham irradiation. A behavioral study based on a conditioning test of negative reinforcement, the one-way avoidance test, was performed 5 hours before irradiation and repeated the following days. Subcutaneous treatment was started 1 hour after irradiation and repeated daily for 2 weeks. In both the irradiated and sham group, three subgroups were defined according to the treatment received: liposome-entrapped Cu/Zn superoxide dismutase (0.5 mg/kg), liposomes only, normal saline. Results: This work comprised two consecutive studies. In study A (90 rats) the one-way avoidance test was administered daily from day 0 to day 4 with a recall session at day 14. In study B (validation phase in 150 rats) the behavioral test was performed only from day 0 to day 6. Before irradiation, all rats showed a similar behavioral response. Study A (6 groups of 15 rats): Following TBI, irradiated rats treated with liposomes only or saline demonstrated a significant delay in learning the one-way avoidance test in comparison with sham-irradiated rats (0.05 < p <0.001 depending upon the day of evaluation and the subgroup type). In contrast, irradiated rats treated with liposome-entrapped Cu/Zn superoxide dismutase did not differ from sham-irradiated rats. Study B (6 groups of 25 rats): The results were the same as those in study A, demonstrating a significant delay in the learning of the test in the liposome and saline-treated irradiated rats in comparison with sham-irradiated rats (0.02 < p < 0.001). The irradiated rats, treated with liposome-entrapped Cu/Zn superoxide dismutase did not differ from the sham-irradiated controls. Conclusion: This study indicates that a relatively
The uncertainties in estimating measurement uncertainties
International Nuclear Information System (INIS)
Clark, J.P.; Shull, A.H.
1994-01-01
All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurements uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper will discuss the concepts of measurement, measurements errors (accuracy or bias and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, estimates of some true value plus uncertainty; and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample having a matrix that is significantly different from calibration standards matrix). Consequently, there are many sources of error involved in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurement will be reviewed in this report and recommendations made for improving measurement uncertainties
Research of relationship between uncertainty and investment
Institute of Scientific and Technical Information of China (English)
MENG Li; WANG Ding-wei
2005-01-01
This study focuses on revealing the relationship between uncertainty and investment probability through real option model involving investment critical trigger and project earning. Use of Matlab software on the experimental results showing that project earning volatility influences investment probability, led the authors to conclude that this notion is not always correct, as increasing uncertainty should have an inhibiting effect on investment, and that in certain situation, increasing uncertainty actually increases the investment probability and so, should have positive impact on investment.
Uncertainty Characterization of Reactor Vessel Fracture Toughness
International Nuclear Information System (INIS)
Li, Fei; Modarres, Mohammad
2002-01-01
To perform fracture mechanics analysis of reactor vessel, fracture toughness (K Ic ) at various temperatures would be necessary. In a best estimate approach, K Ic uncertainties resulting from both lack of sufficient knowledge and randomness in some of the variables of K Ic must be characterized. Although it may be argued that there is only one type of uncertainty, which is lack of perfect knowledge about the subject under study, as a matter of practice K Ic uncertainties can be divided into two types: aleatory and epistemic. Aleatory uncertainty is related to uncertainty that is very difficult to reduce, if not impossible; epistemic uncertainty, on the other hand, can be practically reduced. Distinction between aleatory and epistemic uncertainties facilitates decision-making under uncertainty and allows for proper propagation of uncertainties in the computation process. Typically, epistemic uncertainties representing, for example, parameters of a model are sampled (to generate a 'snapshot', single-value of the parameters), but the totality of aleatory uncertainties is carried through the calculation as available. In this paper a description of an approach to account for these two types of uncertainties associated with K Ic has been provided. (authors)
On the relationship between micro and macro correlations in nuclear measurement uncertainties
International Nuclear Information System (INIS)
Smith, D.L.
1987-01-01
Consideration is given to the propagation of micro correlations between the component experimental errors (corresponding to diverse attributes of the measurement process) through to the macro correlations between the total errors in the final derived experimental values. Whenever certain micro correlations cannot be precisely specified, the macro correlations must also be uncertain. However, on the basis of fundamental principles from mathematical statistics, it is shown that these uncertainties in the macro correlations can be substantially smaller than the individual uncertainties for specific micro correlations, provided that the number of distinct attributes contributing to the total experimental error is reasonably large. Furthermore, the resulting macro correlations are shown to be approximately normally distributed regardless of teh distributions assumed for the micro correlations. Examples are provided to demonstrate these concepts and to illustrate their relevance to experimental nuclear research. (orig.)
International Nuclear Information System (INIS)
Berezin, F.N.; Kisurin, V.A.; Nemets, O.F.; Ofengenden, R.G.; Pugach, V.M.; Pavlenko, Yu.N.; Patlan', Yu.V.; Savrasov, S.S.
1981-01-01
Experimental technique for investigation of three-particle nuclear reactions in kinematically total experiments is described. The technique provides the storage of one-dimensional and two- dimensional energy spectra from several detectors. A block diagram of the measuring system, using this technique, is presented. The measuring system consists of analog equipment for rapid-slow coincidences and of a two-processor complex on the base of the M-400 computer with a general bus. Application of a two-processor complex, each computer of which has a possibility of direct access to memory of another computer, permits to separate functions of data collection and data operational presentation and to perform necessary physical calculations. Software of the measuring complex which includes programs written using the ASSEMBLER language for the first computer and functional programs written using the BASIC language for the second computer, is considered. Software of the first computer includes the DISPETCHER dialog control program, driver package for control of external devices, of applied program package and system modules. The technique, described, is tested in experiment on investigation of d+ 10 B→α+α+α three- particle reaction at deutron energy of 13.6 MeV. The two-dimensional energy spectrum reaction obtained with the help of the technique described is presented [ru
Uncertainty in social dilemmas
Kwaadsteniet, Erik Willem de
2007-01-01
This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size uncertainty). Several researchers have therefore asked themselves the question as to how such uncertainty influences people’s choice behavior. These researchers have repeatedly concluded that uncertainty...
Nuclear Physical Uncertainties in Modeling X-Ray Bursts
Regis, Eric; Amthor, A. Matthew
2017-09-01
Type I x-ray bursts occur when a neutron star accretes material from the surface of another star in a compact binary star system. For certain accretion rates and material compositions, much of the nuclear material is burned in short, explosive bursts. Using a one-dimensional stellar model, Kepler, and a comprehensive nuclear reaction rate library, ReacLib, we have simulated chains of type I x-ray bursts. Unfortunately, there are large remaining uncertainties in the nuclear reaction rates involved, since many of the isotopes reacting are unstable and have not yet been studied experimentally. Some individual reactions, when varied within their estimated uncertainty, alter the light curves dramatically. This limits our ability to understand the structure of the neutron star. Previous studies have looked at the effects of individual reaction rate uncertainties. We have applied a Monte Carlo method ``-simultaneously varying a set of reaction rates'' -in order to probe the expected uncertainty in x-ray burst behaviour due to the total uncertainty in all nuclear reaction rates. Furthermore, we aim to discover any nonlinear effects due to the coupling between different reaction rates. Early results show clear non-linear effects. This research was made possible by NSF-DUE Grant 1317446, BUScholars Program.
Propagation of dynamic measurement uncertainty
International Nuclear Information System (INIS)
Hessling, J P
2011-01-01
The time-dependent measurement uncertainty has been evaluated in a number of recent publications, starting from a known uncertain dynamic model. This could be defined as the 'downward' propagation of uncertainty from the model to the targeted measurement. The propagation of uncertainty 'upward' from the calibration experiment to a dynamic model traditionally belongs to system identification. The use of different representations (time, frequency, etc) is ubiquitous in dynamic measurement analyses. An expression of uncertainty in dynamic measurements is formulated for the first time in this paper independent of representation, joining upward as well as downward propagation. For applications in metrology, the high quality of the characterization may be prohibitive for any reasonably large and robust model to pass the whiteness test. This test is therefore relaxed by not directly requiring small systematic model errors in comparison to the randomness of the characterization. Instead, the systematic error of the dynamic model is propagated to the uncertainty of the measurand, analogously but differently to how stochastic contributions are propagated. The pass criterion of the model is thereby transferred from the identification to acceptance of the total accumulated uncertainty of the measurand. This increases the relevance of the test of the model as it relates to its final use rather than the quality of the calibration. The propagation of uncertainty hence includes the propagation of systematic model errors. For illustration, the 'upward' propagation of uncertainty is applied to determine if an appliance box is damaged in an earthquake experiment. In this case, relaxation of the whiteness test was required to reach a conclusive result
On the uncertainty principle. V
International Nuclear Information System (INIS)
Halpern, O.
1976-01-01
The treatment of ideal experiments connected with the uncertainty principle is continued. The author analyzes successively measurements of momentum and position, and discusses the common reason why the results in all cases differ from the conventional ones. A similar difference exists for the measurement of field strengths. The interpretation given by Weizsaecker, who tried to interpret Bohr's complementarity principle by introducing a multi-valued logic is analyzed. The treatment of the uncertainty principle ΔE Δt is deferred to a later paper as is the interpretation of the method of variation of constants. Every ideal experiment discussed shows various lower limits for the value of the uncertainty product which limits depend on the experimental arrangement and are always (considerably) larger than h. (Auth.)
Huang, Xue-Tao; Wang, Bin; Zhang, Wen-Hua; Peng, Man-Qiang; Lin, Ding
2018-01-01
Total glucosides of paeony (TGP) are active components extracted from the roots of Paeonia lactiflora Pall. In this study, we investigated the role and mechanisms of TGP in experimental autoimmune uveitis (EAU) model of mice. The C57BL/6 mice were randomly divided into three groups: sham group, EAU-control group, and EAU-TGP group. Clinical score of images of the eye fundus were taken on 7, 14, 21, and 28 days after induction of EAU. The concentrations of proinflammatory cytokines in intraocular fluid were measured at 14 days after EAU induction with the use of a multiplex assay system. Flow cytometry was used to analyze the frequency of CD4+, CD8+, interferon-gamma (IFN-γ), and CD4+/CD8+ ratio in spleen and lymph nodes. Western blotting was used to measure expressions of mitogen-activated protein kinase (MAPK) pathway-related proteins in retina. Clinical scores for uveitis were lower in TGP-treated EAU mice than those without TGP treatment. Importantly, the concentrations of cytokines induced by T-helper 1 (Th1) and T-helper 2 (Th2) cells in intraocular fluid were reduced in EAU mice treated with TGP. Furthermore, the frequency of CD4+, IFN-γ, and CD4+/CD8+ ratio was decreased and the frequency of CD8+ was increased in spleen and lymph nodes of mice treated with TGP. The anti-inflammatory effects of TGP were mediated by inhibiting the MAPK signaling pathways. Our results showed that TGP suppressed uveitis in mice via the inhibition of Th1 and Th2 cell function. Thus, TGP may be a promising therapeutic strategy for uveitis, as well as other ocular inflammatory diseases.
International Nuclear Information System (INIS)
Martino, Jacques.
1977-01-01
The method comparing μ+ and μ - lifetimes is shown to be a possible interesting method for measuring the capture rate of μ - in liquid hydrogen. The first part of the report is dealing with a study of the initial state of the μp system and a calculation of the capture rate, then with the μp system interaction with impurities, and the interest lying in a comparative study of neutron capture. The second part is dealing with the experimental unit: target, electron counters and neutron detectors. The total capture rates in two lithium isotopes were obtained by the method. The values obtained are 4678 +- 104 s -1 with 6 Li and 2260 +- 104s -1 for 7 Li. The interest of the method envisaged lies in the fact that it involves only time measurements on the nuon decay electrons and positrons. No precise knowledge of the neutron detector efficiency is required (an essential limitation to previous measurements). The 3% accuracy obtained for the capture rate is an important improvement. The initial condition investigation allowed the permissible impurity contaminations in liquid hydrogen to be determined, in the event of a measurement beginning 1.5 μs after the end of the beam pulse. Nitrogen and rare gases must not, overcome 8.10 -9 concentration (Pd filter). The deuterium concentration must be lower than 3.10 -6 (use of very pure hydrogen 'protium'). The system measured is the pμp molecule the event of an ortho-para transition of this molecule being badly known, a measurement of the time distribution of the capture neutrons makes the phenomenon clearer, without the neutron detector efficiency being accurately known [fr
Summary of existing uncertainty methods
International Nuclear Information System (INIS)
Glaeser, Horst
2013-01-01
A summary of existing and most used uncertainty methods is presented, and the main features are compared. One of these methods is the order statistics method based on Wilks' formula. It is applied in safety research as well as in licensing. This method has been first proposed by GRS for use in deterministic safety analysis, and is now used by many organisations world-wide. Its advantage is that the number of potential uncertain input and output parameters is not limited to a small number. Such a limitation was necessary for the first demonstration of the Code Scaling Applicability Uncertainty Method (CSAU) by the United States Regulatory Commission (USNRC). They did not apply Wilks' formula in their statistical method propagating input uncertainties to obtain the uncertainty of a single output variable, like peak cladding temperature. A Phenomena Identification and Ranking Table (PIRT) was set up in order to limit the number of uncertain input parameters, and consequently, the number of calculations to be performed. Another purpose of such a PIRT process is to identify the most important physical phenomena which a computer code should be suitable to calculate. The validation of the code should be focused on the identified phenomena. Response surfaces are used in some applications replacing the computer code for performing a high number of calculations. The second well known uncertainty method is the Uncertainty Methodology Based on Accuracy Extrapolation (UMAE) and the follow-up method 'Code with the Capability of Internal Assessment of Uncertainty (CIAU)' developed by the University Pisa. Unlike the statistical approaches, the CIAU does compare experimental data with calculation results. It does not consider uncertain input parameters. Therefore, the CIAU is highly dependent on the experimental database. The accuracy gained from the comparison between experimental data and calculated results are extrapolated to obtain the uncertainty of the system code predictions
Projected uranium measurement uncertainties for the Gas Centrifuge Enrichment Plant
International Nuclear Information System (INIS)
Younkin, J.M.
1979-02-01
An analysis was made of the uncertainties associated with the measurements of the declared uranium streams in the Portsmouth Gas Centrifuge Enrichment Plant (GCEP). The total uncertainty for the GCEP is projected to be from 54 to 108 kg 235 U/year out of a measured total of 200,000 kg 235 U/year. The systematic component of uncertainty of the UF 6 streams is the largest and the dominant contributor to the total uncertainty. A possible scheme for reducing the total uncertainty is given
Experimental Uncertainty Associated with Traveling Wave Excitation
2014-09-15
center. The rotor was machined from a 3.24 mm thick, precision ground 4140 steel plate. The disk has a diameter of 300 mm and the blades are 92 mm...bladed disk,” Journal of Engineering for Gas Turbines and Power, vol. 123, pp. 940–950, 2001. [10] 40th AIAA/ASME/ SAE /ASEE Joint Propulsion Conference
The EURACOS activation experiments: preliminary uncertainty analysis
International Nuclear Information System (INIS)
Yeivin, Y.
1982-01-01
A sequence of counting rates of an irradiated sulphur pellet, r(tsub(i)), measured at different times after the end of the irradiation, are fitted to r(t)=Aexp(-lambda t)+B. A standard adjustment procedure is applied to determine the parameters A and B, their standard deviations and correlation, and chi square. It is demonstrated that if the counting-rate uncertainties are entirely due to the counting statistics, the experimental data are totally inconsistent with the ''theoretical'' model. However, assuming an additional systematic error of approximalety 1%, and eliminating a few ''bad'' data, produces a data set quite consistent with the model. The dependence of chi square on the assumed systematic error and the data elimination procedure are discussed in great detail. A review of the adjustment procedure is appended to the report
Some sources of the underestimation of evaluated cross section uncertainties
International Nuclear Information System (INIS)
Badikov, S.A.; Gai, E.V.
2003-01-01
The problem of the underestimation of evaluated cross-section uncertainties is addressed. Two basic sources of the underestimation of evaluated cross-section uncertainties - a) inconsistency between declared and observable experimental uncertainties and b) inadequacy between applied statistical models and processed experimental data - are considered. Both the sources of the underestimation are mainly a consequence of existence of the uncertainties unrecognized by experimenters. A model of a 'constant shift' is proposed for taking unrecognised experimental uncertainties into account. The model is applied for statistical analysis of the 238 U(n,f)/ 235 U(n,f) reaction cross-section ratio measurements. It is demonstrated that multiplication by sqrt(χ 2 ) as instrument for correction of underestimated evaluated cross-section uncertainties fails in case of correlated measurements. It is shown that arbitrary assignment of uncertainties and correlation in a simple least squares fit of two correlated measurements of unknown mean leads to physically incorrect evaluated results. (author)
International Nuclear Information System (INIS)
Andres, T.H.
2002-05-01
This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide,for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
Energy Technology Data Exchange (ETDEWEB)
Andres, T.H
2002-05-01
This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide,for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
Uncertainty and Cognitive Control
Directory of Open Access Journals (Sweden)
Faisal eMushtaq
2011-10-01
Full Text Available A growing trend of neuroimaging, behavioural and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty are still a matter of debate. In this article, we reviewed evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1 There is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2 There is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3 The perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; (4 Potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.
RUMINATIONS ON NDA MEASUREMENT UNCERTAINTY COMPARED TO DA UNCERTAINTY
Energy Technology Data Exchange (ETDEWEB)
Salaymeh, S.; Ashley, W.; Jeffcoat, R.
2010-06-17
It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support: criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail suitable for subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non routine situations. This is because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to developing and understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice and embracing the continuous improvement philosophy a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in non destructive assay of nuclear materials. This paper outlines the needs analysis, objectives and on-going development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community we hope this article will encourage dialog and sharing of best practice and furthermore motivate developers to revisit the treatment of measurement uncertainty.
Ruminations On NDA Measurement Uncertainty Compared TO DA Uncertainty
International Nuclear Information System (INIS)
Salaymeh, S.; Ashley, W.; Jeffcoat, R.
2010-01-01
It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support: criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail suitable for subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non routine situations. This is because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to developing and understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice and embracing the continuous improvement philosophy a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in non destructive assay of nuclear materials. This paper outlines the needs analysis, objectives and on-going development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community we hope this article will encourage dialog and sharing of best practice and furthermore motivate developers to revisit the treatment of measurement uncertainty.
Smina, T P; Nitha, B; Devasagayam, T P A; Janardhanan, K K
2017-01-01
Ganoderma lucidum total triterpenes were evaluated for its apoptosis-inducing and anti-cancer activities. Cytotoxicity and pro-apoptotic effect of total triterpenes were evaluated in human breast adenocarcinoma (MCF-7) cell line using MTT assay and DNA fragmentation analysis. Total triterpenes induced apoptosis in MCF-7 cells by down-regulating the levels of cyclin D1, Bcl-2, Bcl-xL and also by up-regulating the levels of Bax and caspase-9. Anti-carcinogenicity of total triterpenes was analysed using dimethyl benz [a] anthracene (DMBA) induced skin papilloma and mammary adenocarcinoma in Swiss albino mice and Wistar rats respectively. Topical application of 5mg, 10mg and 20mg total triterpenes reduced the incidence of skin papilloma by 62.5, 37.5 and 12.5% respectively. Incidence of the mammary tumour was also reduced significantly by 33.33, 66.67 and 16.67% in 10, 50 and 100mg/kg b.wt. total triterpenes treated animals respectively. Total triterpenes were also found to reduce the average number of tumours per animal and extended the tumour latency period in both the models. The results indicate the potential cytotoxicity and anti-cancerous activity of total triterpenes, there by opens up a path to the development of a safe and successive chemo preventive agent of natural origin. Copyright © 2016 Elsevier B.V. All rights reserved.
Results from the Application of Uncertainty Methods in the CSNI Uncertainty Methods Study (UMS)
International Nuclear Information System (INIS)
Glaeser, H.
2008-01-01
Within licensing procedures there is the incentive to replace the conservative requirements for code application by a - best estimate - concept supplemented by an uncertainty analysis to account for predictive uncertainties of code results. Methods have been developed to quantify these uncertainties. The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced -best estimate- thermal-hydraulic codes. Most of the methods identify and combine input uncertainties. The major differences between the predictions of the methods came from the choice of uncertain parameters and the quantification of the input uncertainties, i.e. the wideness of the uncertainty ranges. Therefore, suitable experimental and analytical information has to be selected to specify these uncertainty ranges or distributions. After the closure of the Uncertainty Method Study (UMS) and after the report was issued comparison calculations of experiment LSTF-SB-CL-18 were performed by University of Pisa using different versions of the RELAP 5 code. It turned out that the version used by two of the participants calculated a 170 K higher peak clad temperature compared with other versions using the same input deck. This may contribute to the differences of the upper limit of the uncertainty ranges.
Validation of Fuel Performance Uncertainty for RIA Safety Analysis
Energy Technology Data Exchange (ETDEWEB)
Park, Nam-Gyu; Yoo, Jong-Sung; Jung, Yil-Sup [KEPCO Nuclear Fuel Co., Daejeon (Korea, Republic of)
2016-10-15
To achieve this the computer code performance has to be validated based on the experimental results. And for the uncertainty quantification, important uncertainty parameters need to be selected, and combined uncertainty has to be evaluated with an acceptable statistical treatment. And important uncertainty parameters to the rod performance such as fuel enthalpy, fission gas release, cladding hoop strain etc. were chosen through the rigorous sensitivity studies. And their validity has been assessed by utilizing the experimental results, which were tested in CABRI and NSRR. Analysis results revealed that several tested rods were not bounded within combined fuel performance uncertainty. Assessment of fuel performance with an extended fuel power uncertainty on tested rods in NSRR and CABRI has been done. Analysis results showed that several tested rods were not bounded within calculated fuel performance uncertainty. This implies that the currently considered uncertainty range of the parameters is not enough to cover the fuel performance sufficiently.
Energy Technology Data Exchange (ETDEWEB)
Scapin, Marcos A.; Salvador, Vera L.R.; Cotrim, Marycel E.B.; Pires, Maria Ap. F.; Sato, Ivone M., E-mail: mascapin@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP) Sao Paulo, SP (Brazil). Centro de Quimica e Meio Ambiente
2009-07-01
Uranium silicide (U{sub 3}Si{sub 2}), 20% {sup 235}U enriched powder, is an intermetallic compound used as nuclear fuel material; that is the state-of-the-art among nuclear fuel materials used in modern research reactors. It is produced by IPEN and used as nuclear fuel of the IEA-R1 reactor (IPEN/CNEN, Sao Paulo, Brazil); U{sub 3}Si{sub 2} has 92.3%wtU and 7.7%wtSi. The qualification of this material requires chemical and physical tests such as Si and U{sub total} content, isotope ratio, impurities, density, specific surface area and particle size determination. The Si and U{sub total} determination were made by gravimetric and volumetric procedures at the Environment Chemistry Center (CQMA-IPEN/CNEN). Usually, these classical methods require a long time for analyses and are expensive. The objective of this study was to establish a fast and efficient analytical method to meet ISO/IEC 17025:2005 requirements in the Si and U{sub total} determination. The X-ray fluorescence techniques (XRF) were chosen to allow a direct and non-destructive testing, what is a principal advantage faced to other instrumental techniques, since previous chemical treatments are not necessary. In this study, the performance of the wavelength dispersive (WDXRF) and energy dispersive (EDXRF) X- ray fluorescence techniques was evaluated. Furthermore, two different sample preparation procedures, plain powdered and pressed powdered were evaluated. Statistical tools were used to evaluate the results and a comparison between these results and the conventional methods was done. (author)
Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling
Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.
2017-12-01
Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes have been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that help reduce this uncertainty best. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as `virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: Uncertainty in HETT is relatively small for early times and increases with transit times. Uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution. Introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias. Hydraulic head observations alone cannot constrain the uncertainty of HETT, however an estimate of hyporheic exchange flux proves to be more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model (`virtual reality') is then developed based on that conceptual model
Directory of Open Access Journals (Sweden)
Renato S Assad
1994-09-01
Full Text Available O implante de marcapasso epicárdíco em fetos via toracotomia é um procedimento potencialmente mais seguro e eficaz para se tratar o bloqueio AV total congênito (BAVT, quando associado à hidropsia fetal e refratário ao tratamento clínico. Este estudo foi desenvolvido com o objetivo de avaliar as características eletrofisiológicas de dois eletrodos epicárdicos através de novo modelo experimental de BAVT congênito induzido pela crioablação do nó AV. Foram aplicados, em 2 grupos de 6 fetos de ovelhas (80% da gestação, um eletrodo de rosqueamento (1,5 voltas e outro de sutura epicárdica. O BAVT foi obtido em todos os fetos, não sendo observado nenhum ritmo de escape ventricular. Os limiares de estimulação foram baixos para ambos os eletrodos, com valores inferiores para o eletrodo de rosqueamento com largura de pulso abaixo de 0,9 mseg (p 0,20 na amplitude da onda R dos 2 eletrodos. O slew rate foi significativamente maior para o grupo de fetos com eletrodo de rosqueamento (1,40 ± 0,2 versus 0,62 ± 0,2 V/seg. p=0,04. O método é simples e reprodutível para avaliação do marcapasso fetal, sendo que o eletrodo de rosqueamento representa a melhor opção, quando houver indicação de implante de marcapasso em fetos.Epicardial fetal pacing via thoracotomy has the potential for being a safer and more reliable procedure to treat congenital complete heart block (CHB associated with fetal hydrops refractory to medical therapy. To assess the acute electrophysiologic characteristics of two ventricular epicardial leads, a new experimental model of fetal heart block induced by cryosurgical ablation of the AV node without the need for fetal cardiac bypass was performed in 12 pregnant ewes at 110-115 days of gestation. A modified screw-in lead (one and a half turn was used in 6 fetal lambs and a stitch-on lead in the other 6 lambs. CHB was achieved in 100% of the fetal lambs, with no ventricular escape rate noticed in any of the lambs
... page: //medlineplus.gov/ency/article/003483.htm Total protein To use the sharing features on this page, please enable JavaScript. The total protein test measures the total amount of two classes ...
International Nuclear Information System (INIS)
Kaul, Dean C.; Egbert, Stephen D.; Woolson, William A.
2005-01-01
In order to avoid the pitfalls that so discredited DS86 and its uncertainty estimates, and to provide DS02 uncertainties that are both defensible and credible, this report not only presents the ensemble uncertainties assembled from uncertainties in individual computational elements and radiation dose components but also describes how these relate to comparisons between observed and computed quantities at critical intervals in the computational process. These comparisons include those between observed and calculated radiation free-field components, where observations include thermal- and fast-neutron activation and gamma-ray thermoluminescence, which are relevant to the estimated systematic uncertainty for DS02. The comparisons also include those between calculated and observed survivor shielding, where the observations consist of biodosimetric measurements for individual survivors, which are relevant to the estimated random uncertainty for DS02. (J.P.N.)
Bitter, T.; Khan, I.; Marriott, T.; Lovelady, E.; Verdonschot, N.J.; Janssen, D.W.
2017-01-01
Fretting corrosion at the taper interface of modular hip implants has been implicated as a possible cause of implant failure. This study was set up to gain more insight in the taper mechanics that lead to fretting corrosion. The objectives of this study therefore were (1) to select experimental
Uncertainty vs. Information (Invited)
Nearing, Grey
2017-04-01
Information theory is the branch of logic that describes how rational epistemic states evolve in the presence of empirical data (Knuth, 2005), and any logic of science is incomplete without such a theory. Developing a formal philosophy of science that recognizes this fact results in essentially trivial solutions to several longstanding problems are generally considered intractable, including: • Alleviating the need for any likelihood function or error model. • Derivation of purely logical falsification criteria for hypothesis testing. • Specification of a general quantitative method for process-level model diagnostics. More generally, I make the following arguments: 1. Model evaluation should not proceed by quantifying and/or reducing error or uncertainty, and instead should be approached as a problem of ensuring that our models contain as much information as our experimental data. I propose that the latter is the only question a scientist actually has the ability to ask. 2. Instead of building geophysical models as solutions to differential equations that represent conservation laws, we should build models as maximum entropy distributions constrained by conservation symmetries. This will allow us to derive predictive probabilities directly from first principles. Knuth, K. H. (2005) 'Lattice duality: The origin of probability and entropy', Neurocomputing, 67, pp. 245-274.
Model uncertainty and probability
International Nuclear Information System (INIS)
Parry, G.W.
1994-01-01
This paper discusses the issue of model uncertainty. The use of probability as a measure of an analyst's uncertainty as well as a means of describing random processes has caused some confusion, even though the two uses are representing different types of uncertainty with respect to modeling a system. The importance of maintaining the distinction between the two types is illustrated with a simple example
Uncertainty in artificial intelligence
Kanal, LN
1986-01-01
How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy.Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.
Uncertainties in hydrogen combustion
International Nuclear Information System (INIS)
Stamps, D.W.; Wong, C.C.; Nelson, L.S.
1988-01-01
Three important areas of hydrogen combustion with uncertainties are identified: high-temperature combustion, flame acceleration and deflagration-to-detonation transition, and aerosol resuspension during hydrogen combustion. The uncertainties associated with high-temperature combustion may affect at least three different accident scenarios: the in-cavity oxidation of combustible gases produced by core-concrete interactions, the direct containment heating hydrogen problem, and the possibility of local detonations. How these uncertainties may affect the sequence of various accident scenarios is discussed and recommendations are made to reduce these uncertainties. 40 references
Development of Property Models with Uncertainty Estimate for Process Design under Uncertainty
DEFF Research Database (Denmark)
Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens
more reliable predictions with a new and improved set of model parameters for GC (group contribution) based and CI (atom connectivity index) based models and to quantify the uncertainties in the estimated property values from a process design point-of-view. This includes: (i) parameter estimation using....... The comparison of model prediction uncertainties with reported range of measurement uncertainties is presented for the properties with related available data. The application of the developed methodology to quantify the effect of these uncertainties on the design of different unit operations (distillation column......, the developed methodology can be used to quantify the sensitivity of process design to uncertainties in property estimates; obtain rationally the risk/safety factors in process design; and identify additional experimentation needs in order to reduce most critical uncertainties....
Sensitivity/uncertainty analysis for the Hiroshima dosimetry reevaluation effort
International Nuclear Information System (INIS)
Broadhead, B.L.; Lillie, R.A.; Pace, J.V. III; Cacuci, D.G.
1987-01-01
Uncertainty estimates and cross correlations by range/survivor location have been obtained for the free-in-air (FIA) tissue kerma for the Hiroshima atomic event. These uncertainties in the FIA kerma include contributions due to various modeling parameters and the basic cross section data and are given at three ground ranges, 700, 1000 and 1500 m. The estimated uncertainties are nearly constant over the given ground ranges and are approximately 27% for the prompt neutron kerma and secondary gamma kerma and 35% for the prompt gamma kerma. The total kerma uncertainty is dominated by the secondary gamma kerma uncertainties which are in turn largely due to the modeling parameter uncertainties
Treatment and reporting of uncertainties for environmental radiation measurements
International Nuclear Information System (INIS)
Colle, R.
1980-01-01
Recommendations for a practical and uniform method for treating and reporting uncertainties in environmental radiation measurements data are presented. The method requires that each reported measurement result include the value, a total propagated random uncertainty expressed as the standard deviation, and a combined overall uncertainty. The uncertainty assessment should be based on as nearly a complete assessment as possible and should include every conceivable or likely source of inaccuracy in the result. Guidelines are given for estimating random and systematic uncertainty components, and for propagating and combining them to form an overall uncertainty
Energy Technology Data Exchange (ETDEWEB)
Maschio, Celio; Risso, Fernanda V.A.; Schiozer, Denis J. [Universidade Estadual de Campinas (UNICAMP), SP (Brazil)
2008-07-01
The purpose of this work is to present a methodology of uncertainty mitigation through observed data obtained during petroleum field production. One step of the methodology consists of the uncertainty quantification through the derivative tree technique. Another step is the probability redistribution of the uncertain levels. The uncertainty quantification process through derivative tree can be unfeasible for cases with high number of attributes. In this context, for the uncertainty quantification, it is proposed the use of proxies, which are response surfaces obtained through statistical design, in order to reduce the number of simulations. Additionally, the weights of each attribute level considered in the probability redistribution are optimized. The methodology was applied in two reservoirs: a synthetic field with eight attributes and a modified real field with six critical attributes. The results have shown a good agreement among the uncertainty curves obtained through the response surface and those obtained through the simulations and significant reduction of the number of simulations by using proxies. The effect of the uncertainty reduction on the production prediction is also analyzed. (author)
Uncertainty in social dilemmas
Kwaadsteniet, Erik Willem de
2007-01-01
This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size
Uncertainty and Climate Change
Berliner, L. Mark
2003-01-01
Anthropogenic, or human-induced, climate change is a critical issue in science and in the affairs of humankind. Though the target of substantial research, the conclusions of climate change studies remain subject to numerous uncertainties. This article presents a very brief review of the basic arguments regarding anthropogenic climate change with particular emphasis on uncertainty.
International Nuclear Information System (INIS)
Depres, B.; Dossantos-Uzarralde, P.
2009-01-01
More than 150 researchers and engineers from universities and the industrial world met to discuss on the new methodologies developed around assessing uncertainty. About 20 papers were presented and the main topics were: methods to study the propagation of uncertainties, sensitivity analysis, nuclear data covariances or multi-parameter optimisation. This report gathers the contributions of CEA researchers and engineers
Conditional uncertainty principle
Gour, Gilad; Grudka, Andrzej; Horodecki, Michał; Kłobus, Waldemar; Łodyga, Justyna; Narasimhachar, Varun
2018-04-01
We develop a general operational framework that formalizes the concept of conditional uncertainty in a measure-independent fashion. Our formalism is built upon a mathematical relation which we call conditional majorization. We define conditional majorization and, for the case of classical memory, we provide its thorough characterization in terms of monotones, i.e., functions that preserve the partial order under conditional majorization. We demonstrate the application of this framework by deriving two types of memory-assisted uncertainty relations, (1) a monotone-based conditional uncertainty relation and (2) a universal measure-independent conditional uncertainty relation, both of which set a lower bound on the minimal uncertainty that Bob has about Alice's pair of incompatible measurements, conditioned on arbitrary measurement that Bob makes on his own system. We next compare the obtained relations with their existing entropic counterparts and find that they are at least independent.
Physical Uncertainty Bounds (PUB)
Energy Technology Data Exchange (ETDEWEB)
Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Measurement uncertainty and probability
Willink, Robin
2013-01-01
A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
Directory of Open Access Journals (Sweden)
Mohammad Reza Vahdad
2015-09-01
Full Text Available Background: The natural orifice transluminal endoscopic surgery (NOTES has become a commonly considered novel approach in the surgical field. The NOTES provide possibility of operation through the natural orifice and decreases the intentional puncture of the systemic organ and subsequent complications. Totally transanal laparo-endoscopic single-site proctoColectomy-Ileoanal J-Pouch (TLPC-J is a novel method in minimally invasive surgery for total colectomy. The main goal of this study is to perform this new method on an animal model, to assess probable complication and to resolve probable issues by using patients that are candidate for total colectomy. Method: Five dogs were prepared in lithotomy position. The TLPC-I procedure consists of endorectal technique with full thickness rectal dissection starting 1 cm orally from the dentate line above the peritoneal reflection and the proximal bowel was replaced into the abdominal cavity. Afterwards, the TriPort system was inserted in the anal canal and mesentrial resection of the total colon, mobilization of a distal ileal segment and intracorporeal suture of an ileal J-loop was accomplished by this system. An incision in the J-loop was conducted transanally. The J-pouch was created with an Endo-GIA® and sutured to the rectal wall. Results: All animals survived and passed stool with clear post operation situation. There was no infection in site of anastomosis. Conclusion: The TLPC-I provides the possibility of surgery without abdominal wall incision and decreases post operation complication such as pain, abdominal wound infection and wound dehiscence. This technique increases the quality of life and surgeons can discharge the patients early.
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
Procedure for statistical analysis of one-parameter discrepant experimental data
International Nuclear Information System (INIS)
Badikov, Sergey A.; Chechev, Valery P.
2012-01-01
A new, Mandel–Paule-type procedure for statistical processing of one-parameter discrepant experimental data is described. The procedure enables one to estimate a contribution of unrecognized experimental errors into the total experimental uncertainty as well as to include it in analysis. A definition of discrepant experimental data for an arbitrary number of measurements is introduced as an accompanying result. In the case of negligible unrecognized experimental errors, the procedure simply reduces to the calculation of the weighted average and its internal uncertainty. The procedure was applied to the statistical analysis of half-life experimental data; Mean half-lives for 20 actinides were calculated and results were compared to the ENSDF and DDEP evaluations. On the whole, the calculated half-lives are consistent with the ENSDF and DDEP evaluations. However, the uncertainties calculated in this work essentially exceed the ENSDF and DDEP evaluations for discrepant experimental data. This effect can be explained by adequately taking into account unrecognized experimental errors. - Highlights: ► A new statistical procedure for processing one-parametric discrepant experimental data has been presented. ► Procedure estimates a contribution of unrecognized errors in the total experimental uncertainty. ► Procedure was applied for processing half-life discrepant experimental data. ► Results of the calculations are compared to the ENSDF and DDEP evaluations.
Bitter, Thom; Khan, Imran; Marriott, Tim; Lovelady, Elaine; Verdonschot, Nico; Janssen, Dennis
2017-09-01
Fretting corrosion at the taper interface of modular hip implants has been implicated as a possible cause of implant failure. This study was set up to gain more insight in the taper mechanics that lead to fretting corrosion. The objectives of this study therefore were (1) to select experimental loading conditions to reproduce clinically relevant fretting corrosion features observed in retrieved components, (2) to develop a finite element model consistent with the fretting experiments and (3) to apply more complicated loading conditions of activities of daily living to the finite element model to study the taper mechanics. The experiments showed similar wear patterns on the taper surface as observed in retrievals. The finite element wear score based on Archard's law did not correlate well with the amount of material loss measured in the experiments. However, similar patterns were observed between the simulated micromotions and the experimental wear measurements. Although the finite element model could not be validated, the loading conditions based on activities of daily living demonstrate the importance of assembly load on the wear potential. These findings suggest that finite element models that do not incorporate geometry updates to account for wear loss may not be appropriate to predict wear volumes of taper connections.
International Nuclear Information System (INIS)
Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.
2012-01-01
Sampling based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of uncertainty is minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible as compared to the impact from sampling the epistemic uncertainties. Obviously, this process may cause high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with much smaller number of Monte Carlo histories each. The method is applied along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va to various critical assemblies and a full scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, with a high reduction of computing time by factors of the magnitude of 100. (authors)
Quantification of uncertainties of modeling and simulation
International Nuclear Information System (INIS)
Ma Zhibo; Yin Jianwei
2012-01-01
The principles of Modeling and Simulation (M and S) is interpreted by a functional relation, from which the total uncertainties of M and S are identified and sorted to three parts considered to vary along with the conceptual models' parameters. According to the idea of verification and validation, the space of the parameters is parted to verified and applied domains, uncertainties in the verified domain are quantified by comparison between numerical and standard results, and those in the applied domain are quantified by a newly developed extrapolating method. Examples are presented to demonstrate and qualify the ideas aimed to build a framework to quantify the uncertainties of M and S. (authors)
Uncertainty analysis of the FRAP code
International Nuclear Information System (INIS)
Peck, S.O.
1978-01-01
A user oriented, automated uncertainty analysis capability has been built into the Fuel Rod Analysis Program (FRAP) code and has been applied to a pressurized water reactor (PWR) fuel rod undergoing a loss-of-coolant accident (LOCA). The method of uncertainty analysis is the response surface method. The automated version significantly reduced the time required to complete the analysis and, at the same time, greatly increased the problem scope. Results of the analysis showed a significant difference in the total and relative contributions to the uncertainty of the response parameters between steady state and transient conditions
Evaluation of uncertainty of adaptive radiation therapy
International Nuclear Information System (INIS)
Garcia Molla, R.; Gomez Martin, C.; Vidueira, L.; Juan-Senabre, X.; Garcia Gomez, R.
2013-01-01
This work is part of tests to perform to its acceptance in the clinical practice. The uncertainties of adaptive radiation, and which will separate the study, can be divided into two large parts: dosimetry in the CBCT and RDI. At each stage, their uncertainties are quantified and a level of action from which it would be reasonable to adapt the plan may be obtained with the total. (Author)
Ascertaining the uncertainty relations via quantum correlations
International Nuclear Information System (INIS)
Li, Jun-Li; Du, Kun; Qiao, Cong-Feng
2014-01-01
We propose a new scheme to express the uncertainty principle in the form of inequality of the bipartite correlation functions for a given multipartite state, which provides an experimentally feasible and model-independent way to verify various uncertainty and measurement disturbance relations. By virtue of this scheme, the implementation of experimental measurement on the measurement disturbance relation to a variety of physical systems becomes practical. The inequality in turn, also imposes a constraint on the strength of correlation, i.e. it determines the maximum value of the correlation function for two-body system and a monogamy relation of the bipartite correlation functions for multipartite system. (paper)
Uncertainty Propagation in OMFIT
Smith, Sterling; Meneghini, Orso; Sung, Choongki
2017-10-01
A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
Uncertainty analysis in Monte Carlo criticality computations
International Nuclear Information System (INIS)
Qi Ao
2011-01-01
Highlights: ► Two types of uncertainty methods for k eff Monte Carlo computations are examined. ► Sampling method has the least restrictions on perturbation but computing resources. ► Analytical method is limited to small perturbation on material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k eff ) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes for criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of a substantial impact of the administrative margin of subcriticality on economics and safety of nuclear fuel cycle operations, recently increasing interests in reducing the administrative margin of subcriticality make the uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of two most popular k eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in the k eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
International Nuclear Information System (INIS)
Gonzalez del Pino, J.; Benito, M.; Randolph, M.A.; Weiland, A.J.
1990-01-01
To evaluate the effects of irradiation on heterotopically placed vascularized knee isografts, a single dose of 10 Gy of total-body irradiation was given to Lewis donor rats. Irradiation was delivered either 2 or 6 days prior to harvesting or subsequent transplantation, and evaluated at 1, 2, and 4 weeks after grafting. Irradiation caused endothelial depopulation of the graft artery, although vascular pedicle patency was maintained throughout the study. Bone graft viability and mineralization were normal. Dramatic changes in the bone marrow were seen that included an increase of its fat content (P less than 0.001), and a concomitant decrease in bone marrow-derived immunocompetent cells. These changes were more prominent in recipients of grafts from day -6 irradiated donor rats. Total-body irradiation did not prejudice the use of vascularized bone grafts, and exhibited an associated immunosuppresant effect over the vascular endothelium and bone marrow. This may be a further rational conditioning procedure to avoid recipient manipulation in vascularized bone allotransplantation
A methodology for uncertainty analysis of reference equations of state
DEFF Research Database (Denmark)
Cheung, Howard; Frutiger, Jerome; Bell, Ian H.
We present a detailed methodology for the uncertainty analysis of reference equations of state (EOS) based on Helmholtz energy. In recent years there has been an increased interest in uncertainties of property data and process models of thermal systems. In the literature there are various...... for uncertainty analysis is suggested as a tool for EOS. The uncertainties of the EOS properties are calculated from the experimental values and the EOS model structure through the parameter covariance matrix and subsequent linear error propagation. This allows reporting the uncertainty range (95% confidence...
Scheele, Christian; Pietschmann, Matthias F; Schröder, Christian; Grupp, Thomas; Holderied, Melanie; Jansson, Volmar; Müller, Peter E
2017-03-01
Unicompartmental total knee arthroplasty (UKA) is a well-established treatment option for unicondylar osteoarthritis, and generally leads to better functional results than tricompartimental total knee arthroplasty (TKA). However, revision rates of UKAs are reported as being higher; a major reason for this is aseptic loosening of the tibial component due to implant-cement-bone interface fatigue. The objective of this study was to determine the effects of trabecular bone preparation, prior to implantation of tibial UKAs, on morphological and biomechanical outcomes in a cadaver study. Cemented UKAs were performed in 18 human cadaver knees after the bone bed was cleaned using pulsed lavage (Group A), conventional brush (Group B) or no cleaning at all (Group C, control). Morphologic cement penetration and primary stability were measured. The area proportion under the tibial component without visible cement penetration was significantly higher in Group C (21.9%, SD 11.9) than in both Group A (7.1%, SD 5.8), and Group B (6.5%, SD 4.2) (P=0.007). The overall cement penetration depth did not differ between groups. However, in the posterior part, cement penetration depth was significantly higher in Group B (1.9mm, SD 0.3) than in both Group A (1.3mm, SD 0.3) and Group C (1.4mm, SD 0.3) (P=0.015). The mode of preparation did not show a substantial effect on primary stability tested under dynamic compression-shear test conditions (P=0.910). Bone preparation significantly enhances cement interdigitation. The application of a brush shows similar results compared with the application of pulsed lavage. Copyright © 2016 Elsevier B.V. All rights reserved.
Evaluating prediction uncertainty
International Nuclear Information System (INIS)
McKay, M.D.
1995-03-01
The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented
International Nuclear Information System (INIS)
Limperopoulos, G.J.
1995-01-01
This report presents an oil project valuation under uncertainty by means of two well-known financial techniques: The Capital Asset Pricing Model (CAPM) and The Black-Scholes Option Pricing Formula. CAPM gives a linear positive relationship between expected rate of return and risk but does not take into consideration the aspect of flexibility which is crucial for an irreversible investment as an oil price is. Introduction of investment decision flexibility by using real options can increase the oil project value substantially. Some simple tests for the importance of uncertainty in stock market for oil investments are performed. Uncertainty in stock returns is correlated with aggregate product market uncertainty according to Pindyck (1991). The results of the tests are not satisfactory due to the short data series but introducing two other explanatory variables the interest rate and Gross Domestic Product make the situation better. 36 refs., 18 figs., 6 tabs
Passive active neutron radioassay measurement uncertainty for combustible and glass waste matrices
International Nuclear Information System (INIS)
Blackwood, L.G.; Harker, Y.D.; Meachum, T.R.; Yoon, Woo Y.
1997-01-01
Using a modified statistical sampling and verification approach, total uncertainty of INEL's Passive Active Neutron (PAN) radioassay system was evaluated for combustible and glass content codes. Waste structure and content of 100 randomly selected drums in each the waste categories were computer modeled based on review of real-time radiography video tapes. Specific quantities of Pu were added to the drum models according to an experimental design. These drum models were then submitted to the Monte Carlo Neutron Photon code processing and subsequent calculations to produce simulated PAN system measurements. The reported Pu masses from the simulation runs were compared with the corresponding input masses. Analysis of the measurement errors produced uncertainty estimates. This paper presents results of the uncertainty calculations and compares them to previous reported results obtained for graphite waste
Uncertainties and climatic change
International Nuclear Information System (INIS)
De Gier, A.M.; Opschoor, J.B.; Van de Donk, W.B.H.J.; Hooimeijer, P.; Jepma, J.; Lelieveld, J.; Oerlemans, J.; Petersen, A.
2008-01-01
Which processes in the climate system are misunderstood? How are scientists dealing with uncertainty about climate change? What will be done with the conclusions of the recently published synthesis report of the IPCC? These and other questions were answered during the meeting 'Uncertainties and climate change' that was held on Monday 26 November 2007 at the KNAW in Amsterdam. This report is a compilation of all the presentations and provides some conclusions resulting from the discussions during this meeting. [mk] [nl
Lemaire, Maurice
2014-01-01
Science is a quest for certainty, but lack of certainty is the driving force behind all of its endeavors. This book, specifically, examines the uncertainty of technological and industrial science. Uncertainty and Mechanics studies the concepts of mechanical design in an uncertain setting and explains engineering techniques for inventing cost-effective products. Though it references practical applications, this is a book about ideas and potential advances in mechanical science.
Uncertainty: lotteries and risk
Ávalos, Eloy
2011-01-01
In this paper we develop the theory of uncertainty in a context where the risks assumed by the individual are measurable and manageable. We primarily use the definition of lottery to formulate the axioms of the individual's preferences, and its representation through the utility function von Neumann - Morgenstern. We study the expected utility theorem and its properties, the paradoxes of choice under uncertainty and finally the measures of risk aversion with monetary lotteries.
Uncertainty calculations made easier
International Nuclear Information System (INIS)
Hogenbirk, A.
1994-07-01
The results are presented of a neutron cross section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus design, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted, that at this position the attenuation of the neutron flux is about 1 l orders of magnitude. The uncertainty in the energy integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy integrated flux at the beginning of the cryostat in the no-gap-geometry, compared to an uncertainty of only 5% in the gap-geometry. Therefore, it is essential to take into account the exact geometry in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL)
Owens, Tom
2006-01-01
This article presents an interview with James Howe, author of "The Misfits" and "Totally Joe". In this interview, Howe discusses tolerance, diversity and the parallels between his own life and his literature. Howe's four books in addition to "The Misfits" and "Totally Joe" and his list of recommended books with lesbian, gay, bisexual, transgender,…
Directory of Open Access Journals (Sweden)
Ilić Zoran
2003-01-01
Full Text Available In order to establish distribution of lead in different tissues of bean seed (seed coat, endosperm, embryo depending on seed mass, treated samples (seed by different concentration of Pb-acetate: 1O-5 M, 10-3M i 2x 10-2M. Depending on seed weight the samples derived in three groups: large (715g, middle (465g and small (280g. Each sample contained the same number of seeds. Concentration was determined by atomic absorber (Unicam 929. At highest Pb-acetate concentration (2x10-2M in seed with small total mass content of Pb was 1139μg/g, white in seed of 1052μg/g; in endosperm 580,6μg/g, middle 290,2μg/g and in second group 79,4μg/g. Similar pattern shows embryo but at die lower level of accumulation. On die basis of above presented results it could be concluded that concentration of Pb-acetate solution. Largest mass seed accumulate respectively less content of Pb in endosperm and embryo. Seed coat accumulated significant die larger amount of land probably embryo, in that way protects embryo. Therefore, larger bean seed are more convenient for planting in cases of potentially contamination by 1, but probably by other metals. .
Jégoux, Franck; Goyenvalle, Eric; Cognet, Ronan; Malard, Olivier; Moreau, Francoise; Daculsi, Guy; Aguado, Eric
2009-12-15
The bone tissue engineering models used today are still a long way from any oncologic application as immediate postimplantation irradiation would decrease their osteoinductive potential. The aim of this study was to reconstruct a segmental critical size defect in a weight-bearing bone irradiated after implantation. Six white New Zealand rabbits were immediately implanted with a biomaterial associating resorbable collagen membrane EZ(R) filled and micro-macroporous biphasic calcium phosphate granules (MBCP+(R)). After a daily schedule of radiation delivery, and within 4 weeks, a total autologous bone marrow (BM) graft was injected percutaneously into the center of the implant. All the animals were sacrificed at 16 weeks. Successful osseous colonization was found to have bridged the entire length of the defects. Identical distribution of bone ingrowth and residual ceramics at the different levels of the implant suggests that the BM graft plays an osteoinductive role in the center of the defect. Periosteum-like formation was observed at the periphery, with the collagen membrane most likely playing a role. This model succeeded in bridging a large segmental defect in weight-bearing bone with immediate postimplantation fractionated radiation delivery. This has significant implications for the bone tissue engineering approach to patients with cancer-related bone defects.
International Nuclear Information System (INIS)
Buchmann, L.; Azuma, R.E.; Barnes, C.A.; Humblet, J.; Langanke, K.
1997-01-01
Because of the critical importance of the 12 C/ 16 O ratio resulting from helium burning to the later evolution of massive stars, R-matrix fits have been made to the available angular distribution data from radiative α-capture and elastic α-scattering on 12 C to estimate the total 12 C(α,γ) 16 O rate at stellar energies. Largely primary data, i.e. energy-dependent differential cross sections, are used in the analysis with all relevant partial waves being fitted simultaneously (surface fits). It is shown that while the E1 part of the reaction is well constrained by a recent experiment on the β-delayed α-particle decay of 16 N only upper limits can be placed on the E2 ground state transition which we take conservatively as S E2 (300)<140 keV b. Monte-Carlo simulations were subsequently carried out to explore what kind of future data could lead to better restrictions on SE2(300). We find that improved elastic scattering data may be the best candidate for such restrictions while improving S(300) with new radiative capture data seems to be much more difficult. (orig.)
International Nuclear Information System (INIS)
Carlos, P.
1985-06-01
The present discussion is limited to a presentation of the most recent total photonuclear absorption experiments performed with real photons at intermediate energy, and more precisely in the region of nucleon resonances. The main sources of real photons are briefly reviewed and the experimental procedures used for total photonuclear absorption cross section measurements. The main results obtained below 140 MeV photon energy as well as above 2 GeV are recalled. The experimental study of total photonuclear absorption in the nuclear resonance region (140 MeV< E<2 GeV) is still at its beginning and some results are presented
Evaluation of the uncertainty in an EBT3 film dosimetry system utilizing net optical density.
Marroquin, Elsa Y León; Herrera González, José A; Camacho López, Miguel A; Barajas, José E Villarreal; García-Garduño, Olivia A
2016-09-08
Radiochromic film has become an important tool to verify dose distributions for intensity-modulated radiotherapy (IMRT) and quality assurance (QA) procedures. A new radiochromic film model, EBT3, has recently become available, whose composition and thickness of the sensitive layer are the same as those of previous EBT2 films. However, a matte polyester layer was added to EBT3 to prevent the formation of Newton's rings. Furthermore, the symmetrical design of EBT3 allows the user to eliminate side-orientation dependence. This film and the flatbed scanner, Epson Perfection V750, form a dosimetry system whose intrinsic characteristics were studied in this work. In addition, uncertainties associated with these intrinsic characteristics and the total uncertainty of the dosimetry system were determined. The analysis of the response of the radiochromic film (net optical density) and the fitting of the experimental data to a potential function yielded an uncertainty of 2.6%, 4.3%, and 4.1% for the red, green, and blue channels, respectively. In this work, the dosimetry system presents an uncertainty in resolving the dose of 1.8% for doses greater than 0.8 Gy and less than 6 Gy for red channel. The films irradiated between 0 and 120 Gy show differences in the response when scanned in portrait or landscape mode; less uncertainty was found when using the portrait mode. The response of the film depended on the position on the bed of the scanner, contributing an uncertainty of 2% for the red, 3% for the green, and 4.5% for the blue when placing the film around the center of the bed of scanner. Furthermore, the uniformity and reproducibility radiochromic film and reproducibility of the response of the scanner contribute less than 1% to the overall uncertainty in dose. Finally, the total dose uncertainty was 3.2%, 4.9%, and 5.2% for red, green, and blue channels, respectively. The above uncertainty values were obtained by mini-mizing the contribution to the total dose uncertainty
Evaluation of the uncertainty in an EBT3 film dosimetry system utilizing net optical density
Marroquin, Elsa Y. León; Herrera González, José A.; Camacho López, Miguel A.; Barajas, José E. Villarreal
2016-01-01
Radiochromic film has become an important tool to verify dose distributions for intensity‐modulated radiotherapy (IMRT) and quality assurance (QA) procedures. A new radiochromic film model, EBT3, has recently become available, whose composition and thickness of the sensitive layer are the same as those of previous EBT2 films. However, a matte polyester layer was added to EBT3 to prevent the formation of Newton's rings. Furthermore, the symmetrical design of EBT3 allows the user to eliminate side‐orientation dependence. This film and the flatbed scanner, Epson Perfection V750, form a dosimetry system whose intrinsic characteristics were studied in this work. In addition, uncertainties associated with these intrinsic characteristics and the total uncertainty of the dosimetry system were determined. The analysis of the response of the radiochromic film (net optical density) and the fitting of the experimental data to a potential function yielded an uncertainty of 2.6%, 4.3%, and 4.1% for the red, green, and blue channels, respectively. In this work, the dosimetry system presents an uncertainty in resolving the dose of 1.8% for doses greater than 0.8 Gy and less than 6 Gy for red channel. The films irradiated between 0 and 120 Gy show differences in the response when scanned in portrait or landscape mode; less uncertainty was found when using the portrait mode. The response of the film depended on the position on the bed of the scanner, contributing an uncertainty of 2% for the red, 3% for the green, and 4.5% for the blue when placing the film around the center of the bed of scanner. Furthermore, the uniformity and reproducibility radiochromic film and reproducibility of the response of the scanner contribute less than 1% to the overall uncertainty in dose. Finally, the total dose uncertainty was 3.2%, 4.9%, and 5.2% for red, green, and blue channels, respectively. The above uncertainty values were obtained by minimizing the contribution to the total dose
Similarity and uncertainty analysis of the ALLEGRO MOX core
International Nuclear Information System (INIS)
Vrban, B.; Hascik, J.; Necas, V.; Slugen, V.
2015-01-01
The similarity and uncertainty analysis of the ESNII+ ALLEGRO MOX core has identified specific problems and challenges in the field of neutronic calculations. Similarity assessment identified 9 partly comparable experiments where only one reached ck and E values over 0.9. However the Global Integral Index G remains still low (0.75) and cannot be judge das sufficient. The total uncertainty of calculated k eff induced by XS data is according to our calculation 1.04%. The main contributors to this uncertainty are 239 Pu nubar and 238 U inelastic scattering. The additional margin from uncovered sensitivities was determined to be 0.28%. The identified low number of similar experiments prevents the use of advanced XS adjustment and bias estimation methods. More experimental data are needed and presented results may serve as a basic step in development of necessary critical assemblies. Although exact data are not presented in the paper, faster 44 energy group calculation gives almost the same results in similarity analysis in comparison to more complex 238 group calculation. Finally, it was demonstrated that TSUNAMI-IP utility can play a significant role in the future fast reactor development in Slovakia and in the Visegrad region. Clearly a further Research and Development and strong effort should be carried out in order to receive more complex methodology consisting of more plausible covariance data and related quantities. (authors)
Assessing student understanding of measurement and uncertainty
Jirungnimitsakul, S.; Wattanakasiwich, P.
2017-09-01
The objectives of this study were to develop and assess student understanding of measurement and uncertainty. A test has been adapted and translated from the Laboratory Data Analysis Instrument (LDAI) test, consists of 25 questions focused on three topics including measures of central tendency, experimental errors and uncertainties, and fitting regression lines. The test was evaluated its content validity by three physics experts in teaching physics laboratory. In the pilot study, Thai LDAI was administered to 93 freshmen enrolled in a fundamental physics laboratory course. The final draft of the test was administered to three groups—45 freshmen taking fundamental physics laboratory, 16 sophomores taking intermediated physics laboratory and 21 juniors taking advanced physics laboratory at Chiang Mai University. As results, we found that the freshmen had difficulties in experimental errors and uncertainties. Most students had problems with fitting regression lines. These results will be used to improve teaching and learning physics laboratory for physics students in the department.
International Nuclear Information System (INIS)
Wulff, W.; Boyack, B.E.; Duffey, R.B.
1988-01-01
Comparisons of results from TRAC-PF1/MOD1 code calculations with measurements from Separate Effects Tests, and published experimental data for modeling parameters have been used to determine the uncertainty ranges of code input and modeling parameters which dominate the uncertainty in predicting the Peak Clad Temperature for a postulated Large Break Loss of Coolant Accident (LBLOCA) in a four-loop Westinghouse Pressurized Water Reactor. The uncertainty ranges are used for a detailed statistical analysis to calculate the probability distribution function for the TRAC code-predicted Peak Clad Temperature, as is described in an attendant paper. Measurements from Separate Effects Tests and Integral Effects Tests have been compared with results from corresponding TRAC-PF1/MOD1 code calculations to determine globally the total uncertainty in predicting the Peak Clad Temperature for LBLOCAs. This determination is in support of the detailed statistical analysis mentioned above. The analyses presented here account for uncertainties in input parameters, in modeling and scaling, in computing and in measurements. The analyses are an important part of the work needed to implement the Code Scalability, Applicability and Uncertainty (CSAU) methodology. CSAU is needed to determine the suitability of a computer code for reactor safety analyses and the uncertainty in computer predictions. The results presented here are used to estimate the safety margin of a particular nuclear reactor power plant for a postulated accident. 25 refs., 10 figs., 11 tabs
Methodology for characterizing modeling and discretization uncertainties in computational simulation
Energy Technology Data Exchange (ETDEWEB)
ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.
2000-03-01
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
Modeling Uncertainty in Climate Change: A Multi-Model Comparison
Energy Technology Data Exchange (ETDEWEB)
Gillingham, Kenneth; Nordhaus, William; Anthoff, David; Blanford, Geoffrey J.; Bosetti, Valentina; Christensen, Peter; McJeon, Haewon C.; Reilly, J. M.; Sztorc, Paul
2015-10-01
The economics of climate change involves a vast array of uncertainties, complicating both the analysis and development of climate policy. This study presents the results of the first comprehensive study of uncertainty in climate change using multiple integrated assessment models. The study looks at model and parametric uncertainties for population, total factor productivity, and climate sensitivity and estimates the pdfs of key output variables, including CO_{2} concentrations, temperature, damages, and the social cost of carbon (SCC). One key finding is that parametric uncertainty is more important than uncertainty in model structure. Our resulting pdfs also provide insight on tail events.
Optical Model and Cross Section Uncertainties
Energy Technology Data Exchange (ETDEWEB)
Herman,M.W.; Pigni, M.T.; Dietrich, F.S.; Oblozinsky, P.
2009-10-05
Distinct minima and maxima in the neutron total cross section uncertainties were observed in model calculations using spherical optical potential. We found this oscillating structure to be a general feature of quantum mechanical wave scattering. Specifically, we analyzed neutron interaction with 56Fe from 1 keV up to 65 MeV, and investigated physical origin of the minima.We discuss their potential importance for practical applications as well as the implications for the uncertainties in total and absorption cross sections.
Uncertainty and validation. Effect of model complexity on uncertainty estimates
International Nuclear Information System (INIS)
Elert, M.
1996-09-01
In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario show that the predictions in many cases are very similar, e g in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come to focus in this study: There are large differences in the predicted soil hydrology and as a consequence also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root
Energy Technology Data Exchange (ETDEWEB)
NONE
1992-06-01
In connection with the R and D of a deep sea bottom manganese nodule mining system, the results were summarized for the total system (experimental system) carried out in fiscal 1991. In the experimental system operation section, adjustment and examination were made on the issues and problems concerning design and manufacturing over each sub system continuously from the previous year, with the detailed design of the entire experimental system compounded. In addition, as an oceanic general experimental plan, consistency was confirmed between the development of the detailed design of the experimental system and the conditions of the area of the sea where the experiment was to be performed, with the points looked over for the implementation of the experiment, and with the behavior examined of the mining machine at the time of landing on the sea bottom in the general experiment simulation. Further, taking it into consideration that the boat for the mining experiment was a type of boat for oil rig and that its steering was dependent on towing by a tugboat, examination was started on the boat steering simulation at the time of tug towing. Furthermore, detailed design was started for the experiment system on the basis of the oil rig, with examination made on the principal particulars, weight, center of gravity, etc.. (NEDO)
Dealing with exploration uncertainties
International Nuclear Information System (INIS)
Capen, E.
1992-01-01
Exploration for oil and gas should fulfill the most adventurous in their quest for excitement and surprise. This paper tries to cover that tall order. The authors will touch on the magnitude of the uncertainty (which is far greater than in most other businesses), the effects of not knowing target sizes very well, how to build uncertainty into analyses naturally, how to tie reserves and chance estimates to economics, and how to look at the portfolio effect of an exploration program. With no apologies, the authors will be using a different language for some readers - the language of uncertainty, which means probability and statistics. These tools allow one to combine largely subjective exploration information with the more analytical data from the engineering and economic side
Uncertainty and validation. Effect of model complexity on uncertainty estimates
Energy Technology Data Exchange (ETDEWEB)
Elert, M. [Kemakta Konsult AB, Stockholm (Sweden)] [ed.
1996-09-01
In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario show that the predictions in many cases are very similar, e g in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come to focus in this study: There are large differences in the predicted soil hydrology and as a consequence also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root
Uncertainty in artificial intelligence
Levitt, TS; Lemmer, JF; Shachter, RD
1990-01-01
Clearly illustrated in this volume is the current relationship between Uncertainty and AI.It has been said that research in AI revolves around five basic questions asked relative to some particular domain: What knowledge is required? How can this knowledge be acquired? How can it be represented in a system? How should this knowledge be manipulated in order to provide intelligent behavior? How can the behavior be explained? In this volume, all of these questions are addressed. From the perspective of the relationship of uncertainty to the basic questions of AI, the book divides naturally i
Sensitivity and uncertainty analysis
Cacuci, Dan G; Navon, Ionel Michael
2005-01-01
As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c
International Nuclear Information System (INIS)
Orellana Salas, A.; Melgar Perez, J.; Arrocha Acevedo, J. F.
2013-01-01
The determination of the activity to prescribe the hyperthyroid patients presented difficult consideration uncertainties. The uncertainties associated with the experimental design can exceed 20%, so it should be valued to customize activity therapy of 1 31 I. (Author)
Uncertainties in repository modeling
Energy Technology Data Exchange (ETDEWEB)
Wilson, J.R.
1996-12-31
The distant future is ver difficult to predict. Unfortunately, our regulators are being enchouraged to extend ther regulatory period form the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.
Uncertainties in repository modeling
International Nuclear Information System (INIS)
Wilson, J.R.
1996-01-01
The distant future is ver difficult to predict. Unfortunately, our regulators are being enchouraged to extend ther regulatory period form the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling
International Nuclear Information System (INIS)
Haefele, W.; Renn, O.; Erdmann, G.
1990-01-01
The notion of 'risk' is discussed in its social and technological contexts, leading to an investigation of the terms factuality, hypotheticality, uncertainty, and vagueness, and to the problems of acceptance and acceptability especially in the context of political decision finding. (DG) [de
Brzozek, Christopher; Benke, Kurt K; Zeleke, Berihun M; Abramson, Michael J; Benke, Geza
2018-03-26
Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship.
Directory of Open Access Journals (Sweden)
Christopher Brzozek
2018-03-01
Full Text Available Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship.
Conclusions on measurement uncertainty in microbiology.
Forster, Lynne I
2009-01-01
Since its first issue in 1999, testing laboratories wishing to comply with all the requirements of ISO/IEC 17025 have been collecting data for estimating uncertainty of measurement for quantitative determinations. In the microbiological field of testing, some debate has arisen as to whether uncertainty needs to be estimated for each method performed in the laboratory for each type of sample matrix tested. Queries also arise concerning the estimation of uncertainty when plate/membrane filter colony counts are below recommended method counting range limits. A selection of water samples (with low to high contamination) was tested in replicate with the associated uncertainty of measurement being estimated from the analytical results obtained. The analyses performed on the water samples included total coliforms, fecal coliforms, fecal streptococci by membrane filtration, and heterotrophic plate counts by the pour plate technique. For those samples where plate/membrane filter colony counts were > or =20, uncertainty estimates at a 95% confidence level were very similar for the methods, being estimated as 0.13, 0.14, 0.14, and 0.12, respectively. For those samples where plate/membrane filter colony counts were <20, estimated uncertainty values for each sample showed close agreement with published confidence limits established using a Poisson distribution approach.
Uncertainties in the proton lifetime
International Nuclear Information System (INIS)
Ellis, J.; Nanopoulos, D.V.; Rudaz, S.; Gaillard, M.K.
1980-04-01
We discuss the masses of the leptoquark bosons m(x) and the proton lifetime in Grand Unified Theories based principally on SU(5). It is emphasized that estimates of m(x) based on the QCD coupling and the fine structure constant are probably more reliable than those using the experimental value of sin 2 theta(w). Uncertainties in the QCD Λ parameter and the correct value of α are discussed. We estimate higher order effects on the evolution of coupling constants in a momentum space renormalization scheme. It is shown that increasing the number of generations of fermions beyond the minimal three increases m(X) by almost a factor of 2 per generation. Additional uncertainties exist for each generation of technifermions that may exist. We discuss and discount the possibility that proton decay could be 'Cabibbo-rotated' away, and a speculation that Lorentz invariance may be violated in proton decay at a detectable level. We estimate that in the absence of any substantial new physics beyond that in the minimal SU(5) model the proton lifetimes is 8 x 10 30+-2 years
Price Uncertainty in Linear Production Situations
Suijs, J.P.M.
1999-01-01
This paper analyzes linear production situations with price uncertainty, and shows that the corrresponding stochastic linear production games are totally balanced. It also shows that investment funds, where investors pool their individual capital for joint investments in financial assets, fit into
Courtney, H; Kirkland, J; Viguerie, P
1997-01-01
At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budging processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy.
Sensitivity and uncertainty analysis for fission product decay heat calculations
International Nuclear Information System (INIS)
Rebah, J.; Lee, Y.K.; Nimal, J.C.; Nimal, B.; Luneville, L.; Duchemin, B.
1994-01-01
The calculated uncertainty in decay heat due to the uncertainty in basic nuclear data given in the CEA86 Library, is presented. Uncertainties in summation calculation arise from several sources: fission product yields, half-lives and average decay energies. The correlation between basic data is taken into account. The uncertainty analysis were obtained for thermal-neutron-induced fission of U235 and Pu239 in the case of burst fission and irradiation time. The calculated decay heat in this study is compared with experimental results and with new calculation using the JEF2 Library. (from authors) 6 figs., 19 refs
A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report
Energy Technology Data Exchange (ETDEWEB)
Campos, E [Argonne National Lab. (ANL), Argonne, IL (United States); Sisterson, Douglas [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-12-01
The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. The ARM Facility currently provides data and supporting metadata (information about the data or data quality) to its users through a number of sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, the Facility relies on instrument mentors and the ARM Data Quality Office (DQO) to ensure, assess, and report measurement quality. Therefore, an easily accessible, well-articulated estimate of ARM measurement uncertainty is needed. Note that some of the instrument observations require mathematical algorithms (retrievals) to convert a measured engineering variable into a useful geophysical measurement. While those types of retrieval measurements are identified, this study does not address particular methods for retrieval uncertainty. As well, the ARM Facility also provides engineered data products, or value-added products (VAPs), based on multiple instrument measurements. This study does not include uncertainty estimates for those data products. We propose here that a total measurement uncertainty should be calculated as a function of the instrument uncertainty (calibration factors), the field uncertainty (environmental factors), and the retrieval uncertainty (algorithm factors). The study will not expand on methods for computing these uncertainties. Instead, it will focus on the practical identification, characterization, and inventory of the measurement uncertainties already available in the ARM community through the ARM instrument mentors and their ARM instrument handbooks. As a result, this study will address the first steps towards reporting ARM measurement uncertainty
Analysis of uncertainties of thermal hydraulic calculations
International Nuclear Information System (INIS)
Macek, J.; Vavrin, J.
2002-12-01
In 1993-1997 it was proposed, within OECD projects, that a common program should be set up for uncertainty analysis by a probabilistic method based on a non-parametric statistical approach for system computer codes such as RELAP, ATHLET and CATHARE and that a method should be developed for statistical analysis of experimental databases for the preparation of the input deck and statistical analysis of the output calculation results. Software for such statistical analyses would then have to be processed as individual tools independent of the computer codes used for the thermal hydraulic analysis and programs for uncertainty analysis. In this context, a method for estimation of a thermal hydraulic calculation is outlined and selected methods of statistical analysis of uncertainties are described, including methods for prediction accuracy assessment based on the discrete Fourier transformation principle. (author)
International Nuclear Information System (INIS)
Carlos, P.
1985-01-01
Experimental methods using real photon beams for measurements of total photonuclear absorption cross section σ(Tot : E/sub γ/) are recalled. Most recent σ(Tot : E/sub γ/)results for complex nuclei and in the nucleon resonance region are presented
Uncertainty in adaptive capacity
International Nuclear Information System (INIS)
Neil Adger, W.; Vincent, K.
2005-01-01
The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, and literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. (authors)
International Nuclear Information System (INIS)
Laval, Katia; Laval, Guy
2013-01-01
Like meteorology, climatology is not an exact science: climate change forecasts necessarily include a share of uncertainty. It is precisely this uncertainty which is brandished and exploited by the opponents to the global warming theory to put into question the estimations of its future consequences. Is it legitimate to predict the future using the past climate data (well documented up to 100000 years BP) or the climates of other planets, taking into account the impreciseness of the measurements and the intrinsic complexity of the Earth's machinery? How is it possible to model a so huge and interwoven system for which any exact description has become impossible? Why water and precipitations play such an important role in local and global forecasts, and how should they be treated? This book written by two physicists answers with simpleness these delicate questions in order to give anyone the possibility to build his own opinion about global warming and the need to act rapidly
Uncertainty information in climate data records from Earth observation
Merchant, Christopher J.; Paul, Frank; Popp, Thomas; Ablain, Michael; Bontemps, Sophie; Defourny, Pierre; Hollmann, Rainer; Lavergne, Thomas; Laeng, Alexandra; de Leeuw, Gerrit; Mittaz, Jonathan; Poulsen, Caroline; Povey, Adam C.; Reuter, Max; Sathyendranath, Shubha; Sandven, Stein; Sofieva, Viktoria F.; Wagner, Wolfgang
2017-07-01
The question of how to derive and present uncertainty information in climate data records (CDRs) has received sustained attention within the European Space Agency Climate Change Initiative (CCI), a programme to generate CDRs addressing a range of essential climate variables (ECVs) from satellite data. Here, we review the nature, mathematics, practicalities, and communication of uncertainty information in CDRs from Earth observations. This review paper argues that CDRs derived from satellite-based Earth observation (EO) should include rigorous uncertainty information to support the application of the data in contexts such as policy, climate modelling, and numerical weather prediction reanalysis. Uncertainty, error, and quality are distinct concepts, and the case is made that CDR products should follow international metrological norms for presenting quantified uncertainty. As a baseline for good practice, total standard uncertainty should be quantified per datum in a CDR, meaning that uncertainty estimates should clearly discriminate more and less certain data. In this case, flags for data quality should not duplicate uncertainty information, but instead describe complementary information (such as the confidence in the uncertainty estimate provided or indicators of conditions violating the retrieval assumptions). The paper discusses the many sources of error in CDRs, noting that different errors may be correlated across a wide range of timescales and space scales. Error effects that contribute negligibly to the total uncertainty in a single-satellite measurement can be the dominant sources of uncertainty in a CDR on the large space scales and long timescales that are highly relevant for some climate applications. For this reason, identifying and characterizing the relevant sources of uncertainty for CDRs is particularly challenging. The characterization of uncertainty caused by a given error effect involves assessing the magnitude of the effect, the shape of the
International Nuclear Information System (INIS)
Martens, Hans.
1991-01-01
The subject of this thesis is the uncertainty principle (UP). The UP is one of the most characteristic points of differences between quantum and classical mechanics. The starting point of this thesis is the work of Niels Bohr. Besides the discussion the work is also analyzed. For the discussion of the different aspects of the UP the formalism of Davies and Ludwig is used instead of the more commonly used formalism of Neumann and Dirac. (author). 214 refs.; 23 figs
Uncertainty in artificial intelligence
Shachter, RD; Henrion, M; Lemmer, JF
1990-01-01
This volume, like its predecessors, reflects the cutting edge of research on the automation of reasoning under uncertainty.A more pragmatic emphasis is evident, for although some papers address fundamental issues, the majority address practical issues. Topics include the relations between alternative formalisms (including possibilistic reasoning), Dempster-Shafer belief functions, non-monotonic reasoning, Bayesian and decision theoretic schemes, and new inference techniques for belief nets. New techniques are applied to important problems in medicine, vision, robotics, and natural language und
Decision Making Under Uncertainty
2010-11-01
A sound approach to rational decision making requires a decision maker to establish decision objectives, identify alternatives, and evaluate those...often violate the axioms of rationality when making decisions under uncertainty. The systematic description of such observations may lead to the...which leads to “anchoring” on the initial value. The fact that individuals have been shown to deviate from rationality when making decisions
Economic uncertainty principle?
Alexander Harin
2006-01-01
The economic principle of (hidden) uncertainty is presented. New probability formulas are offered. Examples of solutions of three types of fundamental problems are reviewed.; Principe d'incertitude économique? Le principe économique d'incertitude (cachée) est présenté. De nouvelles formules de chances sont offertes. Les exemples de solutions des trois types de problèmes fondamentaux sont reconsidérés.
Participation under Uncertainty
International Nuclear Information System (INIS)
Boudourides, Moses A.
2003-01-01
This essay reviews a number of theoretical perspectives about uncertainty and participation in the present-day knowledge-based society. After discussing the on-going reconfigurations of science, technology and society, we examine how appropriate for policy studies are various theories of social complexity. Post-normal science is such an example of a complexity-motivated approach, which justifies civic participation as a policy response to an increasing uncertainty. But there are different categories and models of uncertainties implying a variety of configurations of policy processes. A particular role in all of them is played by expertise whose democratization is an often-claimed imperative nowadays. Moreover, we discuss how different participatory arrangements are shaped into instruments of policy-making and framing regulatory processes. As participation necessitates and triggers deliberation, we proceed to examine the role and the barriers of deliberativeness. Finally, we conclude by referring to some critical views about the ultimate assumptions of recent European policy frameworks and the conceptions of civic participation and politicization that they invoke
Uncertainty analysis techniques
International Nuclear Information System (INIS)
Marivoet, J.; Saltelli, A.; Cadelli, N.
1987-01-01
The origin of the uncertainty affecting Performance Assessments, as well as their propagation to dose and risk results is discussed. The analysis is focused essentially on the uncertainties introduced by the input parameters, the values of which may range over some orders of magnitude and may be given as probability distribution function. The paper briefly reviews the existing sampling techniques used for Monte Carlo simulations and the methods for characterizing the output curves, determining their convergence and confidence limits. Annual doses, expectation values of the doses and risks are computed for a particular case of a possible repository in clay, in order to illustrate the significance of such output characteristics as the mean, the logarithmic mean and the median as well as their ratios. The report concludes that provisionally, due to its better robustness, such estimation as the 90th percentile may be substituted to the arithmetic mean for comparison of the estimated doses with acceptance criteria. In any case, the results obtained through Uncertainty Analyses must be interpreted with caution as long as input data distribution functions are not derived from experiments reasonably reproducing the situation in a well characterized repository and site
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-12-01
This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propogation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
Estimating uncertainty of inference for validation
Energy Technology Data Exchange (ETDEWEB)
Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM
2010-09-30
We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Imbedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10{sup 13}-10{sup 14} neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the
Uncertainty Quantification in High Throughput Screening ...
Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
Directory of Open Access Journals (Sweden)
Lopez Moris E
2016-06-01
Full Text Available Total thyroidectomy is a surgery that removes all the thyroid tissue from the patient. The suspect of cancer in a thyroid nodule is the most frequent indication and it is presume when previous fine needle puncture is positive or a goiter has significant volume increase or symptomes. Less frequent indications are hyperthyroidism when it is refractory to treatment with Iodine 131 or it is contraindicated, and in cases of symptomatic thyroiditis. The thyroid gland has an important anatomic relation whith the inferior laryngeal nerve and the parathyroid glands, for this reason it is imperative to perform extremely meticulous dissection to recognize each one of these elements and ensure their preservation. It is also essential to maintain strict hemostasis, in order to avoid any postoperative bleeding that could lead to a suffocating neck hematoma, feared complication that represents a surgical emergency and endangers the patient’s life.It is essential to run a formal technique, without skipping steps, and maintain prudence and patience that should rule any surgical act.
International Nuclear Information System (INIS)
Dean, N.W.
1978-01-01
New data showing that the photon-nucleon total cross section increases with energy for ν > or = 50 GeV invalidate earlier comparisons with dispersion relations. Parametrization of the data are presented and used in a new formulation of the dispersion relations, in which an assumed asymptotic behavior avoids the need for subtraction. With this form the fitted amplitude can be compared directly with the Thomson limit. The experimental uncertainties are shown to have a significant effect upon such a comparison
Methodologies of Uncertainty Propagation Calculation
International Nuclear Information System (INIS)
Chojnacki, Eric
2002-01-01
After recalling the theoretical principle and the practical difficulties of the methodologies of uncertainty propagation calculation, the author discussed how to propagate input uncertainties. He said there were two kinds of input uncertainty: - variability: uncertainty due to heterogeneity, - lack of knowledge: uncertainty due to ignorance. It was therefore necessary to use two different propagation methods. He demonstrated this in a simple example which he generalised, treating the variability uncertainty by the probability theory and the lack of knowledge uncertainty by the fuzzy theory. He cautioned, however, against the systematic use of probability theory which may lead to unjustifiable and illegitimate precise answers. Mr Chojnacki's conclusions were that the importance of distinguishing variability and lack of knowledge increased as the problem was getting more and more complex in terms of number of parameters or time steps, and that it was necessary to develop uncertainty propagation methodologies combining probability theory and fuzzy theory
LOFT uncertainty-analysis methodology
International Nuclear Information System (INIS)
Lassahn, G.D.
1983-01-01
The methodology used for uncertainty analyses of measurements in the Loss-of-Fluid Test (LOFT) nuclear-reactor-safety research program is described and compared with other methodologies established for performing uncertainty analyses
LOFT uncertainty-analysis methodology
International Nuclear Information System (INIS)
Lassahn, G.D.
1983-01-01
The methodology used for uncertainty analyses of measurements in the Loss-of-Fluid Test (LOFT) nuclear reactor safety research program is described and compared with other methodologies established for performing uncertainty analyses
Uncertainties in the Norwegian greenhouse gas emission inventory
Energy Technology Data Exchange (ETDEWEB)
Flugsrud, Ketil; Hoem, Britta
2011-11-15
The national greenhouse gas (GHG) emission inventory is compiled from estimates based on emission factors and activity data and from direct measurements by plants. All these data and parameters will contribute to the overall inventory uncertainty. The uncertainties and probability distributions of the inventory input parameters have been assessed based on available data and expert judgements.Finally, the level and trend uncertainties of the national GHG emission inventory have been estimated using Monte Carlo simulation. The methods used in the analysis correspond to an IPCC tier 2 method, as described in the IPCC Good Practice Guidance (IPCC 2000) (IPCC 2000). Analyses have been made both excluding and including the sector LULUCF (land use, land-use change and forestry). The uncertainty analysis performed in 2011 is an update of the uncertainty analyses performed for the greenhouse gas inventory in 2006 and 2000. During the project we have been in contact with experts, and have collected information about uncertainty from them. Main focus has been on the source categories where changes have occured since the last uncertainty analysis was performed in 2006. This includes new methodology for several source categories (for example for solvents and road traffic) as well as revised uncertainty estimates. For the installations included in the emission trading system, new information from the annual ETS reports about uncertainty in activity data and CO2 emission factor (and N2O emission factor for nitric acid production) has been used. This has improved the quality of the uncertainty estimates for the energy and manufacturing sectors. The results show that the uncertainty level in the total calculated greenhouse gas emissions for 2009 is around 4 per cent. When including the LULUCF sector, the total uncertainty is around 17 per cent in 2009. The uncertainty estimate is lower now than previous analyses have shown. This is partly due to a considerable work made to improve
Do Orthopaedic Surgeons Acknowledge Uncertainty?
Teunis, Teun; Janssen, Stein; Guitton, Thierry G.; Ring, David; Parisien, Robert
2016-01-01
Much of the decision-making in orthopaedics rests on uncertain evidence. Uncertainty is therefore part of our normal daily practice, and yet physician uncertainty regarding treatment could diminish patients' health. It is not known if physician uncertainty is a function of the evidence alone or if
Uncertainty analysis of energy consumption in dwellings
Energy Technology Data Exchange (ETDEWEB)
Pettersen, Trine Dyrstad
1997-12-31
This thesis presents a comprehensive study of an energy estimation model that can be used to examine the uncertainty of predicted energy consumption in a dwelling. The variation and uncertainty of input parameters due to the outdoor climate, the building construction and the inhabitants are studied as a basis for further energy evaluations. The occurring variations of energy consumption in nominal similar dwellings are also investigated due to verification of the simulated energy consumption. The main topics are (1) a study of expected variations and uncertainties in both input parameters used in energy consumption calculations and the energy consumption in the dwelling, (2) the development and evaluation of a simplified energy calculation model that considers uncertainties due to the input parameters, (3) an evaluation of the influence of the uncertain parameters on the total variation so that the most important parameters can be identified, and (4) the recommendation of a simplified procedure for treating uncertainties or possible deviations from average conditions. 90 refs., 182 figs., 73 tabs.
Propagation of nuclear data uncertainties for fusion power measurements
Directory of Open Access Journals (Sweden)
Sjöstrand Henrik
2017-01-01
Full Text Available Neutron measurements using neutron activation systems are an essential part of the diagnostic system at large fusion machines such as JET and ITER. Nuclear data is used to infer the neutron yield. Consequently, high-quality nuclear data is essential for the proper determination of the neutron yield and fusion power. However, uncertainties due to nuclear data are not fully taken into account in uncertainty analysis for neutron yield calibrations using activation foils. This paper investigates the neutron yield uncertainty due to nuclear data using the so-called Total Monte Carlo Method. The work is performed using a detailed MCNP model of the JET fusion machine; the uncertainties due to the cross-sections and angular distributions in JET structural materials, as well as the activation cross-sections in the activation foils, are analysed. It is found that a significant contribution to the neutron yield uncertainty can come from uncertainties in the nuclear data.
DEFF Research Database (Denmark)
Greasley, David; Madsen, Jakob B.
2006-01-01
A severe collapse of fixed capital formation distinguished the onset of the Great Depression from other investment downturns between the world wars. Using a model estimated for the years 1890-2000, we show that the expected profitability of capital measured by Tobin's q, and the uncertainty...... surrounding expected profits indicated by share price volatility, were the chief influences on investment levels, and that heightened share price volatility played the dominant role in the crucial investment collapse in 1930. Investment did not simply follow the downward course of income at the onset...
Optimization under Uncertainty
Lopez, Rafael H.
2016-01-06
The goal of this poster is to present the main approaches to optimization of engineering systems in the presence of uncertainties. We begin by giving an insight about robust optimization. Next, we detail how to deal with probabilistic constraints in optimization, the so called the reliability based design. Subsequently, we present the risk optimization approach, which includes the expected costs of failure in the objective function. After that the basic description of each approach is given, the projects developed by CORE are presented. Finally, the main current topic of research of CORE is described.
Optimizing production under uncertainty
DEFF Research Database (Denmark)
Rasmussen, Svend
This Working Paper derives criteria for optimal production under uncertainty based on the state-contingent approach (Chambers and Quiggin, 2000), and discusses po-tential problems involved in applying the state-contingent approach in a normative context. The analytical approach uses the concept...... of state-contingent production functions and a definition of inputs including both sort of input, activity and alloca-tion technology. It also analyses production decisions where production is combined with trading in state-contingent claims such as insurance contracts. The final part discusses...
Commonplaces and social uncertainty
DEFF Research Database (Denmark)
Lassen, Inger
2008-01-01
This article explores the concept of uncertainty in four focus group discussions about genetically modified food. In the discussions, members of the general public interact with food biotechnology scientists while negotiating their attitudes towards genetic engineering. Their discussions offer...... an example of risk discourse in which the use of commonplaces seems to be a central feature (Myers 2004: 81). My analyses support earlier findings that commonplaces serve important interactional purposes (Barton 1999) and that they are used for mitigating disagreement, for closing topics and for facilitating...
Kadane, Joseph B
2011-01-01
An intuitive and mathematical introduction to subjective probability and Bayesian statistics. An accessible, comprehensive guide to the theory of Bayesian statistics, Principles of Uncertainty presents the subjective Bayesian approach, which has played a pivotal role in game theory, economics, and the recent boom in Markov Chain Monte Carlo methods. Both rigorous and friendly, the book contains: Introductory chapters examining each new concept or assumption Just-in-time mathematics -- the presentation of ideas just before they are applied Summary and exercises at the end of each chapter Discus
Mathematical Analysis of Uncertainty
Directory of Open Access Journals (Sweden)
Angel GARRIDO
2016-01-01
Full Text Available Classical Logic showed early its insufficiencies for solving AI problems. The introduction of Fuzzy Logic aims at this problem. There have been research in the conventional Rough direction alone or in the Fuzzy direction alone, and more recently, attempts to combine both into Fuzzy Rough Sets or Rough Fuzzy Sets. We analyse some new and powerful tools in the study of Uncertainty, as the Probabilistic Graphical Models, Chain Graphs, Bayesian Networks, and Markov Networks, integrating our knowledge of graphs and probability.
International Nuclear Information System (INIS)
Monte, Luigi; Hakanson, Lars; Bergstroem, Ulla; Brittain, John; Heling, Rudie
1996-01-01
The principles of Empirically Based Uncertainty Analysis (EBUA) are described. EBUA is based on the evaluation of 'performance indices' that express the level of agreement between the model and sets of empirical independent data collected in different experimental circumstances. Some of these indices may be used to evaluate the confidence limits of the model output. The method is based on the statistical analysis of the distribution of the index values and on the quantitative relationship of these values with the ratio 'experimental data/model output'. Some performance indices are described in the present paper. Among these, the so-called 'functional distance' (d) between the logarithm of model output and the logarithm of the experimental data, defined as d 2 =Σ n 1 ( ln M i - ln O i ) 2 /n where M i is the i-th experimental value, O i the corresponding model evaluation and n the number of the couplets 'experimental value, predicted value', is an important tool for the EBUA method. From the statistical distribution of this performance index, it is possible to infer the characteristics of the distribution of the ratio 'experimental data/model output' and, consequently to evaluate the confidence limits for the model predictions. This method was applied to calculate the uncertainty level of a model developed to predict the migration of radiocaesium in lacustrine systems. Unfortunately, performance indices are affected by the uncertainty of the experimental data used in validation. Indeed, measurement results of environmental levels of contamination are generally associated with large uncertainty due to the measurement and sampling techniques and to the large variability in space and time of the measured quantities. It is demonstrated that this non-desired effect, in some circumstances, may be corrected by means of simple formulae
Investment, regulation, and uncertainty
Smyth, Stuart J; McDonald, Jillian; Falck-Zepeda, Jose
2014-01-01
As with any technological innovation, time refines the technology, improving upon the original version of the innovative product. The initial GM crops had single traits for either herbicide tolerance or insect resistance. Current varieties have both of these traits stacked together and in many cases other abiotic and biotic traits have also been stacked. This innovation requires investment. While this is relatively straight forward, certain conditions need to exist such that investments can be facilitated. The principle requirement for investment is that regulatory frameworks render consistent and timely decisions. If the certainty of regulatory outcomes weakens, the potential for changes in investment patterns increases. This article provides a summary background to the leading plant breeding technologies that are either currently being used to develop new crop varieties or are in the pipeline to be applied to plant breeding within the next few years. Challenges for existing regulatory systems are highlighted. Utilizing an option value approach from investment literature, an assessment of uncertainty regarding the regulatory approval for these varying techniques is undertaken. This research highlights which technology development options have the greatest degree of uncertainty and hence, which ones might be expected to see an investment decline. PMID:24499745
Probabilistic Mass Growth Uncertainties
Plumer, Eric; Elliott, Darren
2013-01-01
Mass has been widely used as a variable input parameter for Cost Estimating Relationships (CER) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, hence adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of a NASA on-going effort to publish mass growth datasheet for adjusting single-point Technical Baseline Estimates (TBE) of masses of space instruments as well as spacecraft, for both earth orbiting and deep space missions at various stages of a project's lifecycle. This paper will also discusses the long term strategy of NASA Headquarters in publishing similar results, using a variety of cost driving metrics, on an annual basis. This paper provides quantitative results that show decreasing mass growth uncertainties as mass estimate maturity increases. This paper's analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.
Oil price uncertainty in Canada
Energy Technology Data Exchange (ETDEWEB)
Elder, John [Department of Finance and Real Estate, 1272 Campus Delivery, Colorado State University, Fort Collins, CO 80523 (United States); Serletis, Apostolos [Department of Economics, University of Calgary, Calgary, Alberta (Canada)
2009-11-15
Bernanke [Bernanke, Ben S. Irreversibility, uncertainty, and cyclical investment. Quarterly Journal of Economics 98 (1983), 85-106.] shows how uncertainty about energy prices may induce optimizing firms to postpone investment decisions, thereby leading to a decline in aggregate output. Elder and Serletis [Elder, John and Serletis, Apostolos. Oil price uncertainty.] find empirical evidence that uncertainty about oil prices has tended to depress investment in the United States. In this paper we assess the robustness of these results by investigating the effects of oil price uncertainty in Canada. Our results are remarkably similar to existing results for the United States, providing additional evidence that uncertainty about oil prices may provide another explanation for why the sharp oil price declines of 1985 failed to produce rapid output growth. Impulse-response analysis suggests that uncertainty about oil prices may tend to reinforce the negative response of output to positive oil shocks. (author)
Total electron scattering cross section from pyridine molecules in the energy range 10-1000 eV
Dubuis, A. Traoré; Costa, F.; da Silva, F. Ferreira; Limão-Vieira, P.; Oller, J. C.; Blanco, F.; García, G.
2018-05-01
We report on experimental total electron scattering cross-section (TCS) from pyridine (C5H5N) for incident electron energies between 10 and 1000 eV, with experimental uncertainties within 5-10%, as measured with a double electrostatic analyser apparatus. The experimental results are compared with our theoretical calculations performed within the independent atom model complemented with a screening corrected additivity rule (IAM-SCAR) procedure which has been updated by including interference effects. A good level of agreement is found between both data sources within the experimental uncertainties. The present TCS results for electron impact energy under study contribute, together with other scattering data available in the literature, to achieve a consistent set of cross section data for modelling purposes.
Quantification of margins and uncertainties: Alternative representations of epistemic uncertainty
International Nuclear Information System (INIS)
Helton, Jon C.; Johnson, Jay D.
2011-01-01
In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. A previous presentation, 'Quantification of Margins and Uncertainties: Conceptual and Computational Basis,' describes the basic ideas that underlie QMU and illustrates these ideas with two notional examples that employ probability for the representation of aleatory and epistemic uncertainty. The current presentation introduces and illustrates the use of interval analysis, possibility theory and evidence theory as alternatives to the use of probability theory for the representation of epistemic uncertainty in QMU-type analyses. The following topics are considered: the mathematical structure of alternative representations of uncertainty, alternative representations of epistemic uncertainty in QMU analyses involving only epistemic uncertainty, and alternative representations of epistemic uncertainty in QMU analyses involving a separation of aleatory and epistemic uncertainty. Analyses involving interval analysis, possibility theory and evidence theory are illustrated with the same two notional examples used in the presentation indicated above to illustrate the use of probability to represent aleatory and epistemic uncertainty in QMU analyses.
Plurality of Type A evaluations of uncertainty
Possolo, Antonio; Pintar, Adam L.
2017-10-01
The evaluations of measurement uncertainty involving the application of statistical methods to measurement data (Type A evaluations as specified in the Guide to the Expression of Uncertainty in Measurement, GUM) comprise the following three main steps: (i) developing a statistical model that captures the pattern of dispersion or variability in the experimental data, and that relates the data either to the measurand directly or to some intermediate quantity (input quantity) that the measurand depends on; (ii) selecting a procedure for data reduction that is consistent with this model and that is fit for the purpose that the results are intended to serve; (iii) producing estimates of the model parameters, or predictions based on the fitted model, and evaluations of uncertainty that qualify either those estimates or these predictions, and that are suitable for use in subsequent uncertainty propagation exercises. We illustrate these steps in uncertainty evaluations related to the measurement of the mass fraction of vanadium in a bituminous coal reference material, including the assessment of the homogeneity of the material, and to the calibration and measurement of the amount-of-substance fraction of a hydrochlorofluorocarbon in air, and of the age of a meteorite. Our goal is to expose the plurality of choices that can reasonably be made when taking each of the three steps outlined above, and to show that different choices typically lead to different estimates of the quantities of interest, and to different evaluations of the associated uncertainty. In all the examples, the several alternatives considered represent choices that comparably competent statisticians might make, but who differ in the assumptions that they are prepared to rely on, and in their selection of approach to statistical inference. They represent also alternative treatments that the same statistician might give to the same data when the results are intended for different purposes.
Energy Technology Data Exchange (ETDEWEB)
NONE
1991-03-01
For the purpose of securing the stable supply of non-ferrous metal resources such as Ni, Co, Cu and Mn and of improving the overall ocean development technology, the R and D were carried out of a technology to mine Mn nodules existing in the bottom of deep sea of the depth between 4,000m and 6,000m by the fluid dredging method which is highly efficient and reliable. Therefore, plans of experiments to be made for each subsystem are adjusted/integrated to make the contents of a comprehensive experiment, and at the same time a draft of the comprehensive experiment plan was worked out for an experimental system to be developed/trially fabricated. As to a total system (experimental plan), a plan for execution was studied in the case of rationalizing the comprehensive ocean experiment plan and choosing between pump lift and air lift. The TA-C sea area was continuously investigated as a candidate sea area for survey and the range was limited. The paper studied the basic plan of the mining system and commenced the basic study of the assessment of economical efficiency. (NEDO)
Uncertainty Quantification of Composite Laminate Damage with the Generalized Information Theory
Energy Technology Data Exchange (ETDEWEB)
J. Lucero; F. Hemez; T. Ross; K.Kline; J.Hundhausen; T. Tippetts
2006-05-01
This work presents a survey of five theories to assess the uncertainty of projectile impact induced damage on multi-layered carbon-epoxy composite plates. Because the types of uncertainty dealt with in this application are multiple (variability, ambiguity, and conflict) and because the data sets collected are sparse, characterizing the amount of delamination damage with probability theory alone is possible but incomplete. This motivates the exploration of methods contained within a broad Generalized Information Theory (GIT) that rely on less restrictive assumptions than probability theory. Probability, fuzzy sets, possibility, and imprecise probability (probability boxes (p-boxes) and Dempster-Shafer) are used to assess the uncertainty in composite plate damage. Furthermore, this work highlights the usefulness of each theory. The purpose of the study is not to compare directly the different GIT methods but to show that they can be deployed on a practical application and to compare the assumptions upon which these theories are based. The data sets consist of experimental measurements and finite element predictions of the amount of delamination and fiber splitting damage as multilayered composite plates are impacted by a projectile at various velocities. The physical experiments consist of using a gas gun to impact suspended plates with a projectile accelerated to prescribed velocities, then, taking ultrasound images of the resulting delamination. The nonlinear, multiple length-scale numerical simulations couple local crack propagation implemented through cohesive zone modeling to global stress-displacement finite element analysis. The assessment of damage uncertainty is performed in three steps by, first, considering the test data only; then, considering the simulation data only; finally, performing an assessment of total uncertainty where test and simulation data sets are combined. This study leads to practical recommendations for reducing the uncertainty and
Evaluating measurement uncertainty in fluid phase equilibrium calculations
van der Veen, Adriaan M. H.
2018-04-01
The evaluation of measurement uncertainty in accordance with the ‘Guide to the expression of uncertainty in measurement’ (GUM) has not yet become widespread in physical chemistry. With only the law of the propagation of uncertainty from the GUM, many of these uncertainty evaluations would be cumbersome, as models are often non-linear and require iterative calculations. The methods from GUM supplements 1 and 2 enable the propagation of uncertainties under most circumstances. Experimental data in physical chemistry are used, for example, to derive reference property data and support trade—all applications where measurement uncertainty plays an important role. This paper aims to outline how the methods for evaluating and propagating uncertainty can be applied to some specific cases with a wide impact: deriving reference data from vapour pressure data, a flash calculation, and the use of an equation-of-state to predict the properties of both phases in a vapour-liquid equilibrium. The three uncertainty evaluations demonstrate that the methods of GUM and its supplements are a versatile toolbox that enable us to evaluate the measurement uncertainty of physical chemical measurements, including the derivation of reference data, such as the equilibrium thermodynamical properties of fluids.
Measurement uncertainty analysis techniques applied to PV performance measurements
International Nuclear Information System (INIS)
Wells, C.
1992-10-01
The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the I interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: Increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates quality assurance and quality control measures have been accomplished; define Valid Data as data having known and documented paths of: Origin, including theory; measurements; traceability to measurement standards; computations; uncertainty analysis of results
Heisenberg's principle of uncertainty and the uncertainty relations
International Nuclear Information System (INIS)
Redei, Miklos
1987-01-01
The usual verbal form of the Heisenberg uncertainty principle and the usual mathematical formulation (the so-called uncertainty theorem) are not equivalent. The meaning of the concept 'uncertainty' is not unambiguous and different interpretations are used in the literature. Recently a renewed interest has appeared to reinterpret and reformulate the precise meaning of Heisenberg's principle and to find adequate mathematical form. The suggested new theorems are surveyed and critically analyzed. (D.Gy.) 20 refs
Petzinger, Tom
I am trying to make money in the biotech industry from complexity science. And I am doing it with inspiration that I picked up on the edge of Appalachia spending time with June Holley and ACEnet when I was a Wall Street Journal reporter. I took some of those ideas to Pittsburgh, in biotechnology, in a completely private setting with an economic development focus, but also with a mission t o return profit to private capital. And we are doing that. I submit as a hypothesis, something we are figuring out in the post- industrial era, that business evolves. It is not the definition of business, but business critically involves the design of systems in which uncertainty is treated as a certainty. That is what I have seen and what I have tried to put into practice.
International Nuclear Information System (INIS)
Peters, H.P.; Hennen, L.
1990-01-01
The authors report on the results of three representative surveys that made a closer inquiry into perceptions and valuations of information and information sources concering Chernobyl. If turns out that the information sources are generally considered little trustworthy. This was generally attributable to the interpretation of the events being tied to attitudes in the atmonic energy issue. The greatest credit was given to television broadcasting. The authors summarize their discourse as follows: There is good reason to interpret the widespread uncertainty after Chernobyl as proof of the fact that large parts of the population are prepared and willing to assume a critical stance towards information and prefer to draw their information from various sources representing different positions. (orig.) [de
2012-03-01
ISO / IEC 17025 Inspection Bodies – ISO / IEC 17020 RMPs – ISO Guide 34 (Reference...certify to : ISO 9001 (QMS), ISO 14001 (EMS), TS 16949 (US Automotive) etc. 2 3 DoD QSM 4.2 standard ISO / IEC 17025 :2005 Each has uncertainty...IPV6, NLLAP, NEFAP TRAINING Programs Certification Bodies – ISO / IEC 17021 Accreditation for Management System
Traceability and Measurement Uncertainty
DEFF Research Database (Denmark)
Tosello, Guido; De Chiffre, Leonardo
2004-01-01
. The project partnership aims (composed by 7 partners in 5 countries, thus covering a real European spread in high tech production technology) to develop and implement an advanced e-learning system that integrates contributions from quite different disciplines into a user-centred approach that strictly....... Machine tool testing 9. The role of manufacturing metrology for QM 10. Inspection planning 11. Quality management of measurements incl. Documentation 12. Advanced manufacturing measurement technology The present report (which represents the section 2 - Traceability and Measurement Uncertainty – of the e-learning......This report is made as a part of the project ‘Metro-E-Learn: European e-Learning in Manufacturing Metrology’, an EU project under the program SOCRATES MINERVA (ODL and ICT in Education), Contract No: 101434-CP-1-2002-1-DE-MINERVA, coordinated by Friedrich-Alexander-University Erlangen...
Decision making under uncertainty
International Nuclear Information System (INIS)
Cyert, R.M.
1989-01-01
This paper reports on ways of improving the reliability of products and systems in this country if we are to survive as a first-rate industrial power. The use of statistical techniques have, since the 1920s, been viewed as one of the methods for testing quality and estimating the level of quality in a universe of output. Statistical quality control is not relevant, generally, to improving systems in an industry like yours, but certainly the use of probability concepts is of significance. In addition, when it is recognized that part of the problem involves making decisions under uncertainty, it becomes clear that techniques such as sequential decision making and Bayesian analysis become major methodological approaches that must be utilized
Sustainability and uncertainty
DEFF Research Database (Denmark)
Jensen, Karsten Klint
2007-01-01
The widely used concept of sustainability is seldom precisely defined, and its clarification involves making up one's mind about a range of difficult questions. One line of research (bottom-up) takes sustaining a system over time as its starting point and then infers prescriptions from...... this requirement. Another line (top-down) takes an economical interpretation of the Brundtland Commission's suggestion that the present generation's needsatisfaction should not compromise the need-satisfaction of future generations as its starting point. It then measures sustainability at the level of society...... a clarified ethical goal, disagreements can arise. At present we do not know what substitutions will be possible in the future. This uncertainty clearly affects the prescriptions that follow from the measure of sustainability. Consequently, decisions about how to make future agriculture sustainable...
Total cross sections for heavy flavour production at HERA
Frixione, Stefano; Nason, P; Ridolfi, G; Frixione, S; Mangano, M L; Nason, P; Ridolfi, G
1995-01-01
We compute total cross sections for charm and bottom photoproduction at HERA energies, and discuss the relevant theoretical uncertainties. In particular we discuss the problems arising from the small-x region, the uncertainties in the gluon parton density, and the uncertainties in the hadronic component of the cross section. Total electroproduction cross sections, calculated in the Weizs\\"acker-Williams approximation, are also given.
Optimization of FRAP uncertainty analysis option
International Nuclear Information System (INIS)
Peck, S.O.
1979-10-01
The automated uncertainty analysis option that has been incorporated in the FRAP codes (FRAP-T5 and FRAPCON-2) provides the user with a means of obtaining uncertainty bands on code predicted variables at user-selected times during a fuel pin analysis. These uncertainty bands are obtained by multiple single fuel pin analyses to generate data which can then be analyzed by second order statistical error propagation techniques. In this process, a considerable amount of data is generated and stored on tape. The user has certain choices to make regarding which independent variables are to be used in the analysis and what order of error propagation equation should be used in modeling the output response. To aid the user in these decisions, a computer program, ANALYZ, has been written and added to the uncertainty analysis option package. A variety of considerations involved in fitting response surface equations and certain pit-falls of which the user should be aware are discussed. An equation is derived expressing a residual as a function of a fitted model and an assumed true model. A variety of experimental design choices are discussed, including the advantages and disadvantages of each approach. Finally, a description of the subcodes which constitute program ANALYZ is provided
An uncertainty inventory demonstration - a primary step in uncertainty quantification
Energy Technology Data Exchange (ETDEWEB)
Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM
2009-01-01
Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore, PRA's lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that results in linguistic descriptors, such as importance ranked with words, 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.
Managing Measurement Uncertainty in Building Acoustics
Directory of Open Access Journals (Sweden)
Chiara Scrosati
2015-12-01
Full Text Available In general, uncertainties should preferably be determined following the principles laid down in ISO/IEC Guide 98-3, the Guide to the expression of uncertainty in measurement (GUM:1995. According to current knowledge, it seems impossible to formulate these models for the different quantities in building acoustics. Therefore, the concepts of repeatability and reproducibility are necessary to determine the uncertainty of building acoustics measurements. This study shows the uncertainty of field measurements of a lightweight wall, a heavyweight floor, a façade with a single glazing window and a façade with double glazing window that were analyzed by a Round Robin Test (RRT, conducted in a full-scale experimental building at ITC-CNR (Construction Technologies Institute of the National Research Council of Italy. The single number quantities and their uncertainties were evaluated in both narrow and enlarged range and it was shown that including or excluding the low frequencies leads to very significant differences, except in the case of the sound insulation of façades with single glazing window. The results obtained in these RRTs were compared with other results from literature, which confirm the increase of the uncertainty of single number quantities due to the low frequencies extension. Having stated the measurement uncertainty for a single measurement, in building acoustics, it is also very important to deal with sampling for the purposes of classification of buildings or building units. Therefore, this study also shows an application of the sampling included in the Italian Standard on the acoustic classification of building units on a serial type building consisting of 47 building units. It was found that the greatest variability is observed in the façade and it depends on both the great variability of window’s typologies and on workmanship. Finally, it is suggested how to manage the uncertainty in building acoustics, both for one single
Tolerance for uncertainty in elderly people
Directory of Open Access Journals (Sweden)
KHRYSTYNA KACHMARYK
2014-09-01
Full Text Available The aim of the study. The aim of the paper is a comparison of tolerance to uncertainty in two groups of elderly: the students of the University of the Third Age (UTA and older people who are not enrolled but help to educate grandchildren. A relation to uncertainty was shown to influence on decision making strategy of elderly that indicates on importance of the researches. Methods. To obtain the objectives of the paper the following methods were used: 1 Personal change readiness survey (PCRS adapted by Nickolay Bazhanov and Galina Bardiyer; 2 Tolerance Ambiguity Scale (TAS adapted by Galina Soldatova; 3 Freiburg personality inventory (FPI and 4 The questionnaire of self-relation by Vladimir Stolin and Sergej Panteleev. 40 socially involved elderly people were investigated according the above methods, 20 from UTA and 20 who are not studied and served as control group. Results. It was shown that relations of tolerance to uncertainty in the study group of students of the University of the Third Age substantially differ from relations of tolerance to uncertainty in group of older people who do not learn. The majority of students of the University of the Third Age have an inherent low tolerance for uncertainty, which is associated with an increase in expression personality traits and characteristics in self-relation. The group of the elderly who are not enrolled increasingly shows tolerance of uncertainty, focusing on the social and trusting relationship to meet the needs of communication, and the ability to manage their own emotions and desires than a group of Third Age university students. Conclusions. The results of experimental research of the third age university student’s peculiarities of the tolerance to uncertainty were outlined. It was found that decision making in the ambiguity situations concerning social interaction is well developed in elderly who do not study. The students of the University of Third Age have greater needs in
Essays on model uncertainty in financial models
Li, Jing
2018-01-01
This dissertation studies model uncertainty, particularly in financial models. It consists of two empirical chapters and one theoretical chapter. The first empirical chapter (Chapter 2) classifies model uncertainty into parameter uncertainty and misspecification uncertainty. It investigates the
Evaluation of uncertainties in the calibration of radiation survey meter
International Nuclear Information System (INIS)
Potiens, M.P.A.; Santos, G.P.
2006-01-01
In order to meet the requirements of ISO 17025, the quantification of the expanded uncertainties of experimental data in the calibration of survey meters must be carried out using well defined concepts, like those expressed in the 'ISO-Guide to the Expression of Uncertainty in Measurement'. The calibration procedure of gamma ray survey meters involves two values that have to get their uncertainties clearly known: measurements of the instrument under calibration and the conventional true values of a quantity. Considering the continuous improvement of the calibration methods and set-ups, it is necessary to evaluate periodically the involved uncertainties in the procedures. In this work it is shown how the measurement uncertainties of an individual calibration can be estimated and how it can be generalized to be valid for others radiation survey meters. (authors)
Uncertainties in the simulation of groundwater recharge at different scales
Directory of Open Access Journals (Sweden)
H. Bogena
2005-01-01
Full Text Available Digital spatial data always imply some kind of uncertainty. The source of this uncertainty can be found in their compilation as well as the conceptual design that causes a more or less exact abstraction of the real world, depending on the scale under consideration. Within the framework of hydrological modelling, in which numerous data sets from diverse sources of uneven quality are combined, the various uncertainties are accumulated. In this study, the GROWA model is taken as an example to examine the effects of different types of uncertainties on the calculated groundwater recharge. Distributed input errors are determined for the parameters' slope and aspect using a Monte Carlo approach. Landcover classification uncertainties are analysed by using the conditional probabilities of a remote sensing classification procedure. The uncertainties of data ensembles at different scales and study areas are discussed. The present uncertainty analysis showed that the Gaussian error propagation method is a useful technique for analysing the influence of input data on the simulated groundwater recharge. The uncertainties involved in the land use classification procedure and the digital elevation model can be significant in some parts of the study area. However, for the specific model used in this study it was shown that the precipitation uncertainties have the greatest impact on the total groundwater recharge error.
Facing uncertainty in ecosystem services-based resource management.
Grêt-Regamey, Adrienne; Brunner, Sibyl H; Altwegg, Jürg; Bebi, Peter
2013-09-01
The concept of ecosystem services is increasingly used as a support for natural resource management decisions. While the science for assessing ecosystem services is improving, appropriate methods to address uncertainties in a quantitative manner are missing. Ignoring parameter uncertainties, modeling uncertainties and uncertainties related to human-environment interactions can modify decisions and lead to overlooking important management possibilities. In this contribution, we present a new approach for mapping the uncertainties in the assessment of multiple ecosystem services. The spatially explicit risk approach links Bayesian networks to a Geographic Information System for forecasting the value of a bundle of ecosystem services and quantifies the uncertainties related to the outcomes in a spatially explicit manner. We demonstrate that mapping uncertainties in ecosystem services assessments provides key information for decision-makers seeking critical areas in the delivery of ecosystem services in a case study in the Swiss Alps. The results suggest that not only the total value of the bundle of ecosystem services is highly dependent on uncertainties, but the spatial pattern of the ecosystem services values changes substantially when considering uncertainties. This is particularly important for the long-term management of mountain forest ecosystems, which have long rotation stands and are highly sensitive to pressing climate and socio-economic changes. Copyright © 2012 Elsevier Ltd. All rights reserved.
A new uncertainty importance measure
International Nuclear Information System (INIS)
Borgonovo, E.
2007-01-01
Uncertainty in parameters is present in many risk assessment problems and leads to uncertainty in model predictions. In this work, we introduce a global sensitivity indicator which looks at the influence of input uncertainty on the entire output distribution without reference to a specific moment of the output (moment independence) and which can be defined also in the presence of correlations among the parameters. We discuss its mathematical properties and highlight the differences between the present indicator, variance-based uncertainty importance measures and a moment independent sensitivity indicator previously introduced in the literature. Numerical results are discussed with application to the probabilistic risk assessment model on which Iman [A matrix-based approach to uncertainty and sensitivity analysis for fault trees. Risk Anal 1987;7(1):22-33] first introduced uncertainty importance measures
Uncertainty Management and Sensitivity Analysis
DEFF Research Database (Denmark)
Rosenbaum, Ralph K.; Georgiadis, Stylianos; Fantke, Peter
2018-01-01
Uncertainty is always there and LCA is no exception to that. The presence of uncertainties of different types and from numerous sources in LCA results is a fact, but managing them allows to quantify and improve the precision of a study and the robustness of its conclusions. LCA practice sometimes...... suffers from an imbalanced perception of uncertainties, justifying modelling choices and omissions. Identifying prevalent misconceptions around uncertainties in LCA is a central goal of this chapter, aiming to establish a positive approach focusing on the advantages of uncertainty management. The main...... objectives of this chapter are to learn how to deal with uncertainty in the context of LCA, how to quantify it, interpret and use it, and how to communicate it. The subject is approached more holistically than just focusing on relevant statistical methods or purely mathematical aspects. This chapter...
Additivity of entropic uncertainty relations
Directory of Open Access Journals (Sweden)
René Schwonnek
2018-03-01
Full Text Available We consider the uncertainty between two pairs of local projective measurements performed on a multipartite system. We show that the optimal bound in any linear uncertainty relation, formulated in terms of the Shannon entropy, is additive. This directly implies, against naive intuition, that the minimal entropic uncertainty can always be realized by fully separable states. Hence, in contradiction to proposals by other authors, no entanglement witness can be constructed solely by comparing the attainable uncertainties of entangled and separable states. However, our result gives rise to a huge simplification for computing global uncertainty bounds as they now can be deduced from local ones. Furthermore, we provide the natural generalization of the Maassen and Uffink inequality for linear uncertainty relations with arbitrary positive coefficients.
International Nuclear Information System (INIS)
Sutherland, D.E.; Ferguson, R.M.; Simmons, R.L.; Kim, T.H.; Slavin, S.; Najarian, J.S.
1983-01-01
Total lymphoid irradiation by itself can produce sufficient immunosuppression to prolong the survival of a variety of organ allografts in experimental animals. The degree of prolongation is dose-dependent and is limited by the toxicity that occurs with higher doses. Total lymphoid irradiation is more effective before transplantation than after, but when used after transplantation can be combined with pharmacologic immunosuppression to achieve a positive effect. In some animal models, total lymphoid irradiation induces an environment in which fully allogeneic bone marrow will engraft and induce permanent chimerism in the recipients who are then tolerant to organ allografts from the donor strain. If total lymphoid irradiation is ever to have clinical applicability on a large scale, it would seem that it would have to be under circumstances in which tolerance can be induced. However, in some animal models graft-versus-host disease occurs following bone marrow transplantation, and methods to obviate its occurrence probably will be needed if this approach is to be applied clinically. In recent years, patient and graft survival rates in renal allograft recipients treated with conventional immunosuppression have improved considerably, and thus the impetus to utilize total lymphoid irradiation for its immunosuppressive effect alone is less compelling. The future of total lymphoid irradiation probably lies in devising protocols in which maintenance immunosuppression can be eliminated, or nearly eliminated, altogether. Such protocols are effective in rodents. Whether they can be applied to clinical transplantation remains to be seen
The cerebellum and decision making under uncertainty.
Blackwood, Nigel; Ffytche, Dominic; Simmons, Andrew; Bentall, Richard; Murray, Robin; Howard, Robert
2004-06-01
This study aimed to identify the neural basis of probabilistic reasoning, a type of inductive inference that aids decision making under conditions of uncertainty. Eight normal subjects performed two separate two-alternative-choice tasks (the balls in a bottle and personality survey tasks) while undergoing functional magnetic resonance imaging (fMRI). The experimental conditions within each task were chosen so that they differed only in their requirement to make a decision under conditions of uncertainty (probabilistic reasoning and frequency determination required) or under conditions of certainty (frequency determination required). The same visual stimuli and motor responses were used in the experimental conditions. We provide evidence that the neo-cerebellum, in conjunction with the premotor cortex, inferior parietal lobule and medial occipital cortex, mediates the probabilistic inferences that guide decision making under uncertainty. We hypothesise that the neo-cerebellum constructs internal working models of uncertain events in the external world, and that such probabilistic models subserve the predictive capacity central to induction. Copyright 2004 Elsevier B.V.
Decommissioning funding: ethics, implementation, uncertainties
International Nuclear Information System (INIS)
2006-01-01
This status report on Decommissioning Funding: Ethics, Implementation, Uncertainties also draws on the experience of the NEA Working Party on Decommissioning and Dismantling (WPDD). The report offers, in a concise form, an overview of relevant considerations on decommissioning funding mechanisms with regard to ethics, implementation and uncertainties. Underlying ethical principles found in international agreements are identified, and factors influencing the accumulation and management of funds for decommissioning nuclear facilities are discussed together with the main sources of uncertainties of funding systems. (authors)
Chemical model reduction under uncertainty
Najm, Habib; Galassi, R. Malpica; Valorani, M.
2016-01-01
We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.
Chemical model reduction under uncertainty
Najm, Habib
2016-01-05
We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.
Uncertainty analysis of environmental models
International Nuclear Information System (INIS)
Monte, L.
1990-01-01
In the present paper an evaluation of the output uncertainty of an environmental model for assessing the transfer of 137 Cs and 131 I in the human food chain are carried out on the basis of a statistical analysis of data reported by the literature. The uncertainty analysis offers the oppotunity of obtaining some remarkable information about the uncertainty of models predicting the migration of non radioactive substances in the environment mainly in relation to the dry and wet deposition
Reliability analysis under epistemic uncertainty
International Nuclear Information System (INIS)
Nannapaneni, Saideep; Mahadevan, Sankaran
2016-01-01
This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
UNCERTAINTIES IN GALACTIC CHEMICAL EVOLUTION MODELS
International Nuclear Information System (INIS)
Côté, Benoit; Ritter, Christian; Herwig, Falk; O’Shea, Brian W.; Pignatari, Marco; Jones, Samuel; Fryer, Chris L.
2016-01-01
We use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M ⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions, along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model
Honti, Mark; Reichert, Peter; Scheidegger, Andreas; Stamm, Christian
2013-04-01
Climate change impact assessments have become more and more popular in hydrology since the middle 1980's with another boost after the publication of the IPCC AR4 report. During hundreds of impact studies a quasi-standard methodology emerged, which is mainly shaped by the growing public demand for predicting how water resources management or flood protection should change in the close future. The ``standard'' workflow considers future climate under a specific IPCC emission scenario simulated by global circulation models (GCMs), possibly downscaled by a regional climate model (RCM) and/or a stochastic weather generator. The output from the climate models is typically corrected for bias before feeding it into a calibrated hydrological model, which is run on the past and future meteorological data to analyse the impacts of climate change on the hydrological indicators of interest. The impact predictions are as uncertain as any forecast that tries to describe the behaviour of an extremely complex system decades into the future. Future climate predictions are uncertain due to the scenario uncertainty and the GCM model uncertainty that is obvious on finer resolution than continental scale. Like in any hierarchical model system, uncertainty propagates through the descendant components. Downscaling increases uncertainty with the deficiencies of RCMs and/or weather generators. Bias correction adds a strong deterministic shift to the input data. Finally the predictive uncertainty of the hydrological model ends the cascade that leads to the total uncertainty of the hydrological impact assessment. There is an emerging consensus between many studies on the relative importance of the different uncertainty sources. The prevailing perception is that GCM uncertainty dominates hydrological impact studies. There are only few studies, which found that the predictive uncertainty of hydrological models can be in the same range or even larger than climatic uncertainty. We carried out a
Uncertainty Assessments in Fast Neutron Activation Analysis
International Nuclear Information System (INIS)
W. D. James; R. Zeisler
2000-01-01
Fast neutron activation analysis (FNAA) carried out with the use of small accelerator-based neutron generators is routinely used for major/minor element determinations in industry, mineral and petroleum exploration, and to some extent in research. While the method shares many of the operational procedures and therefore errors inherent to conventional thermal neutron activation analysis, its unique implementation gives rise to additional specific concerns that can result in errors or increased uncertainties of measured quantities. The authors were involved in a recent effort to evaluate irreversible incorporation of oxygen into a standard reference material (SRM) by direct measurement of oxygen by FNAA. That project required determination of oxygen in bottles of the SRM stored in varying environmental conditions and a comparison of the results. We recognized the need to accurately describe the total uncertainty of the measurements to accurately characterize any differences in the resulting average concentrations. It is our intent here to discuss the breadth of potential parameters that have the potential to contribute to the random and nonrandom errors of the method and provide estimates of the magnitude of uncertainty introduced. In addition, we will discuss the steps taken in this recent FNAA project to control quality, assess the uncertainty of the measurements, and evaluate results based on the statistical reproducibility
Uncertainties of Molecular Structural Parameters
International Nuclear Information System (INIS)
Császár, Attila G.
2014-01-01
Full text: The most fundamental property of a molecule is its three-dimensional (3D) structure formed by its constituent atoms (see, e.g., the perfectly regular hexagon associated with benzene). It is generally accepted that knowledge of the detailed structure of a molecule is a prerequisite to determine most of its other properties. What nowadays is a seemingly simple concept, namely that molecules have a structure, was introduced into chemistry in the 19th century. Naturally, the word changed its meaning over the years. Elemental analysis, simple structural formulae, two-dimensional and then 3D structures mark the development of the concept to its modern meaning. When quantum physics and quantum chemistry emerged in the 1920s, the simple concept associating structure with a three-dimensional object seemingly gained a firm support. Nevertheless, what seems self-explanatory today is in fact not so straightforward to justify within quantum mechanics. In quantum chemistry the concept of an equilibrium structure of a molecule is tied to the Born-Oppenheimer approximation but beyond the adiabatic separation of the motions of the nuclei and the electrons the meaning of a structure is still slightly obscured. Putting the conceptual difficulties aside, there are several experimental, empirical, and theoretical techniques to determine structures of molecules. One particular problem, strongly related to the question of uncertainties of “measured” or “computed” structural parameters, is that all the different techniques correspond to different structure definitions and thus yield different structural parameters. Experiments probing the structure of molecules rely on a number of structure definitions, to name just a few: r_0, r_g, r_a, r_s, r_m, etc., and one should also consider the temperature dependence of most of these structural parameters which differ from each other in the way the rovibrational motions of the molecules are treated and how the averaging is
Statistical Uncertainty Quantification of Physical Models during Reflood of LBLOCA
Energy Technology Data Exchange (ETDEWEB)
Oh, Deog Yeon; Seul, Kwang Won; Woo, Sweng Woong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)
2015-05-15
The use of the best-estimate (BE) computer codes in safety analysis for loss-of-coolant accident (LOCA) is the major trend in many countries to reduce the significant conservatism. A key feature of this BE evaluation requires the licensee to quantify the uncertainty of the calculations. So, it is very important how to determine the uncertainty distribution before conducting the uncertainty evaluation. Uncertainty includes those of physical model and correlation, plant operational parameters, and so forth. The quantification process is often performed mainly by subjective expert judgment or obtained from reference documents of computer code. In this respect, more mathematical methods are needed to reasonably determine the uncertainty ranges. The first uncertainty quantification are performed with the various increments for two influential uncertainty parameters to get the calculated responses and their derivatives. The different data set with two influential uncertainty parameters for FEBA tests, are chosen applying more strict criteria for selecting responses and their derivatives, which may be considered as the user’s effect in the CIRCÉ applications. Finally, three influential uncertainty parameters are considered to study the effect on the number of uncertainty parameters due to the limitation of CIRCÉ method. With the determined uncertainty ranges, uncertainty evaluations for FEBA tests are performed to check whether the experimental responses such as the cladding temperature or pressure drop are inside the limits of calculated uncertainty bounds. A confirmation step will be performed to evaluate the quality of the information in the case of the different reflooding PERICLES experiments. The uncertainty ranges of physical model in MARS-KS thermal-hydraulic code during the reflooding were quantified by CIRCÉ method using FEBA experiment tests, instead of expert judgment. Also, through the uncertainty evaluation for FEBA and PERICLES tests, it was confirmed
Climate Certainties and Uncertainties
International Nuclear Information System (INIS)
Morel, Pierre
2012-01-01
In issue 380 of Futuribles in December 2011, Antonin Pottier analysed in detail the workings of what is today termed 'climate scepticism' - namely the propensity of certain individuals to contest the reality of climate change on the basis of pseudo-scientific arguments. He emphasized particularly that what fuels the debate on climate change is, largely, the degree of uncertainty inherent in the consequences to be anticipated from observation of the facts, not the description of the facts itself. In his view, the main aim of climate sceptics is to block the political measures for combating climate change. However, since they do not admit to this political posture, they choose instead to deny the scientific reality. This month, Futuribles complements this socio-psychological analysis of climate-sceptical discourse with an - in this case, wholly scientific - analysis of what we know (or do not know) about climate change on our planet. Pierre Morel gives a detailed account of the state of our knowledge in the climate field and what we are able to predict in the medium/long-term. After reminding us of the influence of atmospheric meteorological processes on the climate, he specifies the extent of global warming observed since 1850 and the main origin of that warming, as revealed by the current state of knowledge: the increase in the concentration of greenhouse gases. He then describes the changes in meteorological regimes (showing also the limits of climate simulation models), the modifications of hydrological regimes, and also the prospects for rises in sea levels. He also specifies the mechanisms that may potentially amplify all these phenomena and the climate disasters that might ensue. Lastly, he shows what are the scientific data that cannot be disregarded, the consequences of which are now inescapable (melting of the ice-caps, rises in sea level etc.), the only remaining uncertainty in this connection being the date at which these things will happen. 'In this
Uncertainty in the inelastic resonant scattering assisted by phonons
International Nuclear Information System (INIS)
Garcia, N.; Garcia-Sanz, J.; Solana, J.
1977-01-01
We have analyzed the inelastic minima observed in new results of He atoms scattered from LiF(001) surfaces. This is done considering bound state resonance processes assisted by phonons. The analysis presents large uncertainties. In the range of uncertainty, we find two ''possible'' bands associated with the vibrations of F - and Li + , respectively. Many more experimental data are necessary to confirm the existence of these processes
Uncertainty in biology a computational modeling approach
Gomez-Cabrero, David
2016-01-01
Computational modeling of biomedical processes is gaining more and more weight in the current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However these biomedical problems are inherently complex with a myriad of influencing factors, which strongly complicates the model building and validation process. This book wants to address four main issues related to the building and validation of computational models of biomedical processes: Modeling establishment under uncertainty Model selection and parameter fitting Sensitivity analysis and model adaptation Model predictions under uncertainty In each of the abovementioned areas, the book discusses a number of key-techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...
Uncertainty contributions to low flow projections in Austria
Parajka, J.; Blaschke, A. P.; Blöschl, G.; Haslinger, K.; Hepp, G.; Laaha, G.; Schöner, W.; Trautvetter, H.; Viglione, A.; Zessner, M.
2015-11-01
The main objective of the paper is to understand the contributions to the uncertainty in low flow projections resulting from hydrological model uncertainty and climate projection uncertainty. Model uncertainty is quantified by different parameterizations of a conceptual semi-distributed hydrologic model (TUWmodel) using 11 objective functions in three different decades (1976-1986, 1987-1997, 1998-2008), which allows disentangling the effect of modeling uncertainty and temporal stability of model parameters. Climate projection uncertainty is quantified by four future climate scenarios (ECHAM5-A1B, A2, B1 and HADCM3-A1B) using a delta change approach. The approach is tested for 262 basins in Austria. The results indicate that the seasonality of the low flow regime is an important factor affecting the performance of model calibration in the reference period and the uncertainty of Q95 low flow projections in the future period. In Austria, the calibration uncertainty in terms of Q95 is larger in basins with summer low flow regime than in basins with winter low flow regime. Using different calibration periods may result in a range of up to 60 % in simulated Q95 low flows. The low flow projections show an increase of low flows in the Alps, typically in the range of 10-30 % and a decrease in the south-eastern part of Austria mostly in the range -5 to -20 % for the period 2021-2050 relative the reference period 1976-2008. The change in seasonality varies between scenarios, but there is a tendency for earlier low flows in the Northern Alps and later low flows in Eastern Austria. In 85 % of the basins, the uncertainty in Q95 from model calibration is larger than the uncertainty from different climate scenarios. The total uncertainty of Q95 projections is the largest in basins with winter low flow regime and, in some basins, exceeds 60 %. In basins with summer low flows and the total uncertainty is mostly less than 20 %. While the calibration uncertainty dominates over climate
Sketching Uncertainty into Simulations.
Ribicic, H; Waser, J; Gurbat, R; Sadransky, B; Groller, M E
2012-12-01
In a variety of application areas, the use of simulation steering in decision making is limited at best. Research focusing on this problem suggests that most user interfaces are too complex for the end user. Our goal is to let users create and investigate multiple, alternative scenarios without the need for special simulation expertise. To simplify the specification of parameters, we move from a traditional manipulation of numbers to a sketch-based input approach. Users steer both numeric parameters and parameters with a spatial correspondence by sketching a change onto the rendering. Special visualizations provide immediate visual feedback on how the sketches are transformed into boundary conditions of the simulation models. Since uncertainty with respect to many intertwined parameters plays an important role in planning, we also allow the user to intuitively setup complete value ranges, which are then automatically transformed into ensemble simulations. The interface and the underlying system were developed in collaboration with experts in the field of flood management. The real-world data they have provided has allowed us to construct scenarios used to evaluate the system. These were presented to a variety of flood response personnel, and their feedback is discussed in detail in the paper. The interface was found to be intuitive and relevant, although a certain amount of training might be necessary.
Pandemic influenza: certain uncertainties
Morens, David M.; Taubenberger, Jeffery K.
2011-01-01
SUMMARY For at least five centuries, major epidemics and pandemics of influenza have occurred unexpectedly and at irregular intervals. Despite the modern notion that pandemic influenza is a distinct phenomenon obeying such constant (if incompletely understood) rules such as dramatic genetic change, cyclicity, “wave” patterning, virus replacement, and predictable epidemic behavior, much evidence suggests the opposite. Although there is much that we know about pandemic influenza, there appears to be much more that we do not know. Pandemics arise as a result of various genetic mechanisms, have no predictable patterns of mortality among different age groups, and vary greatly in how and when they arise and recur. Some are followed by new pandemics, whereas others fade gradually or abruptly into long-term endemicity. Human influenza pandemics have been caused by viruses that evolved singly or in co-circulation with other pandemic virus descendants and often have involved significant transmission between, or establishment of, viral reservoirs within other animal hosts. In recent decades, pandemic influenza has continued to produce numerous unanticipated events that expose fundamental gaps in scientific knowledge. Influenza pandemics appear to be not a single phenomenon but a heterogeneous collection of viral evolutionary events whose similarities are overshadowed by important differences, the determinants of which remain poorly understood. These uncertainties make it difficult to predict influenza pandemics and, therefore, to adequately plan to prevent them. PMID:21706672
Maugis, Pierre-André G
2018-07-01
Big data-the idea that an always-larger volume of information is being constantly recorded-suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusion obtained by statistical methods is increased when used on big data, either because of a systematic error (bias), or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.
Uncertainty enabled Sensor Observation Services
Cornford, Dan; Williams, Matthew; Bastin, Lucy
2010-05-01
Almost all observations of reality are contaminated with errors, which introduce uncertainties into the actual observation result. Such uncertainty is often held to be a data quality issue, and quantification of this uncertainty is essential for the principled exploitation of the observations. Many existing systems treat data quality in a relatively ad-hoc manner, however if the observation uncertainty is a reliable estimate of the error on the observation with respect to reality then knowledge of this uncertainty enables optimal exploitation of the observations in further processes, or decision making. We would argue that the most natural formalism for expressing uncertainty is Bayesian probability theory. In this work we show how the Open Geospatial Consortium Sensor Observation Service can be implemented to enable the support of explicit uncertainty about observations. We show how the UncertML candidate standard is used to provide a rich and flexible representation of uncertainty in this context. We illustrate this on a data set of user contributed weather data where the INTAMAP interpolation Web Processing Service is used to help estimate the uncertainty on the observations of unknown quality, using observations with known uncertainty properties. We then go on to discuss the implications of uncertainty for a range of existing Open Geospatial Consortium standards including SWE common and Observations and Measurements. We discuss the difficult decisions in the design of the UncertML schema and its relation and usage within existing standards and show various options. We conclude with some indications of the likely future directions for UncertML in the context of Open Geospatial Consortium services.
Learning about Measurement Uncertainties in Secondary Education: A Model of the Subject Matter
Priemer, Burkhard; Hellwig, Julia
2018-01-01
Estimating measurement uncertainties is important for experimental scientific work. However, this is very often neglected in school curricula and teaching practice, even though experimental work is seen as a fundamental part of teaching science. In order to call attention to the relevance of measurement uncertainties, we developed a comprehensive…
Strain gauge measurement uncertainties on hydraulic turbine runner blade
International Nuclear Information System (INIS)
Arpin-Pont, J; Gagnon, M; Tahan, S A; Coutu, A; Thibault, D
2012-01-01
Strains experimentally measured with strain gauges can differ from those evaluated using the Finite Element (FE) method. This difference is due mainly to the assumptions and uncertainties inherent to each method. To circumvent this difficulty, we developed a numerical method based on Monte Carlo simulations to evaluate measurement uncertainties produced by the behaviour of a unidirectional welded gauge, its position uncertainty and its integration effect. This numerical method uses the displacement fields of the studied part evaluated by an FE analysis. The paper presents a study case using in situ data measured on a hydraulic turbine runner. The FE analysis of the turbine runner blade was computed, and our numerical method used to evaluate uncertainties on strains measured at five locations with welded strain gauges. Then, measured strains and their uncertainty ranges are compared to the estimated strains. The uncertainty ranges obtained extended from 74 με to 165 με. Furthermore, the biases observed between the median of the uncertainty ranges and the FE strains varied from −36 to 36 με. Note that strain gauge measurement uncertainties depend mainly on displacement fields and gauge geometry.
Nuclear data uncertainties for local power densities in the Martin-Hoogenboom benchmark
International Nuclear Information System (INIS)
Van der Marck, S.C.; Rochman, D.A.
2013-01-01
The recently developed method of fast Total Monte Carlo to propagate nuclear data uncertainties was applied to the Martin-Hoogenboom benchmark. This Martin- Hoogenboom benchmark prescribes that one calculates local pin powers (of light water cooled reactor) with a statistical uncertainty lower than 1% everywhere. Here we report, for the first time, an estimate of the nuclear data uncertainties for these local pin powers. For each of the more than 6 million local power tallies, the uncertainty due to nuclear data uncertainties was calculated, based on random variation of data for 235 U, 238 U, 239 Pu and H in H 2 O thermal scattering. In the center of the core region, the nuclear data uncertainty is 0.9%. Towards the edges of the core, this uncertainty increases to roughly 3%. The nuclear data uncertainties have been shown to be larger than the statistical uncertainties that the benchmark prescribes
The use of error and uncertainty methods in the medical laboratory.
Oosterhuis, Wytze P; Bayat, Hassan; Armbruster, David; Coskun, Abdurrahman; Freeman, Kathleen P; Kallner, Anders; Koch, David; Mackenzie, Finlay; Migliarino, Gabriel; Orth, Matthias; Sandberg, Sverre; Sylte, Marit S; Westgard, Sten; Theodorsson, Elvar
2018-01-26
Error methods - compared with uncertainty methods - offer simpler, more intuitive and practical procedures for calculating measurement uncertainty and conducting quality assurance in laboratory medicine. However, uncertainty methods are preferred in other fields of science as reflected by the guide to the expression of uncertainty in measurement. When laboratory results are used for supporting medical diagnoses, the total uncertainty consists only partially of analytical variation. Biological variation, pre- and postanalytical variation all need to be included. Furthermore, all components of the measuring procedure need to be taken into account. Performance specifications for diagnostic tests should include the diagnostic uncertainty of the entire testing process. Uncertainty methods may be particularly useful for this purpose but have yet to show their strength in laboratory medicine. The purpose of this paper is to elucidate the pros and cons of error and uncertainty methods as groundwork for future consensus on their use in practical performance specifications. Error and uncertainty methods are complementary when evaluating measurement data.
Maximizing probable oil field profit: uncertainties on well spacing
International Nuclear Information System (INIS)
MacKay, J.A.; Lerche, I.
1997-01-01
The influence of uncertainties in field development costs, well costs, lifting costs, selling price, discount factor, and oil field reserves are evaluated for their impact on assessing probable ranges of uncertainty on present day worth (PDW), oil field lifetime τ 2/3 , optimum number of wells (OWI), and the minimum (n-) and maximum (n+) number of wells to produce a PDW ≥ O. The relative importance of different factors in contributing to the uncertainties in PDW, τ 2/3 , OWI, nsub(-) and nsub(+) is also analyzed. Numerical illustrations indicate how the maximum PDW depends on the ranges of parameter values, drawn from probability distributions using Monte Carlo simulations. In addition, the procedure illustrates the relative importance of contributions of individual factors to the total uncertainty, so that one can assess where to place effort to improve ranges of uncertainty; while the volatility of each estimate allows one to determine when such effort is needful. (author)
Supporting Qualified Database for Uncertainty Evaluation
International Nuclear Information System (INIS)
Petruzzi, A.; Fiori, F.; Kovtonyuk, A.; Lisovyy, O.; D'Auria, F.
2013-01-01
Uncertainty evaluation constitutes a key feature of BEPU (Best Estimate Plus Uncertainty) process. The uncertainty can be the result of a Monte Carlo type analysis involving input uncertainty parameters or the outcome of a process involving the use of experimental data and connected code calculations. Those uncertainty methods are discussed in several papers and guidelines (IAEA-SRS-52, OECD/NEA BEMUSE reports). The present paper aims at discussing the role and the depth of the analysis required for merging from one side suitable experimental data and on the other side qualified code calculation results. This aspect is mostly connected with the second approach for uncertainty mentioned above, but it can be used also in the framework of the first approach. Namely, the paper discusses the features and structure of the database that includes the following kinds of documents: 1. The 'RDS-facility' (Reference Data Set for the selected facility): this includes the description of the facility, the geometrical characterization of any component of the facility, the instrumentations, the data acquisition system, the evaluation of pressure losses, the physical properties of the material and the characterization of pumps, valves and heat losses; 2. The 'RDS-test' (Reference Data Set for the selected test of the facility): this includes the description of the main phenomena investigated during the test, the configuration of the facility for the selected test (possible new evaluation of pressure and heat losses if needed) and the specific boundary and initial conditions; 3. The 'QP' (Qualification Report) of the code calculation results: this includes the description of the nodalization developed following a set of homogeneous techniques, the achievement of the steady state conditions and the qualitative and quantitative analysis of the transient with the characterization of the Relevant Thermal-Hydraulics Aspects (RTA); 4. The EH (Engineering Handbook) of the input nodalization
Supporting qualified database for uncertainty evaluation
Energy Technology Data Exchange (ETDEWEB)
Petruzzi, A.; Fiori, F.; Kovtonyuk, A.; D' Auria, F. [Nuclear Research Group of San Piero A Grado, Univ. of Pisa, Via Livornese 1291, 56122 Pisa (Italy)
2012-07-01
Uncertainty evaluation constitutes a key feature of BEPU (Best Estimate Plus Uncertainty) process. The uncertainty can be the result of a Monte Carlo type analysis involving input uncertainty parameters or the outcome of a process involving the use of experimental data and connected code calculations. Those uncertainty methods are discussed in several papers and guidelines (IAEA-SRS-52, OECD/NEA BEMUSE reports). The present paper aims at discussing the role and the depth of the analysis required for merging from one side suitable experimental data and on the other side qualified code calculation results. This aspect is mostly connected with the second approach for uncertainty mentioned above, but it can be used also in the framework of the first approach. Namely, the paper discusses the features and structure of the database that includes the following kinds of documents: 1. The' RDS-facility' (Reference Data Set for the selected facility): this includes the description of the facility, the geometrical characterization of any component of the facility, the instrumentations, the data acquisition system, the evaluation of pressure losses, the physical properties of the material and the characterization of pumps, valves and heat losses; 2. The 'RDS-test' (Reference Data Set for the selected test of the facility): this includes the description of the main phenomena investigated during the test, the configuration of the facility for the selected test (possible new evaluation of pressure and heat losses if needed) and the specific boundary and initial conditions; 3. The 'QR' (Qualification Report) of the code calculation results: this includes the description of the nodalization developed following a set of homogeneous techniques, the achievement of the steady state conditions and the qualitative and quantitative analysis of the transient with the characterization of the Relevant Thermal-Hydraulics Aspects (RTA); 4. The EH (Engineering
Ali, E. S. M.; Spencer, B.; McEwen, M. R.; Rogers, D. W. O.
2015-02-01
In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy—i.e. 100 keV (orthovoltage) to 25 MeV—using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ˜0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative ‘envelope of uncertainty’ of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol 44 R1-22).
International Nuclear Information System (INIS)
Pendleton, Ph.; Badalyan, A.
2005-01-01
Activated carbon cloth (ACC) is a good adsorbent for high rate adsorption of volatile organic carbons [1] and as a storage media for methane [2]. It has been shown [2] that the capacity of ACC to adsorb methane, in the first instance, depends on its micropore volume. One way of increasing this storage capacity is to increase micropore volume [3]. Therefore, the uncertainty in the determination of ACC micropore volume becomes a very important factor, since it affects the uncertainty of amount adsorbed at high-pressures, which usually accompany storage of methane on ACC. Recently, we developed a method for the calculation of experimental uncertainty in micropore volume using low pressure nitrogen adsorption data at 77 K for FM1/250 ACC (ex. Calgon, USA). We tested several cubic equations of state (EOS) and multiple parameter (EOS) to determine the amount of high-pressure nitrogen adsorbed, and compared these data with amounts calculated via interpolated NIST density data. The amount adsorbed calculated from interpolated NIST density data exhibit the lowest propagated combined uncertainty. Values of relative combined standard uncertainty for FM1/250 calculated using a weighted, mean-least-squares method applied to the low-pressure nitrogen adsorption data (Fig. 1) gave 3.52% for the primary micropore volume and 1.63% for the total micropore volume. Our equipment allows the same sample to be exposed to nitrogen (and other gases) at pressures from 10 -4 Pa to 17-MPa in the temperature range from 176 to 252 K. The maximum uptake of nitrogen was 356-mmol/g at 201.92 K and 15.8-MPa (Fig. 2). The delivery capacity of ACC is determined by the amount of adsorbed gas recovered when the pressure is reduced from that for maximum adsorption to 0.1-MPa [2]. In this regard, the total micropore volume becomes an important parameter in determining the amount of gas delivered during desorption. In the present paper we will discuss the effect of uncertainty in micropore volume
A commentary on model uncertainty
International Nuclear Information System (INIS)
Apostolakis, G.
1994-01-01
A framework is proposed for the identification of model and parameter uncertainties in risk assessment models. Two cases are distinguished; in the first case, a set of mutually exclusive and exhaustive hypotheses (models) can be formulated, while, in the second, only one reference model is available. The relevance of this formulation to decision making and the communication of uncertainties is discussed
Mama Software Features: Uncertainty Testing
Energy Technology Data Exchange (ETDEWEB)
Ruggiero, Christy E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Porter, Reid B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-05-30
This document reviews how the uncertainty in the calculations is being determined with test image data. The results of this testing give an ‘initial uncertainty’ number than can be used to estimate the ‘back end’ uncertainty in digital image quantification in images. Statisticians are refining these numbers as part of a UQ effort.
Designing for Uncertainty: Three Approaches
Bennett, Scott
2007-01-01
Higher education wishes to get long life and good returns on its investment in learning spaces. Doing this has become difficult because rapid changes in information technology have created fundamental uncertainties about the future in which capital investments must deliver value. Three approaches to designing for this uncertainty are described…
Understanding and reducing statistical uncertainties in nebular abundance determinations
Wesson, R.; Stock, D. J.; Scicluna, P.
2012-06-01
Whenever observations are compared to theories, an estimate of the uncertainties associated with the observations is vital if the comparison is to be meaningful. However, many or even most determinations of temperatures, densities and abundances in photoionized nebulae do not quote the associated uncertainty. Those that do typically propagate the uncertainties using analytical techniques which rely on assumptions that generally do not hold. Motivated by this issue, we have developed Nebular Empirical Analysis Tool (NEAT), a new code for calculating chemical abundances in photoionized nebulae. The code carries out a standard analysis of lists of emission lines using long-established techniques to estimate the amount of interstellar extinction, calculate representative temperatures and densities, compute ionic abundances from both collisionally excited lines and recombination lines, and finally to estimate total elemental abundances using an ionization correction scheme. NEATuses a Monte Carlo technique to robustly propagate uncertainties from line flux measurements through to the derived abundances. We show that, for typical observational data, this approach is superior to analytic estimates of uncertainties. NEAT also accounts for the effect of upward biasing on measurements of lines with low signal-to-noise ratio, allowing us to accurately quantify the effect of this bias on abundance determinations. We find not only that the effect can result in significant overestimates of heavy element abundances derived from weak lines, but also that taking it into account reduces the uncertainty of these abundance determinations. Finally, we investigate the effect of possible uncertainties in R, the ratio of selective-to-total extinction, on abundance determinations. We find that the uncertainty due to this parameter is negligible compared to the statistical uncertainties due to typical line flux measurement uncertainties.
International Nuclear Information System (INIS)
Silva, Renan A.; Martins, Elaine A.J.; Furusawa, Helio A.
2011-01-01
The validation of analytical methods has become an indispensable tool for the analysis in chemical laboratories, including being required for such accreditation. However, even if a laboratory using validated methods of analysis there is the possibility that these methods generate results discrepant with reality by making necessary the addition of a quantitative attribute (a value) which indicates the degree of certainty the extent or the analytical method used. This measure assigned to the result of measurement is called measurement uncertainty. We estimate this uncertainty with a level of confidence both direction, an analytical result has limited significance if not carried out proper assessment of its uncertainty. One of the activities of this work was to elaborate a program to help the validation and evaluation of uncertainty in chemical analysis. The program was developed with Visual Basic programming language and method of evaluation of uncertainty introduced the following concepts based on the GUM (Guide to the Expression of Uncertainty in Measurement). This evaluation program uncertainty measurement will be applied to chemical analysis in support of the characterization of the Nuclear Fuel Cycle developed by IPEN and the study of organic substances in wastewater associated with professional activities of the Institute. In the first case, primarily for the determination of total uranium and the second case for substances that were generated by human activities and that are contained in resolution 357/2005. As strategy for development of this work was considered the PDCA cycle to improve the efficiency of each step and minimize errors while performing the experimental part. The program should be validated to meet requirements of standards such as, for example, the standard ISO/IEC 17025. The application, it is projected to use in other analytical procedures of both the Nuclear Fuel Cycle and in the control program and chemical waste management of IPEN
Energy Technology Data Exchange (ETDEWEB)
Silva, Renan A.; Martins, Elaine A.J.; Furusawa, Helio A., E-mail: elaine@ipen.br, E-mail: helioaf@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2011-07-01
The validation of analytical methods has become an indispensable tool for the analysis in chemical laboratories, including being required for such accreditation. However, even if a laboratory using validated methods of analysis there is the possibility that these methods generate results discrepant with reality by making necessary the addition of a quantitative attribute (a value) which indicates the degree of certainty the extent or the analytical method used. This measure assigned to the result of measurement is called measurement uncertainty. We estimate this uncertainty with a level of confidence both direction, an analytical result has limited significance if not carried out proper assessment of its uncertainty. One of the activities of this work was to elaborate a program to help the validation and evaluation of uncertainty in chemical analysis. The program was developed with Visual Basic programming language and method of evaluation of uncertainty introduced the following concepts based on the GUM (Guide to the Expression of Uncertainty in Measurement). This evaluation program uncertainty measurement will be applied to chemical analysis in support of the characterization of the Nuclear Fuel Cycle developed by IPEN and the study of organic substances in wastewater associated with professional activities of the Institute. In the first case, primarily for the determination of total uranium and the second case for substances that were generated by human activities and that are contained in resolution 357/2005. As strategy for development of this work was considered the PDCA cycle to improve the efficiency of each step and minimize errors while performing the experimental part. The program should be validated to meet requirements of standards such as, for example, the standard ISO/IEC 17025. The application, it is projected to use in other analytical procedures of both the Nuclear Fuel Cycle and in the control program and chemical waste management of IPEN
Quantifying uncertainties in precipitation: a case study from Greece
Directory of Open Access Journals (Sweden)
C. Anagnostopoulou
2008-04-01
Full Text Available The main objective of the present study was the examination and the quantification of the uncertainties in the precipitation time series over the Greek area, for a 42-year time period. The uncertainty index applied to the rainfall data is a combination (total of the departures of the rainfall season length, of the median data of the accumulated percentages and of the total amounts of rainfall. Results of the study indicated that all the stations are characterized, on an average basis, by medium to high uncertainty. The stations that presented an increasing rainfall uncertainty were the ones located mainly to the continental parts of the study region. From the temporal analysis of the uncertainty index, it was demonstrated that the greatest percentage of the years, for all the stations time-series, was characterized by low to high uncertainty (intermediate categories of the index. Most of the results of the uncertainty index for the Greek region are similar to the corresponding results of various stations all over the European region.
Account of the uncertainty factor in forecasting nuclear power development
International Nuclear Information System (INIS)
Chernavskij, S.Ya.
1979-01-01
Minimization of total discounted costs for linear constraints is commonly used in forecasting nuclear energy growth. This approach is considered inadequate due to the uncertainty of exogenous variables of the model. A method of forecasting that takes into account the presence of uncertainty is elaborated. An example that demonstrates the expediency of the method and its advantage over the conventional approximation method used for taking uncertainty into account is given. In the framework of the example, the optimal strategy for nuclear energy growth over period of 500 years is determined
Sensitivity and uncertainty analysis of NET/ITER shielding blankets
International Nuclear Information System (INIS)
Hogenbirk, A.; Gruppelaar, H.; Verschuur, K.A.
1990-09-01
Results are presented of sensitivity and uncertainty calculations based upon the European fusion file (EFF-1). The effect of uncertainties in Fe, Cr and Ni cross sections on the nuclear heating in the coils of a NET/ITER shielding blanket has been studied. The analysis has been performed for the total cross section as well as partial cross sections. The correct expression for the sensitivity profile was used, including the gain term. The resulting uncertainty in the nuclear heating lies between 10 and 20 per cent. (author). 18 refs.; 2 figs.; 2 tabs
Risk, Uncertainty, and Entrepreneurship
DEFF Research Database (Denmark)
Koudstaal, Martin; Sloof, Randolph; Van Praag, Mirjam
2016-01-01
21288). The results indicate that entrepreneurs perceive themselves as less risk averse than managers and employees, in line with common wisdom. However, when using experimental incentivized measures, the differences are subtler. Entrepreneurs are only found to be unique in their lower degree of loss...... aversion, and not in their risk or ambiguity aversion. This combination of results might be explained by our finding that perceived risk attitude is not only correlated to risk aversion but also to loss aversion. Overall, we therefore suggest using a broader definition of risk that captures this unique...... feature of entrepreneurs: their willingness to risk losses....
Risk, Uncertainty and Entrepreneurship
DEFF Research Database (Denmark)
Koudstaal, Martin; Sloof, Randolph; Van Praag, Mirjam
. Entrepreneurs are only found to be unique in their lower degree of loss aversion, and not in their risk or ambiguity aversion. This combination of results might be explained by our finding that perceived risk attitude is not only correlated to risk aversion but also to loss aversion. Overall, we therefore...... entrepreneurs to managers – a suitable comparison group – and employees (n = 2288). The results indicate that entrepreneurs perceive themselves as less risk averse than managers and employees, in line with common wisdom. However, when using experimental incentivized measures, the differences are subtler...... suggest using a broader definition of risk that captures this unique feature of entrepreneurs; their willingness to risk losses....
ESFR core optimization and uncertainty studies
International Nuclear Information System (INIS)
Rineiski, A.; Vezzoni, B.; Zhang, D.; Marchetti, M.; Gabrielli, F.; Maschek, W.; Chen, X.-N.; Buiron, L.; Krepel, J.; Sun, K.; Mikityuk, K.; Polidoro, F.; Rochman, D.; Koning, A.J.; DaCruz, D.F.; Tsige-Tamirat, H.; Sunderland, R.
2015-01-01
In the European Sodium Fast Reactor (ESFR) project supported by EURATOM in 2008-2012, a concept for a large 3600 MWth sodium-cooled fast reactor design was investigated. In particular, reference core designs with oxide and carbide fuel were optimized to improve their safety parameters. Uncertainties in these parameters were evaluated for the oxide option. Core modifications were performed first to reduce the sodium void reactivity effect. Introduction of a large sodium plenum with an absorber layer above the core and a lower axial fertile blanket improve the total sodium void effect appreciably, bringing it close to zero for a core with fresh fuel, in line with results obtained worldwide, while not influencing substantially other core physics parameters. Therefore an optimized configuration, CONF2, with a sodium plenum and a lower blanket was established first and used as a basis for further studies in view of deterioration of safety parameters during reactor operation. Further options to study were an inner fertile blanket, introduction of moderator pins, a smaller core height, special designs for pins, such as 'empty' pins, and subassemblies. These special designs were proposed to facilitate melted fuel relocation in order to avoid core re-criticality under severe accident conditions. In the paper further CONF2 modifications are compared in terms of safety and fuel balance. They may bring further improvements in safety, but their accurate assessment requires additional studies, including transient analyses. Uncertainty studies were performed by employing a so-called Total Monte-Carlo method, for which a large number of nuclear data files is produced for single isotopes and then used in Monte-Carlo calculations. The uncertainties for the criticality, sodium void and Doppler effects, effective delayed neutron fraction due to uncertainties in basic nuclear data were assessed for an ESFR core. They prove applicability of the available nuclear data for ESFR
Uncertainty analysis of thermal quantities measurement in a centrifugal compressor
Hurda, Lukáš; Matas, Richard
2017-09-01
Compressor performance characteristics evaluation process based on the measurement of pressure, temperature and other quantities is examined to find uncertainties for directly measured and derived quantities. CFD is used as a tool to quantify the influences of different sources of uncertainty of measurements for single- and multi-thermocouple total temperature probes. The heat conduction through the body of the thermocouple probe and the heat-up of the air in the intake piping are the main phenomena of interest.
Uncertainties in Nuclear Proliferation Modeling
International Nuclear Information System (INIS)
Kim, Chul Min; Yim, Man-Sung; Park, Hyeon Seok
2015-01-01
There have been various efforts in the research community to understand the determinants of nuclear proliferation and develop quantitative tools to predict nuclear proliferation events. Such systematic approaches have shown the possibility to provide warning for the international community to prevent nuclear proliferation activities. However, there are still large debates for the robustness of the actual effect of determinants and projection results. Some studies have shown that several factors can cause uncertainties in previous quantitative nuclear proliferation modeling works. This paper analyzes the uncertainties in the past approaches and suggests future works in the view of proliferation history, analysis methods, and variable selection. The research community still lacks the knowledge for the source of uncertainty in current models. Fundamental problems in modeling will remain even other advanced modeling method is developed. Before starting to develop fancy model based on the time dependent proliferation determinants' hypothesis, using graph theory, etc., it is important to analyze the uncertainty of current model to solve the fundamental problems of nuclear proliferation modeling. The uncertainty from different proliferation history coding is small. Serious problems are from limited analysis methods and correlation among the variables. Problems in regression analysis and survival analysis cause huge uncertainties when using the same dataset, which decreases the robustness of the result. Inaccurate variables for nuclear proliferation also increase the uncertainty. To overcome these problems, further quantitative research should focus on analyzing the knowledge suggested on the qualitative nuclear proliferation studies
Measurement uncertainty: Friend or foe?
Infusino, Ilenia; Panteghini, Mauro
2018-02-02
The definition and enforcement of a reference measurement system, based on the implementation of metrological traceability of patients' results to higher order reference methods and materials, together with a clinically acceptable level of measurement uncertainty, are fundamental requirements to produce accurate and equivalent laboratory results. The uncertainty associated with each step of the traceability chain should be governed to obtain a final combined uncertainty on clinical samples fulfilling the requested performance specifications. It is important that end-users (i.e., clinical laboratory) may know and verify how in vitro diagnostics (IVD) manufacturers have implemented the traceability of their calibrators and estimated the corresponding uncertainty. However, full information about traceability and combined uncertainty of calibrators is currently very difficult to obtain. Laboratory professionals should investigate the need to reduce the uncertainty of the higher order metrological references and/or to increase the precision of commercial measuring systems. Accordingly, the measurement uncertainty should not be considered a parameter to be calculated by clinical laboratories just to fulfil the accreditation standards, but it must become a key quality indicator to describe both the performance of an IVD measuring system and the laboratory itself. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Nuclear data sensitivity/uncertainty analysis for XT-ADS
International Nuclear Information System (INIS)
Sugawara, Takanori; Sarotto, Massimo; Stankovskiy, Alexey; Van den Eynde, Gert
2011-01-01
Highlights: → The sensitivity and uncertainty analyses were performed to comprehend the reliability of the XT-ADS neutronic design. → The uncertainties deduced from the covariance data for the XT-ADS criticality were 0.94%, 1.9% and 1.1% by the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. → When the target accuracy of 0.3%Δk for the criticality was considered, the uncertainties did not satisfy it. → To achieve this accuracy, the uncertainties should be improved by experiments under an adequate condition. - Abstract: The XT-ADS, an accelerator-driven system for an experimental demonstration, has been investigated in the framework of IP EUROTRANS FP6 project. In this study, the sensitivity and uncertainty analyses were performed to comprehend the reliability of the XT-ADS neutronic design. For the sensitivity analysis, it was found that the sensitivity coefficients were significantly different by changing the geometry models and calculation codes. For the uncertainty analysis, it was confirmed that the uncertainties deduced from the covariance data varied significantly by changing them. The uncertainties deduced from the covariance data for the XT-ADS criticality were 0.94%, 1.9% and 1.1% by the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. When the target accuracy of 0.3%Δk for the criticality was considered, the uncertainties did not satisfy it. To achieve this accuracy, the uncertainties should be improved by experiments under an adequate condition.
Micropollutants throughout an integrated urban drainage model: Sensitivity and uncertainty analysis
Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare
2017-11-01
The paper presents the sensitivity and uncertainty analysis of an integrated urban drainage model which includes micropollutants. Specifically, a bespoke integrated model developed in previous studies has been modified in order to include the micropollutant assessment (namely, sulfamethoxazole - SMX). The model takes into account also the interactions between the three components of the system: sewer system (SS), wastewater treatment plant (WWTP) and receiving water body (RWB). The analysis has been applied to an experimental catchment nearby Palermo (Italy): the Nocella catchment. Overall, five scenarios, each characterized by different uncertainty combinations of sub-systems (i.e., SS, WWTP and RWB), have been considered applying, for the sensitivity analysis, the Extended-FAST method in order to select the key factors affecting the RWB quality and to design a reliable/useful experimental campaign. Results have demonstrated that sensitivity analysis is a powerful tool for increasing operator confidence in the modelling results. The approach adopted here can be used for blocking some non-identifiable factors, thus wisely modifying the structure of the model and reducing the related uncertainty. The model factors related to the SS have been found to be the most relevant factors affecting the SMX modeling in the RWB when all model factors (scenario 1) or model factors of SS (scenarios 2 and 3) are varied. If the only factors related to the WWTP are changed (scenarios 4 and 5), the SMX concentration in the RWB is mainly influenced (till to 95% influence of the total variance for SSMX,max) by the aerobic sorption coefficient. A progressive uncertainty reduction from the upstream to downstream was found for the soluble fraction of SMX in the RWB.
Raben, Jaime S; Hariharan, Prasanna; Robinson, Ronald; Malinauskas, Richard; Vlachos, Pavlos P
2016-03-01
We present advanced particle image velocimetry (PIV) processing, post-processing, and uncertainty estimation techniques to support the validation of computational fluid dynamics analyses of medical devices. This work is an extension of a previous FDA-sponsored multi-laboratory study, which used a medical device mimicking geometry referred to as the FDA benchmark nozzle model. Experimental measurements were performed using time-resolved PIV at five overlapping regions of the model for Reynolds numbers in the nozzle throat of 500, 2000, 5000, and 8000. Images included a twofold increase in spatial resolution in comparison to the previous study. Data was processed using ensemble correlation, dynamic range enhancement, and phase correlations to increase signal-to-noise ratios and measurement accuracy, and to resolve flow regions with large velocity ranges and gradients, which is typical of many blood-contacting medical devices. Parameters relevant to device safety, including shear stress at the wall and in bulk flow, were computed using radial basis functions. In addition, in-field spatially resolved pressure distributions, Reynolds stresses, and energy dissipation rates were computed from PIV measurements. Velocity measurement uncertainty was estimated directly from the PIV correlation plane, and uncertainty analysis for wall shear stress at each measurement location was performed using a Monte Carlo model. Local velocity uncertainty varied greatly and depended largely on local conditions such as particle seeding, velocity gradients, and particle displacements. Uncertainty in low velocity regions in the sudden expansion section of the nozzle was greatly reduced by over an order of magnitude when dynamic range enhancement was applied. Wall shear stress uncertainty was dominated by uncertainty contributions from velocity estimations, which were shown to account for 90-99% of the total uncertainty. This study provides advancements in the PIV processing methodologies over
Model uncertainty: Probabilities for models?
International Nuclear Information System (INIS)
Winkler, R.L.
1994-01-01
Like any other type of uncertainty, model uncertainty should be treated in terms of probabilities. The question is how to do this. The most commonly-used approach has a drawback related to the interpretation of the probabilities assigned to the models. If we step back and look at the big picture, asking what the appropriate focus of the model uncertainty question should be in the context of risk and decision analysis, we see that a different probabilistic approach makes more sense, although it raise some implementation questions. Current work that is underway to address these questions looks very promising
Extensive neutronic sensitivity-uncertainty analysis of a fusion reactor shielding blanket
International Nuclear Information System (INIS)
Hogenbirk, A.
1994-01-01
In this paper the results are presented of an extensive neutronic sensitivity-uncertainty study performed for the design of a shielding blanket for a next-step fusion reactor, such as ITER. A code system was used, which was developed at ECN Petten. The uncertainty in an important response parameter, the neutron heating in the inboard superconducting coils, was evaluated. Neutron transport calculations in the 100 neutron group GAM-II structure were performed using the code ANISN. For the sensitivity and uncertainty calculations the code SUSD was used. Uncertainties due to cross-section uncertainties were taken into account as well as uncertainties due to uncertainties in energy and angular distributions of scattered neutrons (SED and SAD uncertainties, respectively). The subject of direct-term uncertainties (i.e. uncertainties due to uncertainties in the kerma factors of the superconducting coils) is briefly touched upon. It is shown that SAD uncertainties, which have been largely neglected until now, contribute significantly to the total uncertainty. Moreover, the contribution of direct-term uncertainties may be large. The total uncertainty in the neutron heating, only due to Fe cross-sections, amounts to approximately 25%, which is rather large. However, uncertainty data are scarce and the data may very well be conservative. It is shown in this paper that with the code system used, sensitivity and uncertainty calculations can be performed in a straightforward way. Therefore, it is suggested that emphasis is now put on the generation of realistic, reliable covariance data for cross-sections as well as for angular and energy distributions. ((orig.))
Decision-making under great uncertainty
International Nuclear Information System (INIS)
Hansson, S.O.
1992-01-01
Five types of decision-uncertainty are distinguished: uncertainty of consequences, of values, of demarcation, of reliance, and of co-ordination. Strategies are proposed for each type of uncertainty. The general conclusion is that it is meaningful for decision theory to treat cases with greater uncertainty than the textbook case of 'decision-making under uncertainty'. (au)
International Nuclear Information System (INIS)
Arpaia, Pasquale; De Vito, Luca; Kazazi, Mario
2016-01-01
In the uncertainty assessment of magnetic flux measurements in axially symmetric magnets by the translating coil method, the Guide to the Uncertainty in Measurement and its supplement cannot be applied: the voltage variation at the coil terminals, which is the actual measured quantity, affects the flux estimate and its uncertainty. In this paper, a particle filter, implementing a sequential Monte-Carlo method based on Bayesian inference, is applied. At this aim, the main uncertainty sources are analyzed and a model of the measurement process is defined. The results of the experimental validation point out the transport system and the acquisition system as the main contributions to the uncertainty budget. (authors)
Subspace-based Inverse Uncertainty Quantification for Nuclear Data Assessment
Energy Technology Data Exchange (ETDEWEB)
Khuwaileh, B.A., E-mail: bakhuwai@ncsu.edu; Abdel-Khalik, H.S.
2015-01-15
Safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. An inverse problem can be defined and solved to assess the sources of uncertainty, and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. In this work a subspace-based algorithm for inverse sensitivity/uncertainty quantification (IS/UQ) has been developed to enable analysts account for all sources of nuclear data uncertainties in support of target accuracy assessment-type analysis. An approximate analytical solution of the optimization problem is used to guide the search for the dominant uncertainty subspace. By limiting the search to a subspace, the degrees of freedom available for the optimization search are significantly reduced. A quarter PWR fuel assembly is modeled and the accuracy of the multiplication factor and the fission reaction rate are used as reactor attributes whose uncertainties are to be reduced. Numerical experiments are used to demonstrate the computational efficiency of the proposed algorithm. Our ongoing work is focusing on extending the proposed algorithm to account for various forms of feedback, e.g., thermal-hydraulics and depletion effects.
The Uncertainties of Risk Management
DEFF Research Database (Denmark)
Vinnari, Eija; Skærbæk, Peter
2014-01-01
for expanding risk management. More generally, such uncertainties relate to the professional identities and responsibilities of operational managers as defined by the framing devices. Originality/value – The paper offers three contributions to the extant literature: first, it shows how risk management itself......Purpose – The purpose of this paper is to analyse the implementation of risk management as a tool for internal audit activities, focusing on unexpected effects or uncertainties generated during its application. Design/methodology/approach – Public and confidential documents as well as semi......-structured interviews are analysed through the lens of actor-network theory to identify the effects of risk management devices in a Finnish municipality. Findings – The authors found that risk management, rather than reducing uncertainty, itself created unexpected uncertainties that would otherwise not have emerged...
Climate Projections and Uncertainty Communication.
Joslyn, Susan L; LeClerc, Jared E
2016-01-01
Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections. Copyright © 2015 Cognitive Science Society, Inc.
Relational uncertainty in service dyads
DEFF Research Database (Denmark)
Kreye, Melanie
2017-01-01
in service dyads and how they resolve it through suitable organisational responses to increase the level of service quality. Design/methodology/approach: We apply the overall logic of Organisational Information-Processing Theory (OIPT) and present empirical insights from two industrial case studies collected...... the relational uncertainty increased the functional quality while resolving the partner’s organisational uncertainty increased the technical quality of the delivered service. Originality: We make two contributions. First, we introduce relational uncertainty to the OM literature as the inability to predict...... and explain the actions of a partnering organisation due to a lack of knowledge about their abilities and intentions. Second, we present suitable organisational responses to relational uncertainty and their effect on service quality....
Advanced LOCA code uncertainty assessment
International Nuclear Information System (INIS)
Wickett, A.J.; Neill, A.P.
1990-11-01
This report describes a pilot study that identified, quantified and combined uncertainties for the LOBI BL-02 3% small break test. A ''dials'' version of TRAC-PF1/MOD1, called TRAC-F, was used. (author)
Expanded and combined uncertainty in measurements by GM counters
International Nuclear Information System (INIS)
Stankovic, K.; Arandjic, D.; Lazarevic, Dj.; Osmokrovic, P.
2007-01-01
This paper deals with possible ways of obtaining expanded and combined uncertainty in measurements for four types of GM counters with a same counter's tube, in cases when the contributors of these uncertainties are cosmic background radiation and induced overvoltage phenomena. Nowadays, as a consequence of electromagnetic radiation, the latter phenomenon is especially marked in urban environments. Based on experimental results obtained, it has been established that the uncertainties of an influenced random variable 'number of pulses from background radiation' and 'number of pulses induced by overvoltage' depend on the technological solution of the counter's reading system and contribute in different ways to the expanded and combined uncertainty in measurements of the applied types of GM counters. (author)
A Bayesian approach for quantification of model uncertainty
International Nuclear Information System (INIS)
Park, Inseok; Amarchinta, Hemanth K.; Grandhi, Ramana V.
2010-01-01
In most engineering problems, more than one model can be created to represent an engineering system's behavior. Uncertainty is inevitably involved in selecting the best model from among the models that are possible. Uncertainty in model selection cannot be ignored, especially when the differences between the predictions of competing models are significant. In this research, a methodology is proposed to quantify model uncertainty using measured differences between experimental data and model outcomes under a Bayesian statistical framework. The adjustment factor approach is used to propagate model uncertainty into prediction of a system response. A nonlinear vibration system is used to demonstrate the processes for implementing the adjustment factor approach. Finally, the methodology is applied on the engineering benefits of a laser peening process, and a confidence band for residual stresses is established to indicate the reliability of model prediction.
Recent experimental results on level densities for compound reaction calculations
International Nuclear Information System (INIS)
Voinov, A.V.
2012-01-01
There is a problem related to the choice of the level density input for Hauser-Feshbach model calculations. Modern computer codes have several options to choose from but it is not clear which of them has to be used in some particular cases. Availability of many options helps to describe existing experimental data but it creates problems when it comes to predictions. Traditionally, different level density systematics are based on experimental data from neutron resonance spacing which are available for a limited spin interval and one parity only. On the other hand reaction cross section calculations use the total level density. This can create large uncertainties when converting the neutron resonance spacing to the total level density that results in sizable uncertainties in cross section calculations. It is clear now that total level densities need to be studied experimentally in a systematic manner. Such information can be obtained only from spectra of compound nuclear reactions. The question is does level densities obtained from compound nuclear reactions keep the same regularities as level densities obtained from neutron resonances- Are they consistent- We measured level densities of 59-64 Ni isotopes from proton evaporation spectra of 6,7 Li induced reactions. Experimental data are presented. Conclusions of how level density depends on the neutron number and on the degree of proximity to the closed shell ( 56 Ni) are drawn. The level density parameters have been compared with parameters obtained from the analysis of neutron resonances and from model predictions
How to live with uncertainties?
International Nuclear Information System (INIS)
Michel, R.
2012-01-01
In a short introduction, the problem of uncertainty as a general consequence of incomplete information as well as the approach to quantify uncertainty in metrology are addressed. A little history of the more than 30 years of the working group AK SIGMA is followed by an appraisal of its up-to-now achievements. Then, the potential future of the AK SIGMA is discussed based on its actual tasks and on open scientific questions and future topics. (orig.)
Some remarks on modeling uncertainties
International Nuclear Information System (INIS)
Ronen, Y.
1983-01-01
Several topics related to the question of modeling uncertainties are considered. The first topic is related to the use of the generalized bias operator method for modeling uncertainties. The method is expanded to a more general form of operators. The generalized bias operator is also used in the inverse problem and applied to determine the anisotropic scattering law. The last topic discussed is related to the question of the limit to accuracy and how to establish its value. (orig.) [de
Uncertainty analysis in safety assessment
International Nuclear Information System (INIS)
Lemos, Francisco Luiz de; Sullivan, Terry
1997-01-01
Nuclear waste disposal is a very complex subject which requires the study of many different fields of science, like hydro geology, meteorology, geochemistry, etc. In addition, the waste disposal facilities are designed to last for a very long period of time. Both of these conditions make safety assessment projections filled with uncertainty. This paper addresses approaches for treatment of uncertainties in the safety assessment modeling due to the variability of data and some current approaches used to deal with this problem. (author)
Optimal Taxation under Income Uncertainty
Xianhua Dai
2011-01-01
Optimal taxation under income uncertainty has been extensively developed in expected utility theory, but it is still open for inseparable utility function between income and effort. As an alternative of decision-making under uncertainty, prospect theory (Kahneman and Tversky (1979), Tversky and Kahneman (1992)) has been obtained empirical support, for example, Kahneman and Tversky (1979), and Camerer and Lowenstein (2003). It is beginning to explore optimal taxation in the context of prospect...
New Perspectives on Policy Uncertainty
Hlatshwayo, Sandile
2017-01-01
In recent years, the ubiquitous and intensifying nature of economic policy uncertainty has made it a popular explanation for weak economic performance in developed and developing markets alike. The primary channel for this effect is decreased and delayed investment as firms adopt a ``wait and see'' approach to irreversible investments (Bernanke, 1983; Dixit and Pindyck, 1994). Deep empirical examination of policy uncertainty's impact is rare because of the difficulty associated in measuring i...
Uncertainty Analysis of the NASA Glenn 8x6 Supersonic Wind Tunnel
Stephens, Julia; Hubbard, Erin; Walter, Joel; McElroy, Tyler
2016-01-01
This paper presents methods and results of a detailed measurement uncertainty analysis that was performed for the 8- by 6-foot Supersonic Wind Tunnel located at the NASA Glenn Research Center. The statistical methods and engineering judgments used to estimate elemental uncertainties are described. The Monte Carlo method of propagating uncertainty was selected to determine the uncertainty of calculated variables of interest. A detailed description of the Monte Carlo method as applied for this analysis is provided. Detailed uncertainty results for the uncertainty in average free stream Mach number as well as other variables of interest are provided. All results are presented as random (variation in observed values about a true value), systematic (potential offset between observed and true value), and total (random and systematic combined) uncertainty. The largest sources contributing to uncertainty are determined and potential improvement opportunities for the facility are investigated.
Pharmacological Fingerprints of Contextual Uncertainty.
Directory of Open Access Journals (Sweden)
Louise Marshall
2016-11-01
Full Text Available Successful interaction with the environment requires flexible updating of our beliefs about the world. By estimating the likelihood of future events, it is possible to prepare appropriate actions in advance and execute fast, accurate motor responses. According to theoretical proposals, agents track the variability arising from changing environments by computing various forms of uncertainty. Several neuromodulators have been linked to uncertainty signalling, but comprehensive empirical characterisation of their relative contributions to perceptual belief updating, and to the selection of motor responses, is lacking. Here we assess the roles of noradrenaline, acetylcholine, and dopamine within a single, unified computational framework of uncertainty. Using pharmacological interventions in a sample of 128 healthy human volunteers and a hierarchical Bayesian learning model, we characterise the influences of noradrenergic, cholinergic, and dopaminergic receptor antagonism on individual computations of uncertainty during a probabilistic serial reaction time task. We propose that noradrenaline influences learning of uncertain events arising from unexpected changes in the environment. In contrast, acetylcholine balances attribution of uncertainty to chance fluctuations within an environmental context, defined by a stable set of probabilistic associations, or to gross environmental violations following a contextual switch. Dopamine supports the use of uncertainty representations to engender fast, adaptive responses.
Communicating uncertainty in hydrological forecasts: mission impossible?
Ramos, Maria-Helena; Mathevet, Thibault; Thielen, Jutta; Pappenberger, Florian
2010-05-01
Cascading uncertainty in meteo-hydrological modelling chains for forecasting and integrated flood risk assessment is an essential step to improve the quality of hydrological forecasts. Although the best methodology to quantify the total predictive uncertainty in hydrology is still debated, there is a common agreement that one must avoid uncertainty misrepresentation and miscommunication, as well as misinterpretation of information by users. Several recent studies point out that uncertainty, when properly explained and defined, is no longer unwelcome among emergence response organizations, users of flood risk information and the general public. However, efficient communication of uncertain hydro-meteorological forecasts is far from being a resolved issue. This study focuses on the interpretation and communication of uncertain hydrological forecasts based on (uncertain) meteorological forecasts and (uncertain) rainfall-runoff modelling approaches to decision-makers such as operational hydrologists and water managers in charge of flood warning and scenario-based reservoir operation. An overview of the typical flow of uncertainties and risk-based decisions in hydrological forecasting systems is presented. The challenges related to the extraction of meaningful information from probabilistic forecasts and the test of its usefulness in assisting operational flood forecasting are illustrated with the help of two case-studies: 1) a study on the use and communication of probabilistic flood forecasting within the European Flood Alert System; 2) a case-study on the use of probabilistic forecasts by operational forecasters from the hydroelectricity company EDF in France. These examples show that attention must be paid to initiatives that promote or reinforce the active participation of expert forecasters in the forecasting chain. The practice of face-to-face forecast briefings, focusing on sharing how forecasters interpret, describe and perceive the model output forecasted
Estimates of bias and uncertainty in recorded external dose
International Nuclear Information System (INIS)
Fix, J.J.; Gilbert, E.S.; Baumgartner, W.V.
1994-10-01
A study is underway to develop an approach to quantify bias and uncertainty in recorded dose estimates for workers at the Hanford Site based on personnel dosimeter results. This paper focuses on selected experimental studies conducted to better define response characteristics of Hanford dosimeters. The study is more extensive than the experimental studies presented in this paper and includes detailed consideration and evaluation of other sources of bias and uncertainty. Hanford worker dose estimates are used in epidemiologic studies of nuclear workers. A major objective of these studies is to provide a direct assessment of the carcinogenic risk of exposure to ionizing radiation at low doses and dose rates. Considerations of bias and uncertainty in the recorded dose estimates are important in the conduct of this work. The method developed for use with Hanford workers can be considered an elaboration of the approach used to quantify bias and uncertainty in estimated doses for personnel exposed to radiation as a result of atmospheric testing of nuclear weapons between 1945 and 1962. This approach was first developed by a National Research Council (NRC) committee examining uncertainty in recorded film badge doses during atmospheric tests (NRC 1989). It involved quantifying both bias and uncertainty from three sources (i.e., laboratory, radiological, and environmental) and then combining them to obtain an overall assessment. Sources of uncertainty have been evaluated for each of three specific Hanford dosimetry systems (i.e., the Hanford two-element film dosimeter, 1944-1956; the Hanford multi-element film dosimeter, 1957-1971; and the Hanford multi-element TLD, 1972-1993) used to estimate personnel dose throughout the history of Hanford operations. Laboratory, radiological, and environmental sources of bias and uncertainty have been estimated based on historical documentation and, for angular response, on selected laboratory measurements
Fergus, Thomas A; Rowatt, Wade C
2015-03-01
Difficulties tolerating uncertainty are considered central to scrupulosity, a moral/religious presentation of obsessive-compulsive disorder (OCD). We examined whether uncertainty salience (i.e., exposure to a state of uncertainty) caused fears of sin and fears of God, as well as whether priming God concepts affected the impact of uncertainty salience on those fears. An internet sample of community adults (N = 120) who endorsed holding a belief in God or a higher power were randomly assigned to an experimental manipulation of (1) salience (uncertainty or insecurity) and (2) prime (God concepts or neutral). As predicted, participants who received the uncertainty salience and God concept priming reported the greatest fears of sin. There were no mean-level differences in the other conditions. The effect was not attributable to religiosity and the manipulations did not cause negative affect. We used a nonclinical sample recruited from the internet. These results support cognitive-behavioral models suggesting that religious uncertainty is important to scrupulosity. Implications of these results for future research are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
A Bayesian approach to model uncertainty
International Nuclear Information System (INIS)
Buslik, A.
1994-01-01
A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
The NASA Langley Multidisciplinary Uncertainty Quantification Challenge
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.
Modeling of uncertainties in biochemical reactions.
Mišković, Ljubiša; Hatzimanikatis, Vassily
2011-02-01
Mathematical modeling is an indispensable tool for research and development in biotechnology and bioengineering. The formulation of kinetic models of biochemical networks depends on knowledge of the kinetic properties of the enzymes of the individual reactions. However, kinetic data acquired from experimental observations bring along uncertainties due to various experimental conditions and measurement methods. In this contribution, we propose a novel way to model the uncertainty in the enzyme kinetics and to predict quantitatively the responses of metabolic reactions to the changes in enzyme activities under uncertainty. The proposed methodology accounts explicitly for mechanistic properties of enzymes and physico-chemical and thermodynamic constraints, and is based on formalism from systems theory and metabolic control analysis. We achieve this by observing that kinetic responses of metabolic reactions depend: (i) on the distribution of the enzymes among their free form and all reactive states; (ii) on the equilibrium displacements of the overall reaction and that of the individual enzymatic steps; and (iii) on the net fluxes through the enzyme. Relying on this observation, we develop a novel, efficient Monte Carlo sampling procedure to generate all states within a metabolic reaction that satisfy imposed constrains. Thus, we derive the statistics of the expected responses of the metabolic reactions to changes in enzyme levels and activities, in the levels of metabolites, and in the values of the kinetic parameters. We present aspects of the proposed framework through an example of the fundamental three-step reversible enzymatic reaction mechanism. We demonstrate that the equilibrium displacements of the individual enzymatic steps have an important influence on kinetic responses of the enzyme. Furthermore, we derive the conditions that must be satisfied by a reversible three-step enzymatic reaction operating far away from the equilibrium in order to respond to
Experimental program at Fermilab
International Nuclear Information System (INIS)
Jovanovic, D.
1974-01-01
The experimental program at Fermilab is briefly surveyed: accelerators and experimental areas, current experiments such as elastic scattering of π +- , K +- , p +- , on proton and deuteron total cross sections, neutrino physics, high transverse momentum [fr
Validation uncertainty of MATRA code for subchannel void distributions
Energy Technology Data Exchange (ETDEWEB)
Hwang, Dae-Hyun; Kim, S. J.; Kwon, H.; Seo, K. W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
To extend code capability to the whole core subchannel analysis, pre-conditioned Krylov matrix solvers such as BiCGSTAB and GMRES are implemented in MATRA code as well as parallel computing algorithms using MPI and OPENMP. It is coded by fortran 90, and has some user friendly features such as graphic user interface. MATRA code was approved by Korean regulation body for design calculation of integral-type PWR named SMART. The major role subchannel code is to evaluate core thermal margin through the hot channel analysis and uncertainty evaluation for CHF predictions. In addition, it is potentially used for the best estimation of core thermal hydraulic field by incorporating into multiphysics and/or multi-scale code systems. In this study we examined a validation process for the subchannel code MATRA specifically in the prediction of subchannel void distributions. The primary objective of validation is to estimate a range within which the simulation modeling error lies. The experimental data for subchannel void distributions at steady state and transient conditions was provided on the framework of OECD/NEA UAM benchmark program. The validation uncertainty of MATRA code was evaluated for a specific experimental condition by comparing the simulation result and experimental data. A validation process should be preceded by code and solution verification. However, quantification of verification uncertainty was not addressed in this study. The validation uncertainty of the MATRA code for predicting subchannel void distribution was evaluated for a single data point of void fraction measurement at a 5x5 PWR test bundle on the framework of OECD UAM benchmark program. The validation standard uncertainties were evaluated as 4.2%, 3.9%, and 2.8% with the Monte-Carlo approach at the axial levels of 2216 mm, 2669 mm, and 3177 mm, respectively. The sensitivity coefficient approach revealed similar results of uncertainties but did not account for the nonlinear effects on the
Uncertainty Estimates: A New Editorial Standard
International Nuclear Information System (INIS)
Drake, Gordon W.F.
2014-01-01
Full text: The objective of achieving higher standards for uncertainty estimates in the publication of theoretical data for atoms and molecules requires a concerted effort by both the authors of papers and the editors who send them out for peer review. In April, 2011, the editors of Physical Review A published an Editorial announcing a new standard that uncertainty estimates would be required whenever practicable, and in particular in the following circumstances: 1. If the authors claim high accuracy, or improvements on the accuracy of previous work. 2. If the primary motivation for the paper is to make comparisons with present or future high precision experimental measurements. 3. If the primary motivation is to provide interpolations or extrapolations of known experimental measurements. The new policy means that papers that do not meet these standards are not sent out for peer review until they have been suitably revised, and the authors are so notified immediately upon receipt. The policy has now been in effect for three years. (author
Cooke, Georga; Tapley, Amanda; Holliday, Elizabeth; Morgan, Simon; Henderson, Kim; Ball, Jean; van Driel, Mieke; Spike, Neil; Kerr, Rohan; Magin, Parker
2017-12-01
Tolerance for ambiguity is essential for optimal learning and professional competence. General practice trainees must be, or must learn to be, adept at managing clinical uncertainty. However, few studies have examined associations of intolerance of uncertainty in this group. The aim of this study was to establish levels of tolerance of uncertainty in Australian general practice trainees and associations of uncertainty with demographic, educational and training practice factors. A cross-sectional analysis was performed on the Registrar Clinical Encounters in Training (ReCEnT) project, an ongoing multi-site cohort study. Scores on three of the four independent subscales of the Physicians' Reaction to Uncertainty (PRU) instrument were analysed as outcome variables in linear regression models with trainee and practice factors as independent variables. A total of 594 trainees contributed data on a total of 1209 occasions. Trainees in earlier training terms had higher scores for 'Anxiety due to uncertainty', 'Concern about bad outcomes' and 'Reluctance to disclose diagnosis/treatment uncertainty to patients'. Beyond this, findings suggest two distinct sets of associations regarding reaction to uncertainty. Firstly, affective aspects of uncertainty (the 'Anxiety' and 'Concern' subscales) were associated with female gender, less experience in hospital prior to commencing general practice training, and graduation overseas. Secondly, a maladaptive response to uncertainty (the 'Reluctance to disclose' subscale) was associated with urban practice, health qualifications prior to studying medicine, practice in an area of higher socio-economic status, and being Australian-trained. This study has established levels of three measures of trainees' responses to uncertainty and associations with these responses. The current findings suggest differing 'phenotypes' of trainees with high 'affective' responses to uncertainty and those reluctant to disclose uncertainty to patients. More
Measurement uncertainty analysis techniques applied to PV performance measurements
Energy Technology Data Exchange (ETDEWEB)
Wells, C.
1992-10-01
The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the I interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis A rigorous measurement uncertainty analysis: Increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates quality assurance and quality control measures have been accomplished; define Valid Data as data having known and documented paths of: Origin, including theory; measurements; traceability to measurement standards; computations; uncertainty analysis of results.
Measurement uncertainty analysis techniques applied to PV performance measurements
Energy Technology Data Exchange (ETDEWEB)
Wells, C
1992-10-01
The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the I interval about a measured value or an experiment`s final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: Increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates quality assurance and quality control measures have been accomplished; define Valid Data as data having known and documented paths of: Origin, including theory; measurements; traceability to measurement standards; computations; uncertainty analysis of results.
Do Orthopaedic Surgeons Acknowledge Uncertainty?
Teunis, Teun; Janssen, Stein; Guitton, Thierry G; Ring, David; Parisien, Robert
2016-06-01
Much of the decision-making in orthopaedics rests on uncertain evidence. Uncertainty is therefore part of our normal daily practice, and yet physician uncertainty regarding treatment could diminish patients' health. It is not known if physician uncertainty is a function of the evidence alone or if other factors are involved. With added experience, uncertainty could be expected to diminish, but perhaps more influential are things like physician confidence, belief in the veracity of what is published, and even one's religious beliefs. In addition, it is plausible that the kind of practice a physician works in can affect the experience of uncertainty. Practicing physicians may not be immediately aware of these effects on how uncertainty is experienced in their clinical decision-making. We asked: (1) Does uncertainty and overconfidence bias decrease with years of practice? (2) What sociodemographic factors are independently associated with less recognition of uncertainty, in particular belief in God or other deity or deities, and how is atheism associated with recognition of uncertainty? (3) Do confidence bias (confidence that one's skill is greater than it actually is), degree of trust in the orthopaedic evidence, and degree of statistical sophistication correlate independently with recognition of uncertainty? We created a survey to establish an overall recognition of uncertainty score (four questions), trust in the orthopaedic evidence base (four questions), confidence bias (three questions), and statistical understanding (six questions). Seven hundred six members of the Science of Variation Group, a collaboration that aims to study variation in the definition and treatment of human illness, were approached to complete our survey. This group represents mainly orthopaedic surgeons specializing in trauma or hand and wrist surgery, practicing in Europe and North America, of whom the majority is involved in teaching. Approximately half of the group has more than 10 years
International Nuclear Information System (INIS)
Sandweiss, J.; Kycia, T.F.
1975-01-01
A discussion is given of the eight identical experimental insertions for the planned ISABELLE storage rings. Four sets of quadrupole doublets are used to match the β functions in the insertions to the values in the cells, and the total free space available at the crossing point is 40 meters. An asymmetric beam energy operation is planned, which will be useful in a number of experiments
Directory of Open Access Journals (Sweden)
Griffin Patrick
2017-01-01
Full Text Available A rigorous treatment of the uncertainty in the underlying nuclear data on silicon displacement damage metrics is presented. The uncertainty in the cross sections and recoil atom spectra are propagated into the energy-dependent uncertainty contribution in the silicon displacement kerma and damage energy using a Total Monte Carlo treatment. An energy-dependent covariance matrix is used to characterize the resulting uncertainty. A strong correlation between different reaction channels is observed in the high energy neutron contributions to the displacement damage metrics which supports the necessity of using a Monte Carlo based method to address the nonlinear nature of the uncertainty propagation.
A new method of body habitus correction for total body potassium measurements
International Nuclear Information System (INIS)
O'Hehir, S; Green, S; Beddoe, A H
2006-01-01
This paper describes an accurate and time-efficient method for the determination of total body potassium via a combination of measurements in the Birmingham whole body counter and the use of the Monte Carlo n-particle (MCNP) simulation code. In developing this method, MCNP has also been used to derive values for some components of the total measurement uncertainty which are difficult to quantify experimentally. A method is proposed for MCNP-assessed body habitus corrections based on a simple generic anthropomorphic model, scaled for individual height and weight. The use of this model increases patient comfort by reducing the need for comprehensive anthropomorphic measurements. The analysis shows that the total uncertainty in potassium weight determination by this whole body counting methodology for water-filled phantoms with a known amount of potassium is 2.7% (SD). The uncertainty in the method of body habitus correction (applicable also to phantom-based methods) is 1.5% (SD). It is concluded that this new strategy provides a sufficiently accurate model for routine clinical use
A new method of body habitus correction for total body potassium measurements
Energy Technology Data Exchange (ETDEWEB)
O' Hehir, S [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom); Green, S [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom); Beddoe, A H [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom)
2006-09-07
This paper describes an accurate and time-efficient method for the determination of total body potassium via a combination of measurements in the Birmingham whole body counter and the use of the Monte Carlo n-particle (MCNP) simulation code. In developing this method, MCNP has also been used to derive values for some components of the total measurement uncertainty which are difficult to quantify experimentally. A method is proposed for MCNP-assessed body habitus corrections based on a simple generic anthropomorphic model, scaled for individual height and weight. The use of this model increases patient comfort by reducing the need for comprehensive anthropomorphic measurements. The analysis shows that the total uncertainty in potassium weight determination by this whole body counting methodology for water-filled phantoms with a known amount of potassium is 2.7% (SD). The uncertainty in the method of body habitus correction (applicable also to phantom-based methods) is 1.5% (SD). It is concluded that this new strategy provides a sufficiently accurate model for routine clinical use.
On total noncommutativity in quantum mechanics
Lahti, Pekka J.; Ylinen, Kari
1987-11-01
It is shown within the Hilbert space formulation of quantum mechanics that the total noncommutativity of any two physical quantities is necessary for their satisfying the uncertainty relation or for their being complementary. The importance of these results is illustrated with the canonically conjugate position and momentum of a free particle and of a particle closed in a box.
Limited entropic uncertainty as new principle of quantum physics
International Nuclear Information System (INIS)
Ion, D.B.; Ion, M.L.
2001-01-01
The Uncertainty Principle (UP) of quantum mechanics discovered by Heisenberg, which constitute the corner-stone of quantum physics, asserts that: there is an irreducible lower bound on the uncertainty in the result of a simultaneous measurement of non-commuting observables. In order to avoid this state-dependence many authors proposed to use the information entropy as a measure of the uncertainty instead of above standard quantitative formulation of the Heisenberg uncertainty principle. In this paper the Principle of Limited Entropic Uncertainty (LEU-Principle), as a new principle in quantum physics, is proved. Then, consistent experimental tests of the LEU-principle, obtained by using the available 49 sets of the pion-nucleus phase shifts, are presented for both, extensive (q=1) and nonextensive (q=0.5 and q=2.0) cases. Some results obtained by the application of LEU-Principle to the diffraction phenomena are also discussed. The main results and conclusions of our paper can be summarized as follows: (i) We introduced a new principle in quantum physics namely the Principle of Limited Entropic Uncertainty (LEU-Principle). This new principle includes in a more general and exact form not only the old Heisenberg uncertainty principle but also introduce an upper limit on the magnitude of the uncertainty in the quantum physics. The LEU-Principle asserts that: 'there is an irreducible lower bound as well as an upper bound on the uncertainty in the result of a simultaneous measurement of non-commuting observables for any extensive and nonextensive (q ≥ 0) quantum systems'; (ii) Two important concrete realizations of the LEU-Principle are explicitly obtained in this paper, namely: (a) the LEU-inequalities for the quantum scattering of spinless particles and (b) the LEU-inequalities for the diffraction on single slit of width 2a. In particular from our general results, in the limit y → +1 we recover in an exact form all the results previously reported. In our paper an
Critical loads - assessment of uncertainty
Energy Technology Data Exchange (ETDEWEB)
Barkman, A.
1998-10-01
The effects of data uncertainty in applications of the critical loads concept were investigated on different spatial resolutions in Sweden and northern Czech Republic. Critical loads of acidity (CL) were calculated for Sweden using the biogeochemical model PROFILE. Three methods with different structural complexity were used to estimate the adverse effects of S0{sub 2} concentrations in northern Czech Republic. Data uncertainties in the calculated critical loads/levels and exceedances (EX) were assessed using Monte Carlo simulations. Uncertainties within cumulative distribution functions (CDF) were aggregated by accounting for the overlap between site specific confidence intervals. Aggregation of data uncertainties within CDFs resulted in lower CL and higher EX best estimates in comparison with percentiles represented by individual sites. Data uncertainties were consequently found to advocate larger deposition reductions to achieve non-exceedance based on low critical loads estimates on 150 x 150 km resolution. Input data were found to impair the level of differentiation between geographical units at all investigated resolutions. Aggregation of data uncertainty within CDFs involved more constrained confidence intervals for a given percentile. Differentiation as well as identification of grid cells on 150 x 150 km resolution subjected to EX was generally improved. Calculation of the probability of EX was shown to preserve the possibility to differentiate between geographical units. Re-aggregation of the 95%-ile EX on 50 x 50 km resolution generally increased the confidence interval for each percentile. Significant relationships were found between forest decline and the three methods addressing risks induced by S0{sub 2} concentrations. Modifying S0{sub 2} concentrations by accounting for the length of the vegetation period was found to constitute the most useful trade-off between structural complexity, data availability and effects of data uncertainty. Data
Uncertainty Quantification in Numerical Aerodynamics
Litvinenko, Alexander
2017-05-16
We consider uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Strokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations which involve a large number of variables. In numerical section we compares five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature and gradient-enhanced version of Kriging, radial basis functions and point collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry [D.Liu et al \\'17]. For modeling we used the TAU code, developed in DLR, Germany.
Uncertainty in spatial planning proceedings
Directory of Open Access Journals (Sweden)
Aleš Mlakar
2009-01-01
Full Text Available Uncertainty is distinctive of spatial planning as it arises from the necessity to co-ordinate the various interests within the area, from the urgency of adopting spatial planning decisions, the complexity of the environment, physical space and society, addressing the uncertainty of the future and from the uncertainty of actually making the right decision. Response to uncertainty is a series of measures that mitigate the effects of uncertainty itself. These measures are based on two fundamental principles – standardization and optimization. The measures are related to knowledge enhancement and spatial planning comprehension, in the legal regulation of changes, in the existence of spatial planning as a means of different interests co-ordination, in the active planning and the constructive resolution of current spatial problems, in the integration of spatial planning and the environmental protection process, in the implementation of the analysis as the foundation of spatial planners activities, in the methods of thinking outside the parameters, in forming clear spatial concepts and in creating a transparent management spatial system and also in the enforcement the participatory processes.
Uncertainty modeling and decision support
International Nuclear Information System (INIS)
Yager, Ronald R.
2004-01-01
We first formulate the problem of decision making under uncertainty. The importance of the representation of our knowledge about the uncertainty in formulating a decision process is pointed out. We begin with a brief discussion of the case of probabilistic uncertainty. Next, in considerable detail, we discuss the case of decision making under ignorance. For this case the fundamental role of the attitude of the decision maker is noted and its subjective nature is emphasized. Next the case in which a Dempster-Shafer belief structure is used to model our knowledge of the uncertainty is considered. Here we also emphasize the subjective choices the decision maker must make in formulating a decision function. The case in which the uncertainty is represented by a fuzzy measure (monotonic set function) is then investigated. We then return to the Dempster-Shafer belief structure and show its relationship to the fuzzy measure. This relationship allows us to get a deeper understanding of the formulation the decision function used Dempster- Shafer framework. We discuss how this deeper understanding allows a decision analyst to better make the subjective choices needed in the formulation of the decision function
Uncertainty Quantification of Multi-Phase Closures
Energy Technology Data Exchange (ETDEWEB)
Nadiga, Balasubramanya T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Baglietto, Emilio [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)
2017-10-27
In the ensemble-averaged dispersed phase formulation used for CFD of multiphase ows in nuclear reactor thermohydraulics, closures of interphase transfer of mass, momentum, and energy constitute, by far, the biggest source of error and uncertainty. Reliable estimators of this source of error and uncertainty are currently non-existent. Here, we report on how modern Validation and Uncertainty Quanti cation (VUQ) techniques can be leveraged to not only quantify such errors and uncertainties, but also to uncover (unintended) interactions between closures of di erent phenomena. As such this approach serves as a valuable aide in the research and development of multiphase closures. The joint modeling of lift, drag, wall lubrication, and turbulent dispersion|forces that lead to tranfer of momentum between the liquid and gas phases|is examined in the frame- work of validation of the adiabatic but turbulent experiments of Liu and Banko , 1993. An extensive calibration study is undertaken with a popular combination of closure relations and the popular k-ϵ turbulence model in a Bayesian framework. When a wide range of super cial liquid and gas velocities and void fractions is considered, it is found that this set of closures can be validated against the experimental data only by allowing large variations in the coe cients associated with the closures. We argue that such an extent of variation is a measure of uncertainty induced by the chosen set of closures. We also nd that while mean uid velocity and void fraction pro les are properly t, uctuating uid velocity may or may not be properly t. This aspect needs to be investigated further. The popular set of closures considered contains ad-hoc components and are undesirable from a predictive modeling point of view. Consequently, we next consider improvements that are being developed by the MIT group under CASL and which remove the ad-hoc elements. We use non-intrusive methodologies for sensitivity analysis and calibration (using
Evaluation of the theoretical uncertainties in the W → lν cross sections at the LHC
International Nuclear Information System (INIS)
Adam, Nadia E.; Halyo, Valerie; Zhu Wenhan; Yost, Scott A.
2008-01-01
We study the sources of systematic errors in the measurement of the W → lν cross-sections at the LHC. We consider the systematic errors in both the total cross-section and acceptance for anticipated experimental cuts. We include the best available analysis of QCD effects at NNLO in assessing the effect of higher order corrections and PDF and scale uncertainties on the theoretical acceptance. In addition, we evaluate the error due to missing NLO electroweak corrections and propose which MC generators and computational schemes should be implemented to best simulate the events.
International Nuclear Information System (INIS)
Davis, C.B.
1987-08-01
The uncertainties of calculations of loss-of-feedwater transients at Davis-Besse Unit 1 were determined to address concerns of the US Nuclear Regulatory Commission relative to the effectiveness of feed and bleed cooling. Davis-Besse Unit 1 is a pressurized water reactor of the raised-loop Babcock and Wilcox design. A detailed, quality-assured RELAP5/MOD2 model of Davis-Besse was developed at the Idaho National Engineering Laboratory. The model was used to perform an analysis of the loss-of-feedwater transient that occurred at Davis-Besse on June 9, 1985. A loss-of-feedwater transient followed by feed and bleed cooling was also calculated. The evaluation of uncertainty was based on the comparisons of calculations and data, comparisons of different calculations of the same transient, sensitivity calculations, and the propagation of the estimated uncertainty in initial and boundary conditions to the final calculated results
Decommissioning Funding: Ethics, Implementation, Uncertainties
International Nuclear Information System (INIS)
2007-01-01
This status report on decommissioning funding: ethics, implementation, uncertainties is based on a review of recent literature and materials presented at NEA meetings in 2003 and 2004, and particularly at a topical session organised in November 2004 on funding issues associated with the decommissioning of nuclear power facilities. The report also draws on the experience of the NEA Working Party on Decommissioning and Dismantling (WPDD). This report offers, in a concise form, an overview of relevant considerations on decommissioning funding mechanisms with regard to ethics, implementation and uncertainties. Underlying ethical principles found in international agreements are identified, and factors influencing the accumulation and management of funds for decommissioning nuclear facilities are discussed together with the main sources of uncertainties of funding systems
Correlated uncertainties in integral data
International Nuclear Information System (INIS)
McCracken, A.K.
1978-01-01
The use of correlated uncertainties in calculational data is shown in cases investigated to lead to a reduction in the uncertainty of calculated quantities of importance to reactor design. It is stressed however that such reductions are likely to be important in a minority of cases of practical interest. The effect of uncertainties in detector cross-sections is considered and is seen to be, in some cases, of equal importance to that in the data used in calculations. Numerical investigations have been limited by the sparse information available on data correlations; some comparisons made of these data reveal quite large inconsistencies for both detector cross-sections and cross-section of interest for reactor calculations
Uncertainty and Sensitivity Analyses Plan
International Nuclear Information System (INIS)
Simpson, J.C.; Ramsdell, J.V. Jr.
1993-04-01
Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project
Uncertainty propagation through dynamic models of assemblies of mechanical structures
International Nuclear Information System (INIS)
Daouk, Sami
2016-01-01
When studying the behaviour of mechanical systems, mathematical models and structural parameters are usually considered deterministic. Return on experience shows however that these elements are uncertain in most cases, due to natural variability or lack of knowledge. Therefore, quantifying the quality and reliability of the numerical model of an industrial assembly remains a major question in low-frequency dynamics. The purpose of this thesis is to improve the vibratory design of bolted assemblies through setting up a dynamic connector model that takes account of different types and sources of uncertainty on stiffness parameters, in a simple, efficient and exploitable in industrial context. This work has been carried out in the framework of the SICODYN project, led by EDF R and D, that aims to characterise and quantify, numerically and experimentally, the uncertainties in the dynamic behaviour of bolted industrial assemblies. Comparative studies of several numerical methods of uncertainty propagation demonstrate the advantage of using the Lack-Of-Knowledge theory. An experimental characterisation of uncertainties in bolted structures is performed on a dynamic test rig and on an industrial assembly. The propagation of many small and large uncertainties through different dynamic models of mechanical assemblies leads to the assessment of the efficiency of the Lack-Of-Knowledge theory and its applicability in an industrial environment. (author)
Uncertainty analysis in safety assessment
Energy Technology Data Exchange (ETDEWEB)
Lemos, Francisco Luiz de [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN), Belo Horizonte, MG (Brazil); Sullivan, Terry [Brookhaven National Lab., Upton, NY (United States)
1997-12-31
Nuclear waste disposal is a very complex subject which requires the study of many different fields of science, like hydro geology, meteorology, geochemistry, etc. In addition, the waste disposal facilities are designed to last for a very long period of time. Both of these conditions make safety assessment projections filled with uncertainty. This paper addresses approaches for treatment of uncertainties in the safety assessment modeling due to the variability of data and some current approaches used to deal with this problem. (author) 13 refs.; e-mail: lemos at bnl.gov; sulliva1 at bnl.gov
Awe, uncertainty, and agency detection.
Valdesolo, Piercarlo; Graham, Jesse
2014-01-01
Across five studies, we found that awe increases both supernatural belief (Studies 1, 2, and 5) and intentional-pattern perception (Studies 3 and 4)-two phenomena that have been linked to agency detection, or the tendency to interpret events as the consequence of intentional and purpose-driven agents. Effects were both directly and conceptually replicated, and mediational analyses revealed that these effects were driven by the influence of awe on tolerance for uncertainty. Experiences of awe decreased tolerance for uncertainty, which, in turn, increased the tendency to believe in nonhuman agents and to perceive human agency in random events.
Linear Programming Problems for Generalized Uncertainty
Thipwiwatpotjana, Phantipa
2010-01-01
Uncertainty occurs when there is more than one realization that can represent an information. This dissertation concerns merely discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…
Uncertainty in estimating and mitigating industrial related GHG emissions
International Nuclear Information System (INIS)
El-Fadel, M.; Zeinati, M.; Ghaddar, N.; Mezher, T.
2001-01-01
Global climate change has been one of the challenging environmental concerns facing policy makers in the past decade. The characterization of the wide range of greenhouse gas emissions sources and sinks as well as their behavior in the atmosphere remains an on-going activity in many countries. Lebanon, being a signatory to the Framework Convention on Climate Change, is required to submit and regularly update a national inventory of greenhouse gas emissions sources and removals. Accordingly, an inventory of greenhouse gases from various sectors was conducted following the guidelines set by the United Nations Intergovernmental Panel on Climate Change (IPCC). The inventory indicated that the industrial sector contributes about 29% to the total greenhouse gas emissions divided between industrial processes and energy requirements at 12 and 17%, respectively. This paper describes major mitigation scenarios to reduce emissions from this sector based on associated technical, economic, environmental, and social characteristics. Economic ranking of these scenarios was conducted and uncertainty in emission factors used in the estimation process was emphasized. For this purpose, theoretical and experimental emission factors were used as alternatives to default factors recommended by the IPCC and the significance of resulting deviations in emission estimation is presented. (author)
International Nuclear Information System (INIS)
Austregesilo, Henrique; Bals, Christine; Trambauer, Klaus
2007-01-01
In the frame of developmental assessment and code validation, a post-test calculation of the test QUENCH-07 was performed with ATHLET-CD. The system code ATHLET-CD is being developed for best-estimate simulation of accidents with core degradation and for evaluation of accident management procedures. It applies the detailed models of the thermal-hydraulic code ATHLET in an efficient coupling with dedicated models for core degradation and fission products behaviour. The first step of the work was the simulation of the test QUENCH-07 applying the modelling options recommended in the code User's Manual (reference calculation). The global results of this calculation showed a good agreement with the measured data. This calculation was complemented by a sensitivity analysis in order to investigate the influence of a combined variation of code input parameters on the simulation of the main phenomena observed experimentally. Results of this sensitivity analysis indicate that the main experimental measurements lay within the uncertainty range of the corresponding calculated values. Among the main contributors to the uncertainty of code results are the heat transfer coefficient due to forced convection to superheated steam-argon mixture, the thermal conductivity of the shroud isolation and the external heater rod resistance. Uncertainties on modelling of B 4 C oxidation do not affect significantly the total calculated hydrogen release rates
New challenges on uncertainty propagation assessment of flood risk analysis
Martins, Luciano; Aroca-Jiménez, Estefanía; Bodoque, José M.; Díez-Herrero, Andrés
2016-04-01
Natural hazards, such as floods, cause considerable damage to the human life, material and functional assets every year and around the World. Risk assessment procedures has associated a set of uncertainties, mainly of two types: natural, derived from stochastic character inherent in the flood process dynamics; and epistemic, that are associated with lack of knowledge or the bad procedures employed in the study of these processes. There are abundant scientific and technical literature on uncertainties estimation in each step of flood risk analysis (e.g. rainfall estimates, hydraulic modelling variables); but very few experience on the propagation of the uncertainties along the flood risk assessment. Therefore, epistemic uncertainties are the main goal of this work, in particular,understand the extension of the propagation of uncertainties throughout the process, starting with inundability studies until risk analysis, and how far does vary a proper analysis of the risk of flooding. These methodologies, such as Polynomial Chaos Theory (PCT), Method of Moments or Monte Carlo, are used to evaluate different sources of error, such as data records (precipitation gauges, flow gauges...), hydrologic and hydraulic modelling (inundation estimation), socio-demographic data (damage estimation) to evaluate the uncertainties propagation (UP) considered in design flood risk estimation both, in numerical and cartographic expression. In order to consider the total uncertainty and understand what factors are contributed most to the final uncertainty, we used the method of Polynomial Chaos Theory (PCT). It represents an interesting way to handle to inclusion of uncertainty in the modelling and simulation process. PCT allows for the development of a probabilistic model of the system in a deterministic setting. This is done by using random variables and polynomials to handle the effects of uncertainty. Method application results have a better robustness than traditional analysis
Uncertainty, probability and information-gaps
International Nuclear Information System (INIS)
Ben-Haim, Yakov
2004-01-01
This paper discusses two main ideas. First, we focus on info-gap uncertainty, as distinct from probability. Info-gap theory is especially suited for modelling and managing uncertainty in system models: we invest all our knowledge in formulating the best possible model; this leaves the modeller with very faulty and fragmentary information about the variation of reality around that optimal model. Second, we examine the interdependence between uncertainty modelling and decision-making. Good uncertainty modelling requires contact with the end-use, namely, with the decision-making application of the uncertainty model. The most important avenue of uncertainty-propagation is from initial data- and model-uncertainties into uncertainty in the decision-domain. Two questions arise. Is the decision robust to the initial uncertainties? Is the decision prone to opportune windfall success? We apply info-gap robustness and opportunity functions to the analysis of representation and propagation of uncertainty in several of the Sandia Challenge Problems
Chapter 3: Traceability and uncertainty
International Nuclear Information System (INIS)
McEwen, Malcolm
2014-01-01
Chapter 3 presents: an introduction; Traceability (measurement standard, role of the Bureau International des Poids et Mesures, Secondary Standards Laboratories, documentary standards and traceability as process review); Uncertainty (Example 1 - Measurement, M raw (SSD), Example 2 - Calibration data, N D.w 60 Co, kQ, Example 3 - Correction factor, P TP ) and Conclusion
Competitive Capacity Investment under Uncertainty
X. Li (Xishu); R.A. Zuidwijk (Rob); M.B.M. de Koster (René); R. Dekker (Rommert)
2016-01-01
textabstractWe consider a long-term capacity investment problem in a competitive market under demand uncertainty. Two firms move sequentially in the competition and a firm’s capacity decision interacts with the other firm’s current and future capacity. Throughout the investment race, a firm can
Uncertainty quantification and error analysis
Energy Technology Data Exchange (ETDEWEB)
Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Uncertainties in radioecological assessment models
International Nuclear Information System (INIS)
Hoffman, F.O.; Miller, C.W.; Ng, Y.C.
1983-01-01
Environmental radiological assessments rely heavily on the use of mathematical models. The predictions of these models are inherently uncertain because models are inexact representations of real systems. The major sources of this uncertainty are related to bias in model formulation and imprecision in parameter estimation. The magnitude of uncertainty is a function of the questions asked of the model and the specific radionuclides and exposure pathways of dominant importance. It is concluded that models developed as research tools should be distinguished from models developed for assessment applications. Furthermore, increased model complexity does not necessarily guarantee increased accuracy. To improve the realism of assessment modeling, stochastic procedures are recommended that translate uncertain parameter estimates into a distribution of predicted values. These procedures also permit the importance of model parameters to be ranked according to their relative contribution to the overall predicted uncertainty. Although confidence in model predictions can be improved through site-specific parameter estimation and increased model validation, health risk factors and internal dosimetry models will probably remain important contributors to the amount of uncertainty that is irreducible. 41 references, 4 figures, 4 tables
Numerical modeling of economic uncertainty
DEFF Research Database (Denmark)
Schjær-Jacobsen, Hans
2007-01-01
Representation and modeling of economic uncertainty is addressed by different modeling methods, namely stochastic variables and probabilities, interval analysis, and fuzzy numbers, in particular triple estimates. Focusing on discounted cash flow analysis numerical results are presented, comparisons...... are made between alternative modeling methods, and characteristics of the methods are discussed....
Uncertainty covariances in robotics applications
International Nuclear Information System (INIS)
Smith, D.L.
1984-01-01
The application of uncertainty covariance matrices in the analysis of robot trajectory errors is explored. First, relevant statistical concepts are reviewed briefly. Then, a simple, hypothetical robot model is considered to illustrate methods for error propagation and performance test data evaluation. The importance of including error correlations is emphasized
Regulating renewable resources under uncertainty
DEFF Research Database (Denmark)
Hansen, Lars Gårn
) that a pro-quota result under uncertainty about prices and marginal costs is unlikely, requiring that the resource growth function is highly concave locally around the optimum and, 3) that quotas are always preferred if uncertainly about underlying structural economic parameters dominates. These results...... showing that quotas are preferred in a number of situations qualify the pro fee message dominating prior studies....
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 4; Issue 2. Uncertainty in the Real World - Fuzzy Sets. Satish Kumar. General Article Volume 4 Issue 2 February 1999 pp 37-47. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/004/02/0037-0047 ...
Uncertainty of dustfall monitoring results
Directory of Open Access Journals (Sweden)
Martin A. van Nierop
2017-06-01
Full Text Available Fugitive dust has the ability to cause a nuisance and pollute the ambient environment, particularly from human activities including construction and industrial sites and mining operations. As such, dustfall monitoring has occurred for many decades in South Africa; little has been published on the repeatability, uncertainty, accuracy and precision of dustfall monitoring. Repeatability assesses the consistency associated with the results of a particular measurement under the same conditions; the consistency of the laboratory is assessed to determine the uncertainty associated with dustfall monitoring conducted by the laboratory. The aim of this study was to improve the understanding of the uncertainty in dustfall monitoring; thereby improving the confidence in dustfall monitoring. Uncertainty of dustfall monitoring was assessed through a 12-month study of 12 sites that were located on the boundary of the study area. Each site contained a directional dustfall sampler, which was modified by removing the rotating lid, with four buckets (A, B, C and D installed. Having four buckets on one stand allows for each bucket to be exposed to the same conditions, for the same period of time; therefore, should have equal amounts of dust deposited in these buckets. The difference in the weight (mg of the dust recorded from each bucket at each respective site was determined using the American Society for Testing and Materials method D1739 (ASTM D1739. The variability of the dust would provide the confidence level of dustfall monitoring when reporting to clients.
Knowledge Uncertainty and Composed Classifier
Czech Academy of Sciences Publication Activity Database
Klimešová, Dana; Ocelíková, E.
2007-01-01
Roč. 1, č. 2 (2007), s. 101-105 ISSN 1998-0140 Institutional research plan: CEZ:AV0Z10750506 Keywords : Boosting architecture * contextual modelling * composed classifier * knowledge management, * knowledge * uncertainty Subject RIV: IN - Informatics, Computer Science
Uncertainty propagation in nuclear forensics
International Nuclear Information System (INIS)
Pommé, S.; Jerome, S.M.; Venchiarutti, C.
2014-01-01
Uncertainty propagation formulae are presented for age dating in support of nuclear forensics. The age of radioactive material in this context refers to the time elapsed since a particular radionuclide was chemically separated from its decay product(s). The decay of the parent radionuclide and ingrowth of the daughter nuclide are governed by statistical decay laws. Mathematical equations allow calculation of the age of specific nuclear material through the atom ratio between parent and daughter nuclides, or through the activity ratio provided that the daughter nuclide is also unstable. The derivation of the uncertainty formulae of the age may present some difficulty to the user community and so the exact solutions, some approximations, a graphical representation and their interpretation are presented in this work. Typical nuclides of interest are actinides in the context of non-proliferation commitments. The uncertainty analysis is applied to a set of important parent–daughter pairs and the need for more precise half-life data is examined. - Highlights: • Uncertainty propagation formulae for age dating with nuclear chronometers. • Applied to parent–daughter pairs used in nuclear forensics. • Investigated need for better half-life data
WASH-1400: quantifying the uncertainties
International Nuclear Information System (INIS)
Erdmann, R.C.; Leverenz, F.L. Jr.; Lellouche, G.S.
1981-01-01
The purpose of this paper is to focus on the limitations of the WASH-1400 analysis in estimating the risk from light water reactors (LWRs). This assessment attempts to modify the quantification of the uncertainty in and estimate of risk as presented by the RSS (reactor safety study). 8 refs
Model uncertainty in growth empirics
Prüfer, P.
2008-01-01
This thesis applies so-called Bayesian model averaging (BMA) to three different economic questions substantially exposed to model uncertainty. Chapter 2 addresses a major issue of modern development economics: the analysis of the determinants of pro-poor growth (PPG), which seeks to combine high
Health information seeking and the World Wide Web: an uncertainty management perspective.
Rains, Stephen A
2014-01-01
Uncertainty management theory was applied in the present study to offer one theoretical explanation for how individuals use the World Wide Web to acquire health information and to help better understand the implications of the Web for information seeking. The diversity of information sources available on the Web and potential to exert some control over the depth and breadth of one's information-acquisition effort is argued to facilitate uncertainty management. A total of 538 respondents completed a questionnaire about their uncertainty related to cancer prevention and information-seeking behavior. Consistent with study predictions, use of the Web for information seeking interacted with respondents' desired level of uncertainty to predict their actual level of uncertainty about cancer prevention. The results offer evidence that respondents who used the Web to search for cancer information were better able than were respondents who did not seek information to achieve a level of uncertainty commensurate with the level of uncertainty they desired.
Enhancing uncertainty tolerance in the modelling creep of ligaments
International Nuclear Information System (INIS)
Taha, M M Reda; Lucero, J
2006-01-01
The difficulty in performing biomechanical tests and the scarcity of biomechanical experimental databases necessitate extending the current knowledge base to allow efficient modelling using limited data sets. This study suggests a framework to reduce uncertainties in biomechanical systems using limited data sets. The study also shows how sparse data and epistemic input can be exploited using fuzzy logic to represent biomechanical relations. An example application to model collagen fibre recruitment in the medial collateral ligaments during time-dependent deformation under cyclic loading (creep) is presented. The study suggests a quality metric that can be employed to observe and enhance uncertainty tolerance in the modelling process
Inherent uncertainties in meteorological parameters for wind turbine design
Doran, J. C.
1982-01-01
Major difficulties associated with meteorological measurments such as the inability to duplicate the experimental conditions from one day to the next are discussed. This lack of consistency is compounded by the stochastic nature of many of the meteorological variables of interest. Moreover, simple relationships derived in one location may be significantly altered by topographical or synoptic differences encountered at another. The effect of such factors is a degree of inherent uncertainty if an attempt is made to describe the atmosphere in terms of universal laws. Some of these uncertainties and their causes are examined, examples are presented and some implications for wind turbine design are suggested.
Energy Technology Data Exchange (ETDEWEB)
NONE
1998-03-01
Study was made on the design of experiments for confirming the technological and economical feasibility of commercial production of Mn nodules. An experimental sea area was selected considering Mn nodule characteristics such as concentration, continuity of nodule distribution and nodule size, and sea bottom topographic features such as flatness and exposed bed rock disturbing traveling of nodule collectors. The experimental plan was decided on the basis of the survey and study on an experimental scale, various requirements, weather and sea weather features of the sea area, and sea bottom conditions. Based on the experimental plan, various verification tests and simulation analyses were carried out, and functions of various equipment were confirmed. The conceptual design of a mining system assumed both pump-lift and air-lift systems, and yearly production rates of 2 and 3 million tons by 2 mining barges, and summarized main points of every system. As the evaluation result, the future direction of commercial production of Mn nodules was clarified to a certain extent. 5 figs., 21 tabs.
A novel dose uncertainty model and its application for dose verification
International Nuclear Information System (INIS)
Jin Hosang; Chung Heetaek; Liu Chihray; Palta, Jatinder; Suh, Tae-Suk; Kim, Siyong
2005-01-01
Based on statistical approach, a novel dose uncertainty model was introduced considering both nonspatial and spatial dose deviations. Non-space-oriented uncertainty is mainly caused by dosimetric uncertainties, and space-oriented dose uncertainty is the uncertainty caused by all spatial displacements. Assuming these two parts are independent, dose difference between measurement and calculation is a linear combination of nonspatial and spatial dose uncertainties. Two assumptions were made: (1) the relative standard deviation of nonspatial dose uncertainty is inversely proportional to the dose standard deviation σ, and (2) the spatial dose uncertainty is proportional to the gradient of dose. The total dose uncertainty is a quadratic sum of the nonspatial and spatial uncertainties. The uncertainty model provides the tolerance dose bound for comparison between calculation and measurement. In the statistical uncertainty model based on a Gaussian distribution, a confidence level of 3σ theoretically confines 99.74% of measurements within the bound. By setting the confidence limit, the tolerance bound for dose comparison can be made analogous to that of existing dose comparison methods (e.g., a composite distribution analysis, a γ test, a χ evaluation, and a normalized agreement test method). However, the model considers the inherent dose uncertainty characteristics of the test points by taking into account the space-specific history of dose accumulation, while the previous methods apply a single tolerance criterion to the points, although dose uncertainty at each point is significantly different from others. Three types of one-dimensional test dose distributions (a single large field, a composite flat field made by two identical beams, and three-beam intensity-modulated fields) were made to verify the robustness of the model. For each test distribution, the dose bound predicted by the uncertainty model was compared with simulated measurements. The simulated
Uncertainty governance: an integrated framework for managing and communicating uncertainties
International Nuclear Information System (INIS)
Umeki, H.; Naito, M.; Takase, H.
2004-01-01
Treatment of uncertainty, or in other words, reasoning with imperfect information is widely recognised as being of great importance within performance assessment (PA) of the geological disposal mainly because of the time scale of interest and spatial heterogeneity that geological environment exhibits. A wide range of formal methods have been proposed for the optimal processing of incomplete information. Many of these methods rely on the use of numerical information, the frequency based concept of probability in particular, to handle the imperfections. However, taking quantitative information as a base for models that solve the problem of handling imperfect information merely creates another problem, i.e., how to provide the quantitative information. In many situations this second problem proves more resistant to solution, and in recent years several authors have looked at a particularly ingenious way in accordance with the rules of well-founded methods such as Bayesian probability theory, possibility theory, and the Dempster-Shafer theory of evidence. Those methods, while drawing inspiration from quantitative methods, do not require the kind of complete numerical information required by quantitative methods. Instead they provide information that, though less precise than that provided by quantitative techniques, is often, if not sufficient, the best that could be achieved. Rather than searching for the best method for handling all imperfect information, our strategy for uncertainty management, that is recognition and evaluation of uncertainties associated with PA followed by planning and implementation of measures to reduce them, is to use whichever method best fits the problem at hand. Such an eclectic position leads naturally to integration of the different formalisms. While uncertainty management based on the combination of semi-quantitative methods forms an important part of our framework for uncertainty governance, it only solves half of the problem
International Nuclear Information System (INIS)
Kokoouline, V.; Richardson, W.
2014-01-01
Uncertainties in theoretical calculations may include: • Systematic uncertainty: Due to applicability limits of the chosen model. • Random: Within a model, uncertainties of model parameters result in uncertainties of final results (such as cross sections). • If uncertainties of experimental and theoretical data are known, for the purpose of data evaluation (to produce recommended data), one should combine two data sets to produce the best guess data with the smallest possible uncertainty. In many situations, it is possible to assess the accuracy of theoretical calculations because theoretical models usually rely on parameters that are uncertain, but not completely random, i.e. the uncertainties of the parameters of the models are approximately known. If there are one or several such parameters with corresponding uncertainties, even if some or all parameters are correlated, the above approach gives a conceptually simple way to calculate uncertainties of final cross sections (uncertainty propagation). Numerically, the statistical approach to the uncertainty propagation could be computationally expensive. However, in situations, where uncertainties are considered to be as important as the actual cross sections (for data validation or benchmark calculations, for example), such a numerical effort is justified. Having data from different sources (say, from theory and experiment), a systematic statistical approach allows one to compare the data and produce “unbiased” evaluated data with improved uncertainties, if uncertainties of initial data from different sources are available. Without uncertainties, the data evaluation/validation becomes impossible. This is the reason why theoreticians should assess the accuracy of their calculations in one way or another. A statistical and systematic approach, similar to the described above, is preferable.
Charm quark mass with calibrated uncertainty
Energy Technology Data Exchange (ETDEWEB)
Erler, Jens [Universidad Nacional Autonoma de Mexico, Instituto de Fisica, Mexico, DF (Mexico); Masjuan, Pere [Universitat Autonoma de Barcelona, Grup de Fisica Teorica, Departament de Fisica, Barcelona (Spain); Institut de Fisica d' Altes Energies (IFAE), The Barcelona Institute of Science and Technology (BIST), Barcelona (Spain); Spiesberger, Hubert [Johannes Gutenberg-Universitaet, PRISMA Cluster of Excellence, Institut fuer Physik, Mainz (Germany); University of Cape Town, Centre for Theoretical and Mathematical Physics and Department of Physics, Rondebosch (South Africa)
2017-02-15
We determine the charm quark mass m{sub c} from QCD sum rules of the moments of the vector current correlator calculated in perturbative QCD at O(α{sub s}{sup 3}). Only experimental data for the charm resonances below the continuum threshold are needed in our approach, while the continuum contribution is determined by requiring self-consistency between various sum rules, including the one for the zeroth moment. Existing data from the continuum region can then be used to bound the theoretic uncertainty. Our result is m{sub c}(m{sub c}) = 1272 ± 8 MeV for α{sub s}(M{sub Z}) = 0.1182, where the central value is in very good agreement with other recent determinations based on the relativistic sum rule approach. On the other hand, there is considerably less agreement regarding the theory dominated uncertainty and we pay special attention to the question how to quantify and justify it. (orig.)
Veneziano, D.; Agarwal, A.; Karaca, E.
2009-01-01
The problem of accounting for epistemic uncertainty in risk management decisions is conceptually straightforward, but is riddled with practical difficulties. Simple approximations are often used whereby future variations in epistemic uncertainty are ignored or worst-case scenarios are postulated. These strategies tend to produce sub-optimal decisions. We develop a general framework based on Bayesian decision theory and exemplify it for the case of seismic design of buildings. When temporal fluctuations of the epistemic uncertainties and regulatory safety constraints are included, the optimal level of seismic protection exceeds the normative level at the time of construction. Optimal Bayesian decisions do not depend on the aleatory or epistemic nature of the uncertainties, but only on the total (epistemic plus aleatory) uncertainty and how that total uncertainty varies randomly during the lifetime of the project. ?? 2009 Elsevier Ltd. All rights reserved.
Sonic Boom Pressure Signature Uncertainty Calculation and Propagation to Ground Noise
West, Thomas K., IV; Bretl, Katherine N.; Walker, Eric L.; Pinier, Jeremy T.
2015-01-01
The objective of this study was to outline an approach for the quantification of uncertainty in sonic boom measurements and to investigate the effect of various near-field uncertainty representation approaches on ground noise predictions. These approaches included a symmetric versus asymmetric uncertainty band representation and a dispersion technique based on a partial sum Fourier series that allows for the inclusion of random error sources in the uncertainty. The near-field uncertainty was propagated to the ground level, along with additional uncertainty in the propagation modeling. Estimates of perceived loudness were obtained for the various types of uncertainty representation in the near-field. Analyses were performed on three configurations of interest to the sonic boom community: the SEEB-ALR, the 69o DeltaWing, and the LM 1021-01. Results showed that representation of the near-field uncertainty plays a key role in ground noise predictions. Using a Fourier series based dispersion approach can double the amount of uncertainty in the ground noise compared to a pure bias representation. Compared to previous computational fluid dynamics results, uncertainty in ground noise predictions were greater when considering the near-field experimental uncertainty.
Radon measurements: the sources of uncertainties
International Nuclear Information System (INIS)
Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav
2008-01-01
uncertainties for retrospective measurements conducted by surface traps techniques can be divided in two groups: errors of surface 210 Pb ( 210 Po) activity measurements and uncertainties of transfer from 210 Pb surface activity in glass objects to average radon concentration during this object exposure. The sources of 210 Pb ( 210 Po) surface activity measurement uncertainties are: errors in the calibration of energy-angle dependence of alpha-particles registration efficiency; random Poisson errors during measurements, the influence of background alpha radiation from the glass; unknown U-Ra-Th activity ratio in the glass, nonuniform 210 Po distribution on the surface of glass object. Uncertainty factors of Jacobi model for connection of 210 Pb surface activity and average radon concentration are: unknown aerosol concentration, ventilation rate, surface/volume ratio in investigated room, long term radon variations, aerosol deposition rates and errors in the age estimation of glass object. It is shown that total measurement error of surface trap retrospective technique can be decreased to 35%. The analysis of errors for grab sampling measurements, charcoal canisters and track detectors are presented in the full paper
International Nuclear Information System (INIS)
Batistoni, P.; Angelone, M.; Pillon, M.; Villari, R.; Fischer, U.; Klix, A.; Leichtle, D.; Kodeli, I.; Pohorecki, W.
2012-01-01
Two neutronics experiments have been carried out at 14 MeV neutron sources on mock-ups of the helium cooled pebble bed (HCBP) and the helium cooled lithium lead (HCLL) variants of ITER test blanket modules (TBMs). These experiments have provided an experimental validation of the calculations of the tritium production rate (TPR) in the two blanket concepts and an assessment of the uncertainties due to the uncertainties on nuclear data. This paper provides a brief summary of the HCPB experiment and then focuses in particular on the final results of the HCLL experiment. The TPR has been measured in the HCLL mock-up irradiated for long times at the Frascati 14 MeV Neutron Generator (FNG). Redundant and well-assessed experimental techniques have been used to measure the TPR by different teams for inter-comparison. Measurements of the neutron and gamma-ray spectra have also been performed. The analysis of the experiment, carried out by the MCNP code with FENDL-2.1 and JEFF-3.1.1 nuclear data libraries, and also including sensitivity/uncertainty analysis, shows good agreement between measurements and calculations, within the total uncertainty of 5.9% at 1σ level. (paper)
Sources of uncertainty in future changes in local precipitation
Energy Technology Data Exchange (ETDEWEB)
Rowell, David P. [Met Office Hadley Centre, Exeter (United Kingdom)
2012-10-15
This study considers the large uncertainty in projected changes in local precipitation. It aims to map, and begin to understand, the relative roles of uncertain modelling and natural variability, using 20-year mean data from four perturbed physics or multi-model ensembles. The largest - 280-member - ensemble illustrates a rich pattern in the varying contribution of modelling uncertainty, with similar features found using a CMIP3 ensemble (despite its limited sample size, which restricts it value in this context). The contribution of modelling uncertainty to the total uncertainty in local precipitation change is found to be highest in the deep tropics, particularly over South America, Africa, the east and central Pacific, and the Atlantic. In the moist maritime tropics, the highly uncertain modelling of sea-surface temperature changes is transmitted to a large uncertain modelling of local rainfall changes. Over tropical land and summer mid-latitude continents (and to a lesser extent, the tropical oceans), uncertain modelling of atmospheric processes, land surface processes and the terrestrial carbon cycle all appear to play an additional substantial role in driving the uncertainty of local rainfall changes. In polar regions, inter-model variability of anomalous sea ice drives an uncertain precipitation response, particularly in winter. In all these regions, there is therefore the potential to reduce the uncertainty of local precipitation changes through targeted model improvements and observational constraints. In contrast, over much of the arid subtropical and mid-latitude oceans, over Australia, and over the Sahara in winter, internal atmospheric variability dominates the uncertainty in projected precipitation changes. Here, model improvements and observational constraints will have little impact on the uncertainty of time means shorter than at least 20 years. Last, a supplementary application of the metric developed here is that it can be interpreted as a measure
Radiotherapy Dose Fractionation under Parameter Uncertainty
International Nuclear Information System (INIS)
Davison, Matt; Kim, Daero; Keller, Harald
2011-01-01
In radiotherapy, radiation is directed to damage a tumor while avoiding surrounding healthy tissue. Tradeoffs ensue because dose cannot be exactly shaped to the tumor. It is particularly important to ensure that sensitive biological structures near the tumor are not damaged more than a certain amount. Biological tissue is known to have a nonlinear response to incident radiation. The linear quadratic dose response model, which requires the specification of two clinically and experimentally observed response coefficients, is commonly used to model this effect. This model yields an optimization problem giving two different types of optimal dose sequences (fractionation schedules). Which fractionation schedule is preferred depends on the response coefficients. These coefficients are uncertainly known and may differ from patient to patient. Because of this not only the expected outcomes but also the uncertainty around these outcomes are important, and it might not be prudent to select the strategy with the best expected outcome.
Learning Reward Uncertainty in the Basal Ganglia.
Directory of Open Access Journals (Sweden)
John G Mikhael
2016-09-01
Full Text Available Learning the reliability of different sources of rewards is critical for making optimal choices. However, despite the existence of detailed theory describing how the expected reward is learned in the basal ganglia, it is not known how reward uncertainty is estimated in these circuits. This paper presents a class of models that encode both the mean reward and the spread of the rewards, the former in the difference between the synaptic weights of D1 and D2 neurons, and the latter in their sum. In the models, the tendency to seek (or avoid options with variable reward can be controlled by increasing (or decreasing the tonic level of dopamine. The models are consistent with the physiology of and synaptic plasticity in the basal ganglia, they explain the effects of dopaminergic manipulations on choices involving risks, and they make multiple experimental predictions.
Uncertainty budget in internal monostandard NAA for small and large size samples analysis
International Nuclear Information System (INIS)
Dasari, K.B.; Acharya, R.
2014-01-01
Total uncertainty budget evaluation on determined concentration value is important under quality assurance programme. Concentration calculation in NAA or carried out by relative NAA and k0 based internal monostandard NAA (IM-NAA) method. IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA and uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)
Centralizing Data Management with Considerations of Uncertainty and Information-Based Flexibility
Velu, Chander K.; Madnick, Stuart E.; Van Alstyne, Marshall W.
2013-01-01
This paper applies the theory of real options to analyze how the value of information-based flexibility should affect the decision to centralize or decentralize data management under low and high uncertainty. This study makes two main contributions. First, we show that in the presence of low uncertainty, centralization of data management decisions creates more total surplus for the firm as the similarity of business units increases. In contrast, in the presence of high uncertainty, centraliza...
Energy Technology Data Exchange (ETDEWEB)
Varlamov, V.V. [Lomonosov Moscow State University, Skobeltsyn Institute of Nuclear Physics, Moscow (Russian Federation); Davydov, A.I. [Lomonosov Moscow State University, Physics Faculty, Moscow (Russian Federation); Ishkhanov, B.S. [Lomonosov Moscow State University, Skobeltsyn Institute of Nuclear Physics, Moscow (Russian Federation); Lomonosov Moscow State University, Physics Faculty, Moscow (Russian Federation)
2017-09-15
Data on partial photoneutron reaction cross sections (γ, 1n), (γ, 2n), and (γ, 3n) for {sup 59}Co obtained in two experiments carried out at Livermore (USA) were analyzed. The sources of radiation in both experiments were the monoenergetic photon beams from the annihilation in flight of relativistic positrons. The total yield was sorted by the neutron multiplicity, taking into account the difference in the neutron energy spectra for different multiplicity. The two quoted studies differ in the method of determining the neutron. Significant systematic disagreements between the results of the two experiments exist. They are considered to be caused by large systematic uncertainties in partial cross sections, since they do not satisfy physical criteria for reliability of the data. To obtain reliable cross sections of partial and total photoneutron reactions a new method combining experimental data and theoretical evaluation was used. It is based on the experimental neutron yield cross section which is rather independent of neutron multiplicity and the transitional neutron multiplicity functions of the combined photonucleon reaction model (CPNRM). The model transitional multiplicity functions were used for the decomposition of the neutron yield cross section into the contributions of partial reactions. The results of the new evaluation noticeably differ from the partial cross sections obtained in the two experimental studies are under discussion. (orig.)
Covariance methodology applied to uncertainties in I-126 disintegration rate measurements
International Nuclear Information System (INIS)
Fonseca, K.A.; Koskinas, M.F.; Dias, M.S.
1996-01-01
The covariance methodology applied to uncertainties in 126 I disintegration rate measurements is described. Two different coincidence systems were used due to the complex decay scheme of this radionuclide. The parameters involved in the determination of the disintegration rate in each experimental system present correlated components. In this case, the conventional statistical methods to determine the uncertainties (law of propagation) result in wrong values for the final uncertainty. Therefore, use of the methodology of the covariance matrix is necessary. The data from both systems were combined taking into account all possible correlations between the partial uncertainties. (orig.)
Status of uncertainty assessment in k0-NAA measurement. Anything still missing?
International Nuclear Information System (INIS)
Borut Smodis; Tinkara Bucar
2014-01-01
Several approaches to quantifying measurement uncertainty in k 0 -based neutron activation analysis (k 0 -NAA) are reviewed, comprising the original approach, the spreadsheet approach, the dedicated computer program involving analytical calculations and the two k 0 -NAA programs available on the market. Two imperfectness in the dedicated programs are identified, their impact assessed and possible improvements presented for a concrete experimental situation. The status of uncertainty assessment in k 0 -NAA is discussed and steps for improvement are recommended. It is concluded that the present magnitude of measurement uncertainty should further be improved by making additional efforts in reducing uncertainties of the relevant nuclear constants used. (author)
Applied research in uncertainty modeling and analysis
Ayyub, Bilal
2005-01-01
Uncertainty has been a concern to engineers, managers, and scientists for many years. For a long time uncertainty has been considered synonymous with random, stochastic, statistic, or probabilistic. Since the early sixties views on uncertainty have become more heterogeneous. In the past forty years numerous tools that model uncertainty, above and beyond statistics, have been proposed by several engineers and scientists. The tool/method to model uncertainty in a specific context should really be chosen by considering the features of the phenomenon under consideration, not independent of what is known about the system and what causes uncertainty. In this fascinating overview of the field, the authors provide broad coverage of uncertainty analysis/modeling and its application. Applied Research in Uncertainty Modeling and Analysis presents the perspectives of various researchers and practitioners on uncertainty analysis and modeling outside their own fields and domain expertise. Rather than focusing explicitly on...
MO-E-BRE-01: Determination, Minimization and Communication of Uncertainties in Radiation Therapy
Energy Technology Data Exchange (ETDEWEB)
Van Dyk, J; Palta, J; Bortfeld, T; Mijnheer, B [Western University, London, ON (United Kingdom)
2014-06-15
Medical Physicists have a general understanding of uncertainties in the radiation treatment process, both with respect to dosimetry and geometry. However, there is a desire to be more quantitative about uncertainty estimation. A recent International Atomic Energy Agency (IAEA) report (about to be published) recommends that we should be as “accurate as reasonably achievable, technical and biological factors being taken into account”. Thus, a single recommendation as a goal for accuracy in radiation therapy is an oversimplification. That report also suggests that individual clinics should determine their own level of uncertainties for their specific treatment protocols. The question is “how do we implement this in clinical practice”? AAPM Monograph 35 (2011 AAPM Summer School) addressed many specific aspects of uncertainties in each of the steps of a course of radiation treatment. The intent of this symposium is: (1) to review uncertainty considerations in the entire radiation treatment process including uncertainty determination for each step and uncertainty propagation for the total process, (2) to consider aspects of robust optimization which optimizes treatment plans while protecting them against uncertainties, and (3) to describe various methods of displaying uncertainties and communicating uncertainties to the relevant professionals. While the theoretical and research aspects will also be described, the emphasis will be on the practical considerations for the medical physicist in clinical practice. Learning Objectives: To review uncertainty determination in the overall radiation treatment process. To consider uncertainty modeling and uncertainty propagation. To highlight the basic ideas and clinical potential of robust optimization procedures to generate optimal treatment plans that are not severely affected by uncertainties. To describe methods of uncertainty communication and display.
MO-E-BRE-01: Determination, Minimization and Communication of Uncertainties in Radiation Therapy
International Nuclear Information System (INIS)
Van Dyk, J; Palta, J; Bortfeld, T; Mijnheer, B
2014-01-01
Medical Physicists have a general understanding of uncertainties in the radiation treatment process, both with respect to dosimetry and geometry. However, there is a desire to be more quantitative about uncertainty estimation. A recent International Atomic Energy Agency (IAEA) report (about to be published) recommends that we should be as “accurate as reasonably achievable, technical and biological factors being taken into account”. Thus, a single recommendation as a goal for accuracy in radiation therapy is an oversimplification. That report also suggests that individual clinics should determine their own level of uncertainties for their specific treatment protocols. The question is “how do we implement this in clinical practice”? AAPM Monograph 35 (2011 AAPM Summer School) addressed many specific aspects of uncertainties in each of the steps of a course of radiation treatment. The intent of this symposium is: (1) to review uncertainty considerations in the entire radiation treatment process including uncertainty determination for each step and uncertainty propagation for the total process, (2) to consider aspects of robust optimization which optimizes treatment plans while protecting them against uncertainties, and (3) to describe various methods of displaying uncertainties and communicating uncertainties to the relevant professionals. While the theoretical and research aspects will also be described, the emphasis will be on the practical considerations for the medical physicist in clinical practice. Learning Objectives: To review uncertainty determination in the overall radiation treatment process. To consider uncertainty modeling and uncertainty propagation. To highlight the basic ideas and clinical potential of robust optimization procedures to generate optimal treatment plans that are not severely affected by uncertainties. To describe methods of uncertainty communication and display
Universal trend for heavy-ion total reaction cross sections at energies above the Coulomb barrier
International Nuclear Information System (INIS)
Tavares, O.A.P.; Medeiros, E.L.; Morcelle, V.
2010-06-01
Heavy-ion total reaction cross section measurements for more than one thousand one hundred reaction cases covering 61 target nuclei in the range 6 Li- 238 U, and 158 projectile nuclei from 2 H up to 84 Kr (mostly exotic ones) have been analysed in a systematic way by using an empirical, three-parameter formula which is applicable to cases for projectile kinetic energies above the Coulomb barrier. The analysis has shown that the average total nuclear binding energy per nucleon of the interacting nuclei and their radii are the chief quantities which describe the cross section patterns. A great number of cross section data (87%) has been quite satisfactorily reproduced by the proposed formula, therefore total reaction cross section predictions for new, not yet experimentally investigated reaction cases can be obtained within 25 percent (or much less) of uncertainty (author)
Universal trend for heavy-ion total reaction cross sections at energies above the Coulomb barrier
Energy Technology Data Exchange (ETDEWEB)
Tavares, O.A.P.; Medeiros, E.L., E-mail: emil@cbpf.b [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil); Morcelle, V. [Universidade de Sao Paulo (IF/USP), SP (Brazil). Inst. de Fisica
2010-06-15
Heavy-ion total reaction cross section measurements for more than one thousand one hundred reaction cases covering 61 target nuclei in the range {sup 6}Li-{sup 238}U, and 158 projectile nuclei from {sup 2}H up to {sup 84}Kr (mostly exotic ones) have been analysed in a systematic way by using an empirical, three-parameter formula which is applicable to cases for projectile kinetic energies above the Coulomb barrier. The analysis has shown that the average total nuclear binding energy per nucleon of the interacting nuclei and their radii are the chief quantities which describe the cross section patterns. A great number of cross section data (87%) has been quite satisfactorily reproduced by the proposed formula, therefore total reaction cross section predictions for new, not yet experimentally investigated reaction cases can be obtained within 25 percent (or much less) of uncertainty (author)
Handling uncertainty and networked structure in robot control
Tamás, Levente
2015-01-01
This book focuses on two challenges posed in robot control by the increasing adoption of robots in the everyday human environment: uncertainty and networked communication. Part I of the book describes learning control to address environmental uncertainty. Part II discusses state estimation, active sensing, and complex scenario perception to tackle sensing uncertainty. Part III completes the book with control of networked robots and multi-robot teams. Each chapter features in-depth technical coverage and case studies highlighting the applicability of the techniques, with real robots or in simulation. Platforms include mobile ground, aerial, and underwater robots, as well as humanoid robots and robot arms. Source code and experimental data are available at http://extras.springer.com. The text gathers contributions from academic and industry experts, and offers a valuable resource for researchers or graduate students in robot control and perception. It also benefits researchers in related areas, such as computer...
Measurement, simulation and uncertainty assessment of implant heating during MRI
International Nuclear Information System (INIS)
Neufeld, E; Kuehn, S; Kuster, N; Szekely, G
2009-01-01
The heating of tissues around implants during MRI can pose severe health risks, and careful evaluation is required for leads to be labeled as MR conditionally safe. A recent interlaboratory comparison study has shown that different groups can produce widely varying results (sometimes with more than a factor of 5 difference) when performing measurements according to current guidelines. To determine the related difficulties and to derive optimized procedures, two different generic lead structures have been investigated in this study by using state-of-the-art temperature and dosimetric probes, as well as simulations for which detailed uncertainty budgets have been determined. The agreement between simulations and measurements is well within the combined uncertainty. The study revealed that the uncertainty can be kept below 17% if appropriate instrumentation and procedures are applied. Optimized experimental assessment techniques can be derived from the findings presented herein.
Measurement, simulation and uncertainty assessment of implant heating during MRI
Energy Technology Data Exchange (ETDEWEB)
Neufeld, E; Kuehn, S; Kuster, N [Foundation for Research on Information Technologies in Society (IT' IS), Zeughausstr. 43, 8004 Zurich (Switzerland); Szekely, G [Computer Vision Laboratory, Swiss Federal Institute of Technology (ETHZ), Sternwartstr 7, ETH Zentrum, 8092 Zurich (Switzerland)], E-mail: neufeld@itis.ethz.ch
2009-07-07
The heating of tissues around implants during MRI can pose severe health risks, and careful evaluation is required for leads to be labeled as MR conditionally safe. A recent interlaboratory comparison study has shown that different groups can produce widely varying results (sometimes with more than a factor of 5 difference) when performing measurements according to current guidelines. To determine the related difficulties and to derive optimized procedures, two different generic lead structures have been investigated in this study by using state-of-the-art temperature and dosimetric probes, as well as simulations for which detailed uncertainty budgets have been determined. The agreement between simulations and measurements is well within the combined uncertainty. The study revealed that the uncertainty can be kept below 17% if appropriate instrumentation and procedures are applied. Optimized experimental assessment techniques can be derived from the findings presented herein.
Effect of uncertainty parameters on graphene sheets Young's modulus prediction
International Nuclear Information System (INIS)
Sahlaoui, Habib; Sidhom Habib; Guedri, Mohamed
2013-01-01
Software based on molecular structural mechanics approach (MSMA) and using finite element method (FEM) has been developed to predict the Young's modulus of graphene sheets. Obtained results have been compared to results available in the literature and good agreement has been shown when the same values of uncertainty parameters are used. A sensibility of the models to their uncertainty parameters has been investigated using a stochastic finite element method (SFEM). The different values of the used uncertainty parameters, such as molecular mechanics force field constants k_r and k_θ, thickness (t) of a graphene sheet and length ( L_B) of a carbon carbon bonds, have been collected from the literature. Strong sensibilities of 91% to the thickness and of 21% to the stretching force (k_r) have been shown. The results justify the great difference between Young's modulus predicted values of the graphene sheets and their large disagreement with experimental results.
International Nuclear Information System (INIS)
Greenspan, E.
1982-01-01
This chapter presents the mathematical basis for sensitivity functions, discusses their physical meaning and information they contain, and clarifies a number of issues concerning their application, including the definition of group sensitivities, the selection of sensitivity functions to be included in the analysis, and limitations of sensitivity theory. Examines the theoretical foundation; criticality reset sensitivities; group sensitivities and uncertainties; selection of sensitivities included in the analysis; and other uses and limitations of sensitivity functions. Gives the theoretical formulation of sensitivity functions pertaining to ''as-built'' designs for performance parameters of the form of ratios of linear flux functionals (such as reaction-rate ratios), linear adjoint functionals, bilinear functions (such as reactivity worth ratios), and for reactor reactivity. Offers a consistent procedure for reducing energy-dependent or fine-group sensitivities and uncertainties to broad group sensitivities and uncertainties. Provides illustrations of sensitivity functions as well as references to available compilations of such functions and of total sensitivities. Indicates limitations of sensitivity theory originating from the fact that this theory is based on a first-order perturbation theory
Uncertainty of the calibration factor
International Nuclear Information System (INIS)
1995-01-01
According to present definitions, an error is the difference between a measured value and the ''true'' value. Thus an error has both a numerical value and a sign. In contrast, the uncertainly associated with a measurement is a parameter that characterizes the dispersion of the values ''that could reasonably be attributed to the measurand''. This parameter is normally an estimated standard deviation. An uncertainty, therefore, has no known sign and is usually assumed to be symmetrical. It is a measure of our lack of exact knowledge, after all recognized ''systematic'' effects have been eliminated by applying appropriate corrections. If errors were known exactly, the true value could be determined and there would be no problem left. In reality, errors are estimated in the best possible way and corrections made for them. Therefore, after application of all known corrections, errors need no further consideration (their expectation value being zero) and the only quantities of interest are uncertainties. 3 refs, 2 figs
Quantifying the uncertainty in heritability.
Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph
2014-05-01
The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.
Uncertainty in hydrological change modelling
DEFF Research Database (Denmark)
Seaby, Lauren Paige
applied at the grid scale. Flux and state hydrological outputs which integrate responses over time and space showed more sensitivity to precipitation mean spatial biases and less so on extremes. In the investigated catchments, the projected change of groundwater levels and basin discharge between current......Hydrological change modelling methodologies generally use climate models outputs to force hydrological simulations under changed conditions. There are nested sources of uncertainty throughout this methodology, including choice of climate model and subsequent bias correction methods. This Ph.......D. study evaluates the uncertainty of the impact of climate change in hydrological simulations given multiple climate models and bias correction methods of varying complexity. Three distribution based scaling methods (DBS) were developed and benchmarked against a more simplistic and commonly used delta...
Visualizing Summary Statistics and Uncertainty
Potter, K.
2010-08-12
The graphical depiction of uncertainty information is emerging as a problem of great importance. Scientific data sets are not considered complete without indications of error, accuracy, or levels of confidence. The visual portrayal of this information is a challenging task. This work takes inspiration from graphical data analysis to create visual representations that show not only the data value, but also important characteristics of the data including uncertainty. The canonical box plot is reexamined and a new hybrid summary plot is presented that incorporates a collection of descriptive statistics to highlight salient features of the data. Additionally, we present an extension of the summary plot to two dimensional distributions. Finally, a use-case of these new plots is presented, demonstrating their ability to present high-level overviews as well as detailed insight into the salient features of the underlying data distribution. © 2010 The Eurographics Association and Blackwell Publishing Ltd.
Visualizing Summary Statistics and Uncertainty
Potter, K.; Kniss, J.; Riesenfeld, R.; Johnson, C.R.
2010-01-01
The graphical depiction of uncertainty information is emerging as a problem of great importance. Scientific data sets are not considered complete without indications of error, accuracy, or levels of confidence. The visual portrayal of this information is a challenging task. This work takes inspiration from graphical data analysis to create visual representations that show not only the data value, but also important characteristics of the data including uncertainty. The canonical box plot is reexamined and a new hybrid summary plot is presented that incorporates a collection of descriptive statistics to highlight salient features of the data. Additionally, we present an extension of the summary plot to two dimensional distributions. Finally, a use-case of these new plots is presented, demonstrating their ability to present high-level overviews as well as detailed insight into the salient features of the underlying data distribution. © 2010 The Eurographics Association and Blackwell Publishing Ltd.
Statistical uncertainties and unrecognized relationships
International Nuclear Information System (INIS)
Rankin, J.P.
1985-01-01
Hidden relationships in specific designs directly contribute to inaccuracies in reliability assessments. Uncertainty factors at the system level may sometimes be applied in attempts to compensate for the impact of such unrecognized relationships. Often uncertainty bands are used to relegate unknowns to a miscellaneous category of low-probability occurrences. However, experience and modern analytical methods indicate that perhaps the dominant, most probable and significant events are sometimes overlooked in statistical reliability assurances. The author discusses the utility of two unique methods of identifying the otherwise often unforeseeable system interdependencies for statistical evaluations. These methods are sneak circuit analysis and a checklist form of common cause failure analysis. Unless these techniques (or a suitable equivalent) are also employed along with the more widely-known assurance tools, high reliability of complex systems may not be adequately assured. This concern is indicated by specific illustrations. 8 references, 5 figures
The uncertainty budget in pharmaceutical industry
DEFF Research Database (Denmark)
Heydorn, Kaj
of their uncertainty, exactly as described in GUM [2]. Pharmaceutical industry has therefore over the last 5 years shown increasing interest in accreditation according to ISO 17025 [3], and today uncertainty budgets are being developed for all so-called critical measurements. The uncertainty of results obtained...... that the uncertainty of a particular result is independent of the method used for its estimation. Several examples of uncertainty budgets for critical parameters based on the bottom-up procedure will be discussed, and it will be shown how the top-down method is used as a means of verifying uncertainty budgets, based...
Improvement of uncertainty relations for mixed states
International Nuclear Information System (INIS)
Park, Yong Moon
2005-01-01
We study a possible improvement of uncertainty relations. The Heisenberg uncertainty relation employs commutator of a pair of conjugate observables to set the limit of quantum measurement of the observables. The Schroedinger uncertainty relation improves the Heisenberg uncertainty relation by adding the correlation in terms of anti-commutator. However both relations are insensitive whether the state used is pure or mixed. We improve the uncertainty relations by introducing additional terms which measure the mixtureness of the state. For the momentum and position operators as conjugate observables and for the thermal state of quantum harmonic oscillator, it turns out that the equalities in the improved uncertainty relations hold
Adjoint-Based Uncertainty Quantification with MCNP
Energy Technology Data Exchange (ETDEWEB)
Seifried, Jeffrey E. [Univ. of California, Berkeley, CA (United States)
2011-09-01
This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
An uncertainty analysis using the NRPB accident consequence code Marc
International Nuclear Information System (INIS)
Jones, J.A.; Crick, M.J.; Simmonds, J.R.
1991-01-01
This paper describes an uncertainty analysis of MARC calculations of the consequences of accidental releases of radioactive materials to atmosphere. A total of 98 parameters describing the transfer of material through the environment to man, the doses received, and the health effects resulting from these doses, was considered. The uncertainties in the numbers of early and late health effects, numbers of people affected by countermeasures, the amounts of food restricted and the economic costs of the accident were estimated. This paper concentrates on the results for early death and fatal cancer for a large hypothetical release from a PWR
Conditional Betas and Investor Uncertainty
Fernando D. Chague
2013-01-01
We derive theoretical expressions for market betas from a rational expectation equilibrium model where the representative investor does not observe if the economy is in a recession or an expansion. Market betas in this economy are time-varying and related to investor uncertainty about the state of the economy. The dynamics of betas will also vary across assets according to the assets' cash-flow structure. In a calibration exercise, we show that value and growth firms have cash-flow structures...
Aggregate Uncertainty, Money and Banking
Hongfei Sun
2006-01-01
This paper studies the problem of monitoring the monitor in a model of money and banking with aggregate uncertainty. It shows that when inside money is required as a means of bank loan repayment, a market of inside money is entailed at the repayment stage and generates information-revealing prices that perfectly discipline the bank. The incentive problem of a bank is costlessly overcome simply by involving inside money in repayment. Inside money distinguishes itself from outside money by its ...
Decision Under Uncertainty in Diagnosis
Kalme, Charles I.
2013-01-01
This paper describes the incorporation of uncertainty in diagnostic reasoning based on the set covering model of Reggia et. al. extended to what in the Artificial Intelligence dichotomy between deep and compiled (shallow, surface) knowledge based diagnosis may be viewed as the generic form at the compiled end of the spectrum. A major undercurrent in this is advocating the need for a strong underlying model and an integrated set of support tools for carrying such a model in order to deal with ...
Uncertainty analysis for hot channel
International Nuclear Information System (INIS)
Panka, I.; Kereszturi, A.
2006-01-01
The fulfillment of the safety analysis acceptance criteria is usually evaluated by separate hot channel calculations using the results of neutronic or/and thermo hydraulic system calculations. In case of an ATWS event (inadvertent withdrawal of control assembly), according to the analysis, a number of fuel rods are experiencing DNB for a longer time and must be regarded as failed. Their number must be determined for a further evaluation of the radiological consequences. In the deterministic approach, the global power history must be multiplied by different hot channel factors (kx) taking into account the radial power peaking factors for each fuel pin. If DNB occurs it is necessary to perform a few number of hot channel calculations to determine the limiting kx leading just to DNB and fuel failure (the conservative DNBR limit is 1.33). Knowing the pin power distribution from the core design calculation, the number of failed fuel pins can be calculated. The above procedure can be performed by conservative assumptions (e.g. conservative input parameters in the hot channel calculations), as well. In case of hot channel uncertainty analysis, the relevant input parameters (k x, mass flow, inlet temperature of the coolant, pin average burnup, initial gap size, selection of power history influencing the gap conductance value) of hot channel calculations and the DNBR limit are varied considering the respective uncertainties. An uncertainty analysis methodology was elaborated combining the response surface method with the one sided tolerance limit method of Wilks. The results of deterministic and uncertainty hot channel calculations are compared regarding to the number of failed fuel rods, max. temperature of the clad surface and max. temperature of the fuel (Authors)
Forecast Accuracy Uncertainty and Momentum
Bing Han; Dong Hong; Mitch Warachka
2009-01-01
We demonstrate that stock price momentum and earnings momentum can result from uncertainty surrounding the accuracy of cash flow forecasts. Our model has multiple information sources issuing cash flow forecasts for a stock. The investor combines these forecasts into an aggregate cash flow estimate that has minimal mean-squared forecast error. This aggregate estimate weights each cash flow forecast by the estimated accuracy of its issuer, which is obtained from their past forecast errors. Mome...
Microeconomic Uncertainty and Macroeconomic Indeterminacy
Fagnart, Jean-François; Pierrard, Olivier; Sneessens, Henri
2005-01-01
The paper proposes a stylized intertemporal macroeconomic model wherein the combination of decentralized trading and microeconomic uncertainty (taking the form of privately observed and uninsured idiosyncratic shocks) creates an information problem between agents and generates indeterminacy of the macroeconomic equilibrium. For a given value of the economic fundamentals, the economy admits a continuum of equilibria that can be indexed by the sales expectations of firms at the time of investme...
LOFT differential pressure uncertainty analysis
International Nuclear Information System (INIS)
Evans, R.P.; Biladeau, G.L.; Quinn, P.A.
1977-03-01
A performance analysis of the LOFT differential pressure (ΔP) measurement is presented. Along with completed descriptions of test programs and theoretical studies that have been conducted on the ΔP, specific sources of measurement uncertainty are identified, quantified, and combined to provide an assessment of the ability of this measurement to satisfy the SDD 1.4.1C (June 1975) requirement of measurement of differential pressure
Knowledge, decision making, and uncertainty
International Nuclear Information System (INIS)
Fox, J.
1986-01-01
Artificial intelligence (AI) systems depend heavily upon the ability to make decisions. Decisions require knowledge, yet there is no knowledge-based theory of decision making. To the extent that AI uses a theory of decision-making it adopts components of the traditional statistical view in which choices are made by maximizing some function of the probabilities of decision options. A knowledge-based scheme for reasoning about uncertainty is proposed, which extends the traditional framework but is compatible with it
Accommodating Uncertainty in Prior Distributions
Energy Technology Data Exchange (ETDEWEB)
Picard, Richard Roy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vander Wiel, Scott Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-01-19
A fundamental premise of Bayesian methodology is that a priori information is accurately summarized by a single, precisely de ned prior distribution. In many cases, especially involving informative priors, this premise is false, and the (mis)application of Bayes methods produces posterior quantities whose apparent precisions are highly misleading. We examine the implications of uncertainty in prior distributions, and present graphical methods for dealing with them.
Managing project risks and uncertainties
Directory of Open Access Journals (Sweden)
Mike Mentis
2015-01-01
Full Text Available This article considers threats to a project slipping on budget, schedule and fit-for-purpose. Threat is used here as the collective for risks (quantifiable bad things that can happen and uncertainties (poorly or not quantifiable bad possible events. Based on experience with projects in developing countries this review considers that (a project slippage is due to uncertainties rather than risks, (b while eventuation of some bad things is beyond control, managed execution and oversight are still the primary means to keeping within budget, on time and fit-for-purpose, (c improving project delivery is less about bigger and more complex and more about coordinated focus, effectiveness and developing thought-out heuristics, and (d projects take longer and cost more partly because threat identification is inaccurate, the scope of identified threats is too narrow, and the threat assessment product is not integrated into overall project decision-making and execution. Almost by definition, what is poorly known is likely to cause problems. Yet it is not just the unquantifiability and intangibility of uncertainties causing project slippage, but that they are insufficiently taken into account in project planning and execution that cause budget and time overruns. Improving project performance requires purpose-driven and managed deployment of scarce seasoned professionals. This can be aided with independent oversight by deeply experienced panelists who contribute technical insights and can potentially show that diligence is seen to be done.
Chemical model reduction under uncertainty
Malpica Galassi, Riccardo
2017-03-06
A general strategy for analysis and reduction of uncertain chemical kinetic models is presented, and its utility is illustrated in the context of ignition of hydrocarbon fuel–air mixtures. The strategy is based on a deterministic analysis and reduction method which employs computational singular perturbation analysis to generate simplified kinetic mechanisms, starting from a detailed reference mechanism. We model uncertain quantities in the reference mechanism, namely the Arrhenius rate parameters, as random variables with prescribed uncertainty factors. We propagate this uncertainty to obtain the probability of inclusion of each reaction in the simplified mechanism. We propose probabilistic error measures to compare predictions from the uncertain reference and simplified models, based on the comparison of the uncertain dynamics of the state variables, where the mixture entropy is chosen as progress variable. We employ the construction for the simplification of an uncertain mechanism in an n-butane–air mixture homogeneous ignition case, where a 176-species, 1111-reactions detailed kinetic model for the oxidation of n-butane is used with uncertainty factors assigned to each Arrhenius rate pre-exponential coefficient. This illustration is employed to highlight the utility of the construction, and the performance of a family of simplified models produced depending on chosen thresholds on importance and marginal probabilities of the reactions.
Uncertainty quantification in capacitive RF MEMS switches
Pax, Benjamin J.
Development of radio frequency micro electrical-mechanical systems (RF MEMS) has led to novel approaches to implement electrical circuitry. The introduction of capacitive MEMS switches, in particular, has shown promise in low-loss, low-power devices. However, the promise of MEMS switches has not yet been completely realized. RF-MEMS switches are known to fail after only a few months of operation, and nominally similar designs show wide variability in lifetime. Modeling switch operation using nominal or as-designed parameters cannot predict the statistical spread in the number of cycles to failure, and probabilistic methods are necessary. A Bayesian framework for calibration, validation and prediction offers an integrated approach to quantifying the uncertainty in predictions of MEMS switch performance. The objective of this thesis is to use the Bayesian framework to predict the creep-related deflection of the PRISM RF-MEMS switch over several thousand hours of operation. The PRISM switch used in this thesis is the focus of research at Purdue's PRISM center, and is a capacitive contacting RF-MEMS switch. It employs a fixed-fixed nickel membrane which is electrostatically actuated by applying voltage between the membrane and a pull-down electrode. Creep plays a central role in the reliability of this switch. The focus of this thesis is on the creep model, which is calibrated against experimental data measured for a frog-leg varactor fabricated and characterized at Purdue University. Creep plasticity is modeled using plate element theory with electrostatic forces being generated using either parallel plate approximations where appropriate, or solving for the full 3D potential field. For the latter, structure-electrostatics interaction is determined through immersed boundary method. A probabilistic framework using generalized polynomial chaos (gPC) is used to create surrogate models to mitigate the costly full physics simulations, and Bayesian calibration and forward
International Nuclear Information System (INIS)
Varlamov, V.V.; Efimkin, N.G.; Ishkhanov, B.S.; Sapunenko, V.V.; Stepanov, M.E.
1993-01-01
The method based on the method of reduction is proposed for the evaluation of photonuclear reaction cross sections have been obtained at significant systematic uncertainties (different apparatus functions, calibration and normalization uncertainties). The evaluation method consists of using the real apparatus function (photon spectrum) of each individual experiment to reduce the data to a representation generated by an apparatus function of better quality. The task is to find the most reasonably achievable monoenergetic representation (MRAMR) of the information about cross section contained in different experiment observables and to take into account the experimental uncertainties of calibration and normalization procedures. The method was used to obtain the evaluated total photoneutron (γ, xn) reaction cross sections for 16 O, 28 Si, nat Cu, 141 Pr, and 208 Pb are presented. 79 refs., 19 figs., 6 tabs
Soize, Christian
2017-01-01
This book presents the fundamental notions and advanced mathematical tools in the stochastic modeling of uncertainties and their quantification for large-scale computational models in sciences and engineering. In particular, it focuses in parametric uncertainties, and non-parametric uncertainties with applications from the structural dynamics and vibroacoustics of complex mechanical systems, from micromechanics and multiscale mechanics of heterogeneous materials. Resulting from a course developed by the author, the book begins with a description of the fundamental mathematical tools of probability and statistics that are directly useful for uncertainty quantification. It proceeds with a well carried out description of some basic and advanced methods for constructing stochastic models of uncertainties, paying particular attention to the problem of calibrating and identifying a stochastic model of uncertainty when experimental data is available. < This book is intended to be a graduate-level textbook for stu...