WorldWideScience

Sample records for total experimental uncertainty

  1. Code development for eigenvalue total sensitivity analysis and total uncertainty analysis

    International Nuclear Information System (INIS)

    Wan, Chenghui; Cao, Liangzhi; Wu, Hongchun; Zu, Tiejun; Shen, Wei

    2015-01-01

    Highlights: • We develop a new code for total sensitivity and uncertainty analysis. • The implicit effects of cross sections can be considered. • The results of our code agree well with TSUNAMI-1D. • Detailed analysis of the origins of the implicit effects is performed. - Abstract: The uncertainties of multigroup cross sections notably impact the eigenvalue of the neutron-transport equation. We report on a total sensitivity analysis and total uncertainty analysis code named UNICORN that has been developed by applying the direct numerical perturbation method and the statistical sampling method. In order to consider the contributions of the various basic cross sections and the implicit effects, which arise indirectly from the multigroup cross sections through the resonance self-shielding calculation, an improved multigroup cross-section perturbation model is developed. The DRAGON 4.0 code, with a WIMSD-4 format library, is used by UNICORN to carry out the resonance self-shielding and neutron-transport calculations. In addition, the bootstrap technique has been applied to the statistical sampling method in UNICORN to obtain steadier and more reliable uncertainty results. The UNICORN code has been verified against TSUNAMI-1D by analyzing a TMI-1 pin-cell case. The numerical results show that the total uncertainty of the eigenvalue caused by cross sections can reach about 0.72%; the contributions of the basic cross sections and their implicit effects are therefore not negligible.
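
The bootstrap step mentioned in the abstract above, resampling the statistically sampled results to stabilize the uncertainty estimate, can be illustrated with a short sketch. This is not UNICORN code; the k-eff samples and their spread are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical stand-in for k-eff values obtained from statistically sampled
# cross-section libraries (illustrative only, not UNICORN output).
keff_samples = rng.normal(loc=1.0000, scale=0.0072, size=200)

def bootstrap_rel_uncertainty(samples, n_boot=5000):
    """Bootstrap the relative uncertainty (std/mean) and its own spread."""
    n = len(samples)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(samples, size=n, replace=True)
        stats[b] = resample.std(ddof=1) / resample.mean()
    return stats.mean(), stats.std(ddof=1)

rel_unc, rel_unc_spread = bootstrap_rel_uncertainty(keff_samples)
print(f"relative k-eff uncertainty: {100 * rel_unc:.2f}% "
      f"(bootstrap spread {100 * rel_unc_spread:.2f}%)")
```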

  2. Total error vs. measurement uncertainty: revolution or evolution?

    Science.gov (United States)

    Oosterhuis, Wytze P; Theodorsson, Elvar

    2016-02-01

    The first strategic EFLM conference "Defining analytical performance goals, 15 years after the Stockholm Conference" was held in the autumn of 2014 in Milan. It maintained the Stockholm 1999 hierarchy of performance goals but rearranged them and established five task and finish groups to work on topics related to analytical performance goals, including one on the "total error" theory. Jim Westgard recently wrote a comprehensive overview of performance goals and of the total error theory, critical of the results and intentions of the Milan 2014 conference. The "total error" theory, originated by Jim Westgard and co-workers, has had a dominant influence on the theory and practice of clinical chemistry but is not accepted in other fields of metrology. The generally accepted uncertainty theory, however, suffers from complex mathematics and perceived impracticability in clinical chemistry. The pros and cons of the total error theory need to be debated, making way for methods that can incorporate all relevant causes of uncertainty when making medical diagnoses and monitoring treatment effects. This development should preferably proceed not as a revolution but as an evolution.

  3. Assessment and characterization of the total geometric uncertainty in Gamma Knife radiosurgery using polymer gels

    International Nuclear Information System (INIS)

    Moutsatsos, A.; Karaiskos, P.; Pantelis, E.; Georgiou, E.; Petrokokkinos, L.; Sakelliou, L.; Torrens, M.; Seimenis, I.

    2013-01-01

    Purpose: This work proposes and implements an experimental methodology, based on polymer gels, for assessing the total geometric uncertainty and characterizing its contributors in Gamma Knife (GK) radiosurgery. Methods: A treatment plan consisting of 26 4-mm GK single-shot dose distributions, covering an extended region of the Leksell stereotactic space, was prepared and delivered to a polymer gel-filled polymethyl methacrylate (PMMA) head phantom (16 cm diameter) used to accurately reproduce every link in the GK treatment chain. The center of each shot served as a “control point” in the assessment of the GK total geometric uncertainty, which depends on (a) the spatial dose delivery uncertainty of the PERFEXION GK unit used in this work, (b) the spatial distortions inherent in MR images commonly used for target delineation, and (c) the geometric uncertainty contributor associated with the image registration procedure performed by the Leksell GammaPlan (LGP) treatment planning system (TPS), in the case that registration is directly based on the apparent fiducial locations depicted in each MR image by the N-shaped rods on the Leksell localization box. The irradiated phantom was MR imaged at 1.5 T employing a T2-weighted pulse sequence. Four image series were acquired by alternating the frequency encoding axis and reversing the read gradient polarity, thus allowing the characterization of the MR-related spatial distortions. Results: MR spatial distortions stemming from main field (B0) inhomogeneity as well as from susceptibility and chemical shift phenomena (also known as sequence-dependent distortions) were found to be of the order of 0.5 mm, while those owing to gradient nonlinearities (also known as sequence-independent distortions) were found to increase with distance from the MR scanner isocenter, extending up to 0.47 mm at a Euclidean distance of 69.6 mm. Regarding the LGP image registration procedure, the corresponding average contribution to the total

  4. Assessment and characterization of the total geometric uncertainty in Gamma Knife radiosurgery using polymer gels.

    Science.gov (United States)

    Moutsatsos, A; Karaiskos, P; Petrokokkinos, L; Sakelliou, L; Pantelis, E; Georgiou, E; Torrens, M; Seimenis, I

    2013-03-01

    This work proposes and implements an experimental methodology, based on polymer gels, for assessing the total geometric uncertainty and characterizing its contributors in Gamma Knife (GK) radiosurgery. A treatment plan consisting of 26 4-mm GK single-shot dose distributions, covering an extended region of the Leksell stereotactic space, was prepared and delivered to a polymer gel-filled polymethyl methacrylate (PMMA) head phantom (16 cm diameter) used to accurately reproduce every link in the GK treatment chain. The center of each shot served as a "control point" in the assessment of the GK total geometric uncertainty, which depends on (a) the spatial dose delivery uncertainty of the PERFEXION GK unit used in this work, (b) the spatial distortions inherent in MR images commonly used for target delineation, and (c) the geometric uncertainty contributor associated with the image registration procedure performed by the Leksell GammaPlan (LGP) treatment planning system (TPS), in the case that registration is directly based on the apparent fiducial locations depicted in each MR image by the N-shaped rods on the Leksell localization box. The irradiated phantom was MR imaged at 1.5 T employing a T2-weighted pulse sequence. Four image series were acquired by alternating the frequency encoding axis and reversing the read gradient polarity, thus allowing the characterization of the MR-related spatial distortions. MR spatial distortions stemming from main field (B0) inhomogeneity as well as from susceptibility and chemical shift phenomena (also known as sequence-dependent distortions) were found to be of the order of 0.5 mm, while those owing to gradient nonlinearities (also known as sequence-independent distortions) were found to increase with distance from the MR scanner isocenter, extending up to 0.47 mm at a Euclidean distance of 69.6 mm. Regarding the LGP image registration procedure, the corresponding average contribution to the total geometric uncertainty ranged from

  5. Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty

    Science.gov (United States)

    Mather, Janice L.; Taylor, Shawn C.

    2015-01-01

    In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
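
The root-sum-square combination described in the abstract above can be sketched in a few lines. The component values below are hypothetical placeholders, not the ones reported in the paper.

```python
import math

# Hypothetical standard-uncertainty components for a measured helium leak rate,
# expressed as fractions of the reading (placeholders, not the paper's values).
components = {
    "resolution":    0.010,
    "repeatability": 0.025,
    "hysteresis":    0.008,
    "drift":         0.012,
    "cal_standard":  0.020,  # uncertainty of the calibrated leak standard
}

# Root-sum-square combination of independent components.
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))

# Expanded uncertainty at roughly 95% confidence (coverage factor k = 2).
U_expanded = 2.0 * u_combined

print(f"combined standard uncertainty: {100 * u_combined:.1f}% of reading")
print(f"expanded uncertainty (k = 2):  {100 * U_expanded:.1f}% of reading")
```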

  6. SU-E-J-159: Analysis of Total Imaging Uncertainty in Respiratory-Gated Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, J; Okuda, T [Toyota Memorial Hospital, Toyota, Aichi (Japan)]; Sakaino, S; Yokota, N [Suzukake Central Hospital, Hamamatsu, Shizuoka (Japan)]

    2015-06-15

    Purpose: In respiratory-gated radiotherapy, the gating phase during treatment delivery needs to coincide with the corresponding phase determined during treatment planning. However, because radiotherapy is delivered based on the images acquired for the treatment plan, the time delay, motion artifacts, volume effects, and resolution of those images introduce uncertainty. Thus, imaging uncertainty is the most basic factor affecting localization accuracy, and these uncertainties should be analyzed. This study aims to analyze the total imaging uncertainty in respiratory-gated radiotherapy. Methods: Two imaging uncertainty factors related to respiratory-gated radiotherapy were analyzed. First, CT images were used to determine the target volume and for 4D treatment planning with the Varian Realtime Position Management (RPM) system. Second, X-ray images were acquired for image-guided radiotherapy (IGRT) with the BrainLAB ExacTrac system. These factors were measured using a respiratory gating phantom. The conditions applied during phantom operation were as follows: respiratory waveform, sine curve; respiratory cycle, 4 s; phantom target motion amplitude, 10, 20, and 29 mm (the maximum phantom longitudinal motion). The coverage of the target and of a cylindrical marker implanted in the phantom on the CT images was measured and compared with the coverage calculated theoretically from the phantom motion. The theoretical position of the cylindrical marker implanted in the phantom was compared with that acquired from the X-ray image. The total imaging uncertainty was analyzed from these two factors. Results: In the CT images, the uncertainties between the actual coverage of the target and of the cylindrical marker and their coverage on the CT images were 1.19 mm and 2.50 mm, respectively. In the X-ray image, the uncertainty was 0.39 mm. The total imaging uncertainty from the two factors was 1.62 mm. Conclusion: The total imaging uncertainty in respiratory-gated radiotherapy was clinically acceptable. However

  7. SU-E-J-159: Analysis of Total Imaging Uncertainty in Respiratory-Gated Radiotherapy

    International Nuclear Information System (INIS)

    Suzuki, J; Okuda, T; Sakaino, S; Yokota, N

    2015-01-01

    Purpose: In respiratory-gated radiotherapy, the gating phase during treatment delivery needs to coincide with the corresponding phase determined during treatment planning. However, because radiotherapy is delivered based on the images acquired for the treatment plan, the time delay, motion artifacts, volume effects, and resolution of those images introduce uncertainty. Thus, imaging uncertainty is the most basic factor affecting localization accuracy, and these uncertainties should be analyzed. This study aims to analyze the total imaging uncertainty in respiratory-gated radiotherapy. Methods: Two imaging uncertainty factors related to respiratory-gated radiotherapy were analyzed. First, CT images were used to determine the target volume and for 4D treatment planning with the Varian Realtime Position Management (RPM) system. Second, X-ray images were acquired for image-guided radiotherapy (IGRT) with the BrainLAB ExacTrac system. These factors were measured using a respiratory gating phantom. The conditions applied during phantom operation were as follows: respiratory waveform, sine curve; respiratory cycle, 4 s; phantom target motion amplitude, 10, 20, and 29 mm (the maximum phantom longitudinal motion). The coverage of the target and of a cylindrical marker implanted in the phantom on the CT images was measured and compared with the coverage calculated theoretically from the phantom motion. The theoretical position of the cylindrical marker implanted in the phantom was compared with that acquired from the X-ray image. The total imaging uncertainty was analyzed from these two factors. Results: In the CT images, the uncertainties between the actual coverage of the target and of the cylindrical marker and their coverage on the CT images were 1.19 mm and 2.50 mm, respectively. In the X-ray image, the uncertainty was 0.39 mm. The total imaging uncertainty from the two factors was 1.62 mm. Conclusion: The total imaging uncertainty in respiratory-gated radiotherapy was clinically acceptable. However

  8. Uncertainty in techno-economic estimates of cellulosic ethanol production due to experimental measurement uncertainty

    Directory of Open Access Journals (Sweden)

    Vicari Kristin J

    2012-04-01

    Background: Cost-effective production of lignocellulosic biofuels remains a major financial and technical challenge at the industrial scale. A critical tool in biofuels process development is the techno-economic (TE) model, which calculates biofuel production costs using a process model and an economic model. The process model solves mass and energy balances for each unit, and the economic model estimates capital and operating costs from the process model based on economic assumptions. The process model inputs include experimental data on the feedstock composition and intermediate product yields for each unit. These experimental yield data are calculated from primary measurements. Uncertainty in these primary measurements is propagated to the calculated yields, to the process model, and ultimately to the economic model. Thus, outputs of the TE model have a minimum uncertainty associated with the uncertainty in the primary measurements. Results: We calculate the uncertainty in the Minimum Ethanol Selling Price (MESP) estimate for lignocellulosic ethanol production via a biochemical conversion process: dilute sulfuric acid pretreatment of corn stover followed by enzymatic hydrolysis and co-fermentation of the resulting sugars to ethanol. We perform a sensitivity analysis on the TE model and identify the feedstock composition and the conversion yields from three unit operations (xylose from pretreatment, glucose from enzymatic hydrolysis, and ethanol from fermentation) as the most important variables. The uncertainty in the pretreatment xylose yield arises from multiple measurements, whereas the glucose and ethanol yields from enzymatic hydrolysis and fermentation, respectively, are dominated by a single measurement: the fraction of insoluble solids (fIS) in the biomass slurries. Conclusions: We calculate a $0.15/gal uncertainty in MESP from the TE model due to uncertainties in primary measurements. This result sets a lower bound on the error bars of
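
The propagation of primary-measurement uncertainty into a derived yield, as described above, is often done by Monte Carlo sampling. The sketch below uses an invented, simplified yield equation and illustrative uncertainties; it is not the study's techno-economic model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical primary measurements (mean, standard uncertainty); numbers are
# illustrative, not taken from the study.
f_is     = rng.normal(0.20,   0.010, n)   # fraction of insoluble solids (fIS)
c_glc    = rng.normal(60.0,   1.5,   n)   # glucose concentration in the liquor, g/L
v_liq    = rng.normal(0.80,   0.005, n)   # liquor volume, L
m_slurry = rng.normal(1000.0, 5.0,   n)   # total slurry mass, g

# Simplified, invented data reduction equation for a hydrolysis glucose yield
# (g glucose released per g insoluble solids).
m_is = f_is * m_slurry
yield_glc = c_glc * v_liq / m_is

rel_u = yield_glc.std(ddof=1) / yield_glc.mean()
print(f"glucose yield: {yield_glc.mean():.3f} g/g, "
      f"relative standard uncertainty {100 * rel_u:.1f}%")
```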

  9. Impact of measurement uncertainty from experimental load distribution factors on bridge load rating

    Science.gov (United States)

    Gangone, Michael V.; Whelan, Matthew J.

    2018-03-01

    Load rating and testing of highway bridges is important in determining the capacity of the structure. Experimental load rating utilizes strain transducers placed at critical locations of the superstructure to measure normal strains. These strains are then used in computing diagnostic performance measures (neutral axis of bending, load distribution factor) and ultimately a load rating. However, it has been shown that experimentally obtained strain measurements contain uncertainties associated with the accuracy and precision of the sensor and sensing system. These uncertainties propagate through to the diagnostic indicators, which in turn carry into the load rating calculation. This paper analyzes the effect that measurement uncertainties have on the experimental load rating results of a three-span multi-girder/stringer steel and concrete bridge. The focus of this paper is limited to the uncertainty associated with the experimental distribution factor estimate. For the testing discussed, strain readings were gathered at the midspan of each span of both exterior girders and the center girder. Test vehicles of known weight were positioned at specified locations on each span to generate the maximum strain response for each of the five girders. The strain uncertainties were used in conjunction with a propagation formula developed by the authors to determine the standard uncertainty in the distribution factor estimates. This distribution factor uncertainty is then introduced into the load rating computation to determine the possible range of the load rating. The results show the importance of understanding measurement uncertainty in experimental load testing.
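
The authors' own propagation formula is not reproduced in the record, but a first-order (GUM-style) propagation for the common strain-ratio definition of the distribution factor, DF_i = eps_i / sum_j(eps_j), can be sketched as follows; the strain values and their uncertainties are hypothetical.

```python
import numpy as np

def df_uncertainty(strains, u_strains, girder):
    """First-order propagation of strain uncertainties into the distribution
    factor DF_i = eps_i / sum(eps), a common experimental definition."""
    strains = np.asarray(strains, dtype=float)
    u = np.asarray(u_strains, dtype=float)
    s = strains.sum()
    # Partial derivatives of DF_i with respect to each strain reading.
    sens = -strains[girder] / s**2 * np.ones_like(strains)
    sens[girder] = (s - strains[girder]) / s**2
    df = strains[girder] / s
    u_df = np.sqrt(np.sum((sens * u) ** 2))
    return df, u_df

# Hypothetical midspan strain readings (microstrain) for five girders, each with
# a 2-microstrain standard uncertainty.
strains = [120.0, 180.0, 210.0, 175.0, 115.0]
df, u_df = df_uncertainty(strains, [2.0] * 5, girder=2)
print(f"DF = {df:.3f} +/- {u_df:.3f}")
```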

  10. Total sensitivity and uncertainty analysis for LWR pin-cells with improved UNICORN code

    International Nuclear Information System (INIS)

    Wan, Chenghui; Cao, Liangzhi; Wu, Hongchun; Shen, Wei

    2017-01-01

    Highlights: • A new model is established for the total sensitivity and uncertainty analysis. • The NR approximation applied in S&U analysis can be avoided by the new model. • Sensitivity and uncertainty analysis is performed for PWR pin-cells with the new model. • The effects of the NR approximation for the PWR pin-cells are quantified. - Abstract: In this paper, improvements to the multigroup cross-section perturbation model have been proposed and applied in the self-developed UNICORN code, which is capable of performing total sensitivity and total uncertainty analysis for neutron-physics calculations by applying the direct numerical perturbation method and the statistical sampling method, respectively. The narrow resonance (NR) approximation was applied in the multigroup cross-section perturbation model implemented in UNICORN. As an improvement on the NR approximation to refine the multigroup cross-section perturbation model, an ultrafine-group cross-section perturbation model has been established, in which the actual perturbations are applied to the ultrafine-group cross-section library and the resonance cross sections are reconstructed by solving the neutron slowing-down equation. Total sensitivity and total uncertainty analyses were then applied to the LWR pin-cells, using both the multigroup and the ultrafine-group cross-section perturbation models. The numerical results show that the NR approximation overestimates the relative sensitivity coefficients and the corresponding uncertainty results for the LWR pin-cells, and that the effects of the NR approximation are significant for σ(n,γ) and σ(n,elas) of 238U. Therefore, the effects of the NR approximation should be taken into account in total sensitivity and total uncertainty analysis for LWR neutron-physics calculations.

  11. Resolving uncertainty in chemical speciation determinations

    Science.gov (United States)

    Smith, D. Scott; Adams, Nicholas W. H.; Kramer, James R.

    1999-10-01

    Speciation determinations involve uncertainty in system definition and experimentation. Identification of appropriate metals and ligands from basic chemical principles, analytical window considerations, types of species, and checking for consistency in equilibrium calculations are considered in system definition uncertainty. A systematic approach to system definition limits uncertainty in speciation investigations. Experimental uncertainty is discussed with an example of proton interactions with Suwannee River fulvic acid (SRFA). A Monte Carlo approach was used to estimate uncertainty in experimental data, resulting from the propagation of uncertainties in electrode calibration parameters and experimental data points. Monte Carlo simulations revealed large uncertainties at high (>9-10) and low pH. The titration data were fitted with models composed of monoprotic ligands: least-squares fit the data with 21 sites, whereas linear programming fit the data equally well with 9 sites. Multiresponse fitting, involving simultaneous fluorescence and pH measurements, improved model discrimination. Deconvolution of the excitation versus emission fluorescence surface for SRFA establishes a minimum of five sites. Diprotic sites are also required for the five fluorescent sites, and one non-fluorescent monoprotic site was added to accommodate the pH data. Consistent with greater complexity, the multiresponse method had broader confidence limits than the uniresponse methods, but corresponded better with the accepted total carboxylic content for SRFA. Overall there was a 40% standard deviation in total carboxylic content for the multiresponse fitting, versus 10% and 1% for least-squares and linear programming, respectively.

  12. Total Measurement Uncertainty for the Plutonium Finishing Plant (PFP) Segmented Gamma Scan Assay System

    CERN Document Server

    Fazzari, D M

    2001-01-01

    This report presents the results of an evaluation of the Total Measurement Uncertainty (TMU) for the Canberra manufactured Segmented Gamma Scanner Assay System (SGSAS) as employed at the Hanford Plutonium Finishing Plant (PFP). In this document, TMU embodies the combined uncertainties due to all of the individual random and systematic sources of measurement uncertainty. It includes uncertainties arising from corrections and factors applied to the analysis of transuranic waste to compensate for inhomogeneities and interferences from the waste matrix and radioactive components. These include uncertainty components for any assumptions contained in the calibration of the system or computation of the data. Uncertainties are propagated at 1 sigma. The final total measurement uncertainty value is reported at the 95% confidence level. The SGSAS is a gamma assay system that is used to assay plutonium and uranium waste. The SGSAS system can be used in a stand-alone mode to perform the NDA characterization of a containe...

  13. Experimental Test of Entropic Noise-Disturbance Uncertainty Relations for Spin-1/2 Measurements.

    Science.gov (United States)

    Sulyok, Georg; Sponar, Stephan; Demirel, Bülent; Buscemi, Francesco; Hall, Michael J W; Ozawa, Masanao; Hasegawa, Yuji

    2015-07-17

    Information-theoretic definitions for noise and disturbance in quantum measurements were given in [Phys. Rev. Lett. 112, 050401 (2014)] and a state-independent noise-disturbance uncertainty relation was obtained. Here, we derive a tight noise-disturbance uncertainty relation for complementary qubit observables and carry out an experimental test. Successive projective measurements on the neutron's spin-1/2 system, together with a correction procedure which reduces the disturbance, are performed. Our experimental results saturate the tight noise-disturbance uncertainty relation for qubits when an optimal correction procedure is applied.

  14. Total uncertainty of low velocity thermal anemometers for measurement of indoor air movements

    DEFF Research Database (Denmark)

    Jørgensen, F.; Popiolek, Z.; Melikov, Arsen Krikor

    2004-01-01

    For a specific thermal anemometer with an omnidirectional velocity sensor, the expanded total uncertainty in measured mean velocity Û(Vmean) and the expanded total uncertainty in measured turbulence intensity Û(Tu) due to different error sources are estimated. The values are based on a previously developed mathematical model of the anemometer in combination with a large database of representative room flows measured with a 3-D laser Doppler anemometer (LDA). A direct comparison between measurements with a thermal anemometer and a 3-D LDA in flows of varying velocity and turbulence intensity shows good agreement not only between the two instruments but also between the thermal anemometer and its mathematical model. The differences in the measurements performed with the two instruments are all well within the measurement uncertainty of both anemometers.

  15. Uncertainty Model for Total Solar Irradiance Estimation on Australian Rooftops

    Science.gov (United States)

    Al-Saadi, Hassan; Zivanovic, Rastko; Al-Sarawi, Said

    2017-11-01

    The installation of solar panels on Australian rooftops has been on the rise in recent years, especially in urban areas. This motivates academic researchers, distribution network operators, and engineers to accurately address the level of uncertainty resulting from grid-connected solar panels. The main source of uncertainty is the intermittent nature of radiation; therefore, this paper presents a new model to estimate the total radiation incident on a tilted solar panel. The model is driven by the clearness index, whose variability is described by a probability distribution, with particular attention paid to Australian conditions through the use of a best-fit correlation for the diffuse fraction. The validity of the model is assessed with four goodness-of-fit techniques. In addition, the quasi-Monte Carlo and sparse grid methods are used as sampling and uncertainty computation tools, respectively. High-resolution solar irradiance data for the city of Adelaide were used for this assessment, with the outcome indicating a satisfactory agreement between actual data variation and the model.

  16. Uncertainties of predictions from parton distributions 1: Experimental errors

    CERN Document Server

    Martin, A D; Stirling, William James; Thorne, R S; CERN. Geneva

    2003-01-01

    We determine the uncertainties on observables arising from the errors on the experimental data that are fitted in the global MRST2001 parton analysis. By diagonalizing the error matrix we produce sets of partons suitable for use within the framework of linear propagation of errors, which is the most convenient method for calculating the uncertainties. Despite the potential limitations of this approach we find that it can be made to work well in practice. This is confirmed by our alternative approach of using the more rigorous Lagrange multiplier method to determine the errors on physical quantities directly. As particular examples we determine the uncertainties on the predictions of the charged-current deep-inelastic structure functions, on the cross-sections for W production and for Higgs boson production via gluon-gluon fusion at the Tevatron and the LHC, on the ratio of W-minus to W-plus production at the LHC and on the moments of the non-singlet quark distributions. We discuss the corresponding uncertain...
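
Eigenvector (diagonalized error matrix) PDF sets are conventionally propagated with a symmetric quadrature "master formula"; a minimal sketch is shown below with invented observable values, and it is not tied to the MRST2001 sets themselves.

```python
import numpy as np

def hessian_uncertainty(f_plus, f_minus):
    """Symmetric 'master formula' for eigenvector PDF sets:
    Delta F = 0.5 * sqrt( sum_k (F(S_k+) - F(S_k-))^2 )."""
    f_plus = np.asarray(f_plus, dtype=float)
    f_minus = np.asarray(f_minus, dtype=float)
    return 0.5 * np.sqrt(np.sum((f_plus - f_minus) ** 2))

# Hypothetical cross-section predictions (pb) from the central set and from the
# +/- displacements along 15 eigenvector directions (illustrative values only).
rng = np.random.default_rng(2)
sigma_central = 20.0
shifts = rng.normal(0.0, 0.15, size=15)
sigma_plus = sigma_central + shifts
sigma_minus = sigma_central - shifts

d_sigma = hessian_uncertainty(sigma_plus, sigma_minus)
print(f"sigma = {sigma_central:.2f} +/- {d_sigma:.2f} pb")
```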

  17. MUP, CEC-DES, STRADE. Codes for uncertainty propagation, experimental design and stratified random sampling techniques

    International Nuclear Information System (INIS)

    Amendola, A.; Astolfi, M.; Lisanti, B.

    1983-01-01

    The report describes how to use the codes: MUP (Monte Carlo Uncertainty Propagation), for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification and study of selected ranges of the variable space; CEC-DES (Central Composite Design), for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and STRADE (Stratified Random Design), for experimental designs based on Latin Hypercube Sampling techniques. Application fields of the codes are probabilistic risk assessment, experimental design, sensitivity analysis and system identification problems
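
For the stratified-sampling idea behind STRADE, a present-day equivalent can be sketched with a Latin hypercube design; the two input variables and their bounds are purely illustrative, and this is not the STRADE code.

```python
from scipy.stats import qmc

# Small stratified design for two input variables, in the spirit of STRADE.
# The variables (temperature, flow) and their bounds are purely illustrative.
sampler = qmc.LatinHypercube(d=2, seed=42)
unit_sample = sampler.random(n=10)                           # points in [0, 1)^2
design = qmc.scale(unit_sample, [280.0, 0.1], [320.0, 0.5])  # physical units

for temperature, flow in design:
    print(f"T = {temperature:6.1f} K   flow = {flow:5.3f} kg/s")
```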

  18. Code development of total sensitivity and uncertainty analysis for reactor physics calculations

    International Nuclear Information System (INIS)

    Wan, C.; Cao, L.; Wu, H.; Zu, T.; Shen, W.

    2015-01-01

    Sensitivity and uncertainty analyses are essential parts of risk and policy analysis for reactor systems. In this study, total sensitivity and corresponding uncertainty analyses for the responses of neutronics calculations have been accomplished, and the S&U analysis code UNICORN has been developed. The UNICORN code can consider the implicit effects of multigroup cross sections on the responses. The UNICORN code has been applied to a typical pin-cell case in this paper, and its correctness has been demonstrated by comparing the results with those of the TSUNAMI-1D code. (author)

  19. Code development of total sensitivity and uncertainty analysis for reactor physics calculations

    Energy Technology Data Exchange (ETDEWEB)

    Wan, C.; Cao, L.; Wu, H.; Zu, T., E-mail: chenghuiwan@stu.xjtu.edu.cn, E-mail: caolz@mail.xjtu.edu.cn, E-mail: hongchun@mail.xjtu.edu.cn, E-mail: tiejun@mail.xjtu.edu.cn [Xi'an Jiaotong Univ., School of Nuclear Science and Technology, Xi'an (China)]; Shen, W., E-mail: Wei.Shen@cnsc-ccsn.gc.ca [Xi'an Jiaotong Univ., School of Nuclear Science and Technology, Xi'an (China); Canadian Nuclear Safety Commission, Ottawa, ON (Canada)]

    2015-07-01

    Sensitivity and uncertainty analyses are essential parts of risk and policy analysis for reactor systems. In this study, total sensitivity and corresponding uncertainty analyses for the responses of neutronics calculations have been accomplished, and the S&U analysis code UNICORN has been developed. The UNICORN code can consider the implicit effects of multigroup cross sections on the responses. The UNICORN code has been applied to a typical pin-cell case in this paper, and its correctness has been demonstrated by comparing the results with those of the TSUNAMI-1D code. (author)

  20. Towards minimizing measurement uncertainty in total petroleum hydrocarbon determination by GC-FID

    Energy Technology Data Exchange (ETDEWEB)

    Saari, E.

    2009-07-01

    Despite tightened environmental legislation, spillages of petroleum products remain a serious problem worldwide. The environmental impacts of these spillages are always severe, and reliable methods for the identification and quantitative determination of petroleum hydrocarbons in environmental samples are therefore needed. Great improvements in the definition and analysis of total petroleum hydrocarbons (TPH) were finally introduced by international organizations for standardization in 2004. This brought some coherence to the determination and, nowadays, most laboratories seem to employ the ISO/DIS 16703:2004, ISO 9377-2:2000 and CEN prEN 14039:2004:E draft international standards for analysing TPH in soil. The implementation of these methods, however, usually fails because the reliability of petroleum hydrocarbon determination has proved to be poor. This thesis describes the assessment of measurement uncertainty for TPH determination in soil. Chemometric methods were used both to estimate the main uncertainty sources and to identify the most significant factors affecting these uncertainty sources. The method used for the determinations was based on gas chromatography utilizing flame ionization detection (GC-FID). Chemometric methodology applied in estimating measurement uncertainty for TPH determination showed that the measurement uncertainty is in fact dominated by the analytical uncertainty. Within the specific concentration range studied, the analytical uncertainty accounted for as much as 68-80% of the measurement uncertainty. The robustness of the analytical method used for petroleum hydrocarbon determination was then studied in more detail. A two-level Plackett-Burman design and a D-optimal design were utilized to assess the main analytical uncertainty sources of the sample treatment and GC determination procedures. It was also found that matrix-induced systematic error may significantly reduce the reliability of petroleum hydrocarbon determination

  1. A study on the propagation of measurement uncertainties into the result on a turbine performance test

    International Nuclear Information System (INIS)

    Cho, Soo Yong; Park, Chan Woo

    2004-01-01

    Uncertainties in the individual measured variables influence the uncertainty of the experimental result through a data reduction equation. In this study, a performance test of a single-stage axial-type turbine is conducted, and total-to-total efficiencies are measured at various off-design points in the low-pressure, cold state. Based on the experimental apparatus, a data reduction equation for turbine efficiency is formulated and six measured variables are selected. Codes are written to calculate the efficiency, the uncertainty of the efficiency, and the sensitivity of the efficiency uncertainty to each of the measured quantities. The influence of each measured variable on the experimental result is determined. Results show that the largest Uncertainty Magnification Factor (UMF) value is obtained for the inlet total pressure among the six measured variables, and its value is always greater than one. The UMF values of the inlet total temperature, the torque, and the RPM are always one. The Uncertainty Percentage Contribution (UPC) of the RPM has the lowest influence on the uncertainty of the turbine efficiency, but the UPC of the torque has the largest influence on the result among the measured variables. These results are applied to find the correct direction for meeting an uncertainty requirement of the experimental result in the planning or development phase of an experiment, and also to offer ideas for preparing a measurement system in the planning phase.
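
The UMF and UPC quantities used in the abstract above follow standard definitions (UMF_i = |(X_i / r) dr/dX_i|, UPC_i = 100 (theta_i u_i)^2 / u_r^2). The sketch below evaluates them with finite-difference sensitivities for an invented efficiency-like data reduction equation, not the authors' actual formulation.

```python
import numpy as np

def umf_upc(reduction, x, u_x, rel_step=1e-6):
    """UMFs and UPCs for a data reduction equation r = reduction(x), using
    central finite-difference sensitivities theta_i = dr/dx_i."""
    x = np.asarray(x, dtype=float)
    u_x = np.asarray(u_x, dtype=float)
    r0 = reduction(x)
    theta = np.empty_like(x)
    for i in range(x.size):
        h = rel_step * max(abs(x[i]), 1.0)
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        theta[i] = (reduction(xp) - reduction(xm)) / (2.0 * h)
    u_r2 = np.sum((theta * u_x) ** 2)
    umf = np.abs(theta * x / r0)                 # uncertainty magnification factors
    upc = 100.0 * (theta * u_x) ** 2 / u_r2      # percentage contributions
    return r0, np.sqrt(u_r2), umf, upc

# Invented efficiency-like reduction equation (not the authors' formulation):
# shaft power divided by an ideal power from inlet conditions and pressure ratio.
def efficiency(x):
    torque, rpm, mdot, t_in, p_in, p_out = x
    omega = rpm * 2.0 * np.pi / 60.0
    ideal = mdot * 1004.0 * t_in * (1.0 - (p_out / p_in) ** 0.2857)
    return torque * omega / ideal

x   = [12.0, 6000.0, 0.35, 330.0, 180_000.0, 101_325.0]   # hypothetical readings
u_x = [0.05, 5.0, 0.002, 0.5, 500.0, 300.0]                # hypothetical uncertainties
r, u_r, umf, upc = umf_upc(efficiency, x, u_x)
print(f"efficiency = {r:.4f} +/- {u_r:.4f}")
print("UMF:", np.round(umf, 2))
print("UPC (%):", np.round(upc, 1))
```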

  2. Assessment of the uncertainty associated with systematic errors in digital instruments: an experimental study on offset errors

    International Nuclear Information System (INIS)

    Attivissimo, F; Giaquinto, N; Savino, M; Cataldo, A

    2012-01-01

    This paper deals with the assessment of the uncertainty due to systematic errors, particularly in A/D conversion-based instruments. The problem of defining and assessing systematic errors is briefly discussed, and the conceptual scheme of gauge repeatability and reproducibility is adopted. A practical example regarding the evaluation of the uncertainty caused by the systematic offset error is presented. The experimental results, obtained under various ambient conditions, show that modelling the variability of systematic errors is more problematic than suggested by the ISO 5725 norm. Additionally, the paper demonstrates the substantial difference between the type B uncertainty evaluation, obtained via the maximum entropy principle applied to manufacturer's specifications, and the type A (experimental) uncertainty evaluation, which reflects actually observable reality. Although it is reasonable to assume a uniform distribution of the offset error, experiments demonstrate that the distribution is not centred and that a correction must be applied. In such a context, this work motivates a more pragmatic and experimental approach to uncertainty, with respect to the directions of Supplement 1 to the GUM. (paper)
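
The contrast drawn above between a type B evaluation (maximum-entropy uniform distribution over the manufacturer's offset bound) and a type A evaluation (statistics of repeated readings) can be sketched as follows; the specification limit and readings are hypothetical.

```python
import math
import statistics

# Type B: a manufacturer's specification giving a +/- 0.5 mV offset bound.
# Assuming a uniform (maximum-entropy) distribution over that interval, the
# standard uncertainty is the half-width divided by sqrt(3).
half_width_mv = 0.5
u_type_b = half_width_mv / math.sqrt(3)

# Type A: repeated offset readings (hypothetical values, mV).
readings = [0.31, 0.29, 0.33, 0.30, 0.32, 0.28, 0.31, 0.30]
mean_offset = statistics.mean(readings)
u_type_a = statistics.stdev(readings) / math.sqrt(len(readings))

print(f"type B standard uncertainty: {u_type_b:.3f} mV (distribution assumed centred on zero)")
print(f"type A: offset = {mean_offset:.3f} mV, standard uncertainty {u_type_a:.3f} mV")
# As the paper argues, the observed distribution need not be centred on zero,
# so a correction for the systematic offset may be warranted.
```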

  3. Statistical approach for uncertainty quantification of experimental modal model parameters

    DEFF Research Database (Denmark)

    Luczak, M.; Peeters, B.; Kahsin, M.

    2014-01-01

    Composite materials are widely used in the manufacture of aerospace and wind energy structural components. These load-carrying structures are subjected to dynamic, time-varying loading conditions, and robust structural dynamics identification procedures impose tight constraints on the quality of modal models. This paper aims at a systematic approach for uncertainty quantification of the parameters of modal models estimated from experimentally obtained data. Statistical analysis of modal parameters is implemented to derive an assessment of the entire modal model uncertainty measure. The investigated structures represent different complexity levels ranging from coupon, through sub-component, up to fully assembled aerospace and wind energy structural components made of composite materials. The proposed method is demonstrated on two application cases of a small and a large wind turbine blade.

  4. Instrument uncertainty predictions

    International Nuclear Information System (INIS)

    Coutts, D.A.

    1991-07-01

    The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation. The uncertainty term reflects a combination of instrument errors, modeling limitations, and phenomena understanding deficiencies. This report provides several methodologies to estimate an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pretest and post-test uncertainty

  5. First Reprocessing of Southern Hemisphere ADditional OZonesondes Profile Records: 3. Uncertainty in Ozone Profile and Total Column

    Science.gov (United States)

    Witte, Jacquelyn C.; Thompson, Anne M.; Smit, Herman G. J.; Vömel, Holger; Posny, Françoise; Stübi, Rene

    2018-03-01

    Reprocessed ozonesonde data from eight SHADOZ (Southern Hemisphere ADditional OZonesondes) sites have been used to derive the first analysis of uncertainty estimates for both profile and total column ozone (TCO). The ozone uncertainty is a composite of the uncertainties of the individual terms in the ozone partial pressure (PO3) equation, those being the ozone sensor current, background current, internal pump temperature, pump efficiency factors, conversion efficiency, and flow rate. Overall, PO3 uncertainties (ΔPO3) are within 15% and peak around the tropopause (15 ± 3 km) where ozone is a minimum and ΔPO3 approaches the measured signal. The uncertainty in the background and sensor currents dominates the overall ΔPO3 in the troposphere including the tropopause region, while the uncertainties in the conversion efficiency and flow rate dominate in the stratosphere. Seasonally, ΔPO3 is generally a maximum in March-May, with the exception of SHADOZ sites in Asia, for which the highest ΔPO3 occurs in September-February. As a first approach, we calculate sonde TCO uncertainty (ΔTCO) by integrating the profile ΔPO3 and adding the ozone residual uncertainty, derived from the McPeters and Labow (2012, doi:10.1029/2011JD017006) 1σ ozone mixing ratios. Overall, ΔTCO are within ±15 Dobson units (DU), representing 5-6% of the TCO. Total Ozone Mapping Spectrometer and Ozone Monitoring Instrument (TOMS and OMI) satellite overpasses are generally within the sonde ΔTCO. However, there is a discontinuity between TOMS v8.6 (1998 to September 2004) and OMI (October 2004-2016) TCO on the order of 10 DU that accounts for the significant 16 DU overall difference observed between sonde and TOMS. By comparison, the sonde-OMI absolute difference for the eight stations is only 4 DU.

  6. Experimental Realization of Popper's Experiment: Violation of Uncertainty Principle?

    Science.gov (United States)

    Kim, Yoon-Ho; Yu, Rong; Shih, Yanhua

    An entangled pair of photons, 1 and 2, is emitted in opposite directions along the positive and negative x-axis. A narrow slit is placed in the path of photon 1, which provides precise knowledge about its position along the y-axis, and because of the quantum entanglement this in turn provides precise knowledge of the position y of its twin, photon 2. Does photon 2 experience a greater uncertainty in its momentum, i.e., a greater Δpy, due to the precise knowledge of its position y? This is the historical thought experiment of Sir Karl Popper, which aimed to undermine the Copenhagen interpretation in favor of a realistic viewpoint of quantum mechanics. This paper reports an experimental realization of Popper's experiment. One may not agree with Popper's position on quantum mechanics; however, it calls for a correct understanding and interpretation of the experimental results.

  7. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    Energy Technology Data Exchange (ETDEWEB)

    Turinsky, Paul J [North Carolina State Univ., Raleigh, NC (United States); Abdel-Khalik, Hany S [North Carolina State Univ., Raleigh, NC (United States); Stover, Tracy E [North Carolina State Univ., Raleigh, NC (United States)

    2011-03-01

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept’s core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desirable to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies or allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself because with improved data the reactor concept can be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to the outputs. The representativity of the experiment

  8. A comparative experimental evaluation of uncertainty estimation methods for two-component PIV

    Science.gov (United States)

    Boomsma, Aaron; Bhattacharya, Sayantan; Troolin, Dan; Pothos, Stamatios; Vlachos, Pavlos

    2016-09-01

    Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have recently been introduced, generating interest in their applicability and utility. The present study compares and contrasts current methods across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods: primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high- and low-resolution measurements and a laser Doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that, qualitatively, each method responded to spatially varying error (i.e., regions of higher error resulted in higher uncertainty predictions in those regions). However, the PPR and MI methods demonstrated a reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges. The standard coverages (68% confidence interval) ranged from

  9. A comparative experimental evaluation of uncertainty estimation methods for two-component PIV

    International Nuclear Information System (INIS)

    Boomsma, Aaron; Troolin, Dan; Pothos, Stamatios; Bhattacharya, Sayantan; Vlachos, Pavlos

    2016-01-01

    Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have recently been introduced, generating interest in their applicability and utility. The present study compares and contrasts current methods across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods: primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high- and low-resolution measurements and a laser Doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that, qualitatively, each method responded to spatially varying error (i.e., regions of higher error resulted in higher uncertainty predictions in those regions). However, the PPR and MI methods demonstrated a reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges. The standard coverages (68% confidence interval) ranged from

  10. Pu239 Cross-Section Variations Based on Experimental Uncertainties and Covariances

    Energy Technology Data Exchange (ETDEWEB)

    Sigeti, David Edward [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, D. Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-18

    Algorithms and software have been developed for producing variations in plutonium-239 neutron cross sections based on experimental uncertainties and covariances. The varied cross-section sets may be produced as random samples from the multivariate normal distribution defined by an experimental mean vector and covariance matrix, or they may be produced as Latin-Hypercube/Orthogonal-Array samples (based on the same means and covariances) for use in parametrized studies. The variations obey two classes of constraints that are obligatory for cross-section sets and which put related constraints on the mean vector and covariance matrix that determine the sampling. Because the experimental means and covariances do not obey some of these constraints to sufficient precision, imposing the constraints requires modifying the experimental mean vector and covariance matrix. Modification is done with an algorithm based on linear algebra that minimizes changes to the means and covariances while ensuring that the operations that impose the different constraints do not conflict with each other.
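
A minimal sketch of the two sampling modes described above, multivariate-normal sampling and a stratified (Latin-hypercube-like) variant, is given below for an invented 3-group mean vector and covariance matrix; the constraint-enforcement step discussed in the abstract is omitted.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Hypothetical 3-group cross-section means (barns) and covariance matrix;
# illustrative numbers, not evaluated 239Pu data.
mean = np.array([1.80, 1.55, 1.40])
rel_cov = np.array([[0.0004, 0.0002, 0.0001],
                    [0.0002, 0.0006, 0.0002],
                    [0.0001, 0.0002, 0.0009]])
cov = rel_cov * np.outer(mean, mean)

# Plain multivariate-normal samples.
mvn_samples = rng.multivariate_normal(mean, cov, size=100)

# Latin-hypercube-style samples: stratify each marginal, then impose the
# covariance through its Cholesky factor (a simplification; constraint
# enforcement as described in the abstract is not attempted here).
n = 100
u = (rng.permuted(np.tile(np.arange(n), (3, 1)), axis=1).T + rng.random((n, 3))) / n
z = norm.ppf(u)
lhs_samples = mean + z @ np.linalg.cholesky(cov).T

print("MVN sample means:", mvn_samples.mean(axis=0).round(3))
print("LHS sample means:", lhs_samples.mean(axis=0).round(3))
```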

  11. On the relationship between micro and macro correlations in nuclear measurement uncertainties

    International Nuclear Information System (INIS)

    Smith, D.L.

    1987-01-01

    Consideration is given to the propagation of micro correlations between the component experimental errors (corresponding to diverse attributes of the measurement process) through to the macro correlations between the total errors in the final derived experimental values. Whenever certain micro correlations cannot be precisely specified, the macro correlations must also be uncertain. However, on the basis of fundamental principles from mathematical statistics, it is shown that these uncertainties in the macro correlations can be substantially smaller than the individual uncertainties for specific micro correlations, provided that the number of distinct attributes contributing to the total experimental error is reasonably large. Furthermore, the resulting macro correlations are shown to be approximately normally distributed regardless of the distributions assumed for the micro correlations. Examples are provided to demonstrate these concepts and to illustrate their relevance to experimental nuclear research. (orig.)
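
A common construction consistent with the abstract above builds the macro correlation between two total errors from component-wise (micro) correlations, cov = sum_k rho_k e_ik e_jk, normalized by the total errors. The partial errors and micro correlations below are hypothetical.

```python
import numpy as np

def macro_correlation(e_i, e_j, micro_corr):
    """Correlation between the total errors of two measurements built from
    partial errors e_ik, e_jk with component-wise (micro) correlations rho_k:
    corr = sum_k rho_k * e_ik * e_jk / (E_i * E_j)."""
    e_i, e_j, rho = (np.asarray(v, dtype=float) for v in (e_i, e_j, micro_corr))
    cov = np.sum(rho * e_i * e_j)
    return cov / (np.sqrt(np.sum(e_i ** 2)) * np.sqrt(np.sum(e_j ** 2)))

# Hypothetical partial errors (%) for two cross-section measurements:
# counting statistics, detector efficiency, sample mass, geometry.
e_1 = [1.5, 2.0, 0.8, 1.0]
e_2 = [2.5, 2.0, 0.8, 1.2]
rho = [0.0, 1.0, 1.0, 0.5]   # assumed micro correlations, one per attribute
print(f"macro correlation: {macro_correlation(e_1, e_2, rho):.3f}")
```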

  12. Plutonium Finishing Plant (PFP) Generalized Geometry Holdup Calculations and Total Measurement Uncertainty

    International Nuclear Information System (INIS)

    Keele, B.D.

    2005-01-01

    A collimated portable gamma-ray detector will be used to quantify the plutonium content of items that can be approximated as a point, line, or area geometry with respect to the detector. These items can include ducts, piping, glove boxes, isolated equipment inside of gloveboxes, and HEPA filters. The Generalized Geometry Holdup (GGH) model is used for the reduction of counting data. This document specifies the calculations to reduce counting data into contained plutonium and the associated total measurement uncertainty.

  13. Estimation of uncertainty of a reference material for proficiency testing for the determination of total mercury in fish in nature

    International Nuclear Information System (INIS)

    Santana, L V; Sarkis, J E S; Ulrich, J C; Hortellani, M A

    2015-01-01

    We provide uncertainty estimates for the homogeneity and stability studies of a reference material used in a proficiency test for the determination of total mercury in fresh fish muscle tissue. Stability was estimated by linear regression and homogeneity by ANOVA. The results indicate that the reference material is both homogeneous and chemically stable over the short term. The total mercury concentration of the muscle tissue, with expanded uncertainty, was 0.294 ± 0.089 μg g−1
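
The homogeneity (ANOVA) and stability (linear regression) contributions mentioned above are commonly estimated as in the sketch below, which follows an ISO Guide 35-style between-bottle calculation on simulated, hypothetical data rather than the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated, hypothetical Hg results (ug/g): 10 bottles x 3 replicates for the
# homogeneity study, and 6 monthly means for the short-term stability study.
bottles = rng.normal(0.294, 0.010, size=(10, 3)) + rng.normal(0.0, 0.005, size=(10, 1))
months = np.arange(6)
monthly_mean = rng.normal(0.294, 0.004, size=6)

# Homogeneity: between-bottle standard uncertainty from a one-way ANOVA
# decomposition (ISO Guide 35 style).
n_rep = bottles.shape[1]
ms_within = bottles.var(axis=1, ddof=1).mean()
ms_between = n_rep * bottles.mean(axis=1).var(ddof=1)
u_bb = np.sqrt(max(ms_between - ms_within, 0.0) / n_rep)

# Stability: standard error of the regression slope times the study duration.
res = stats.linregress(months, monthly_mean)
u_stab = res.stderr * months.max()

print(f"u_homogeneity = {u_bb:.4f} ug/g, u_stability = {u_stab:.4f} ug/g")
```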

  14. Experimental Active Vibration Control in Truss Structures Considering Uncertainties in System Parameters

    Directory of Open Access Journals (Sweden)

    Douglas Domingues Bueno

    2008-01-01

    This paper deals with the study of algorithms for robust active vibration control in flexible structures considering uncertainties in system parameters. The area has attracted enormous interest, mainly due to the countless demands for optimal performance in mechanical systems such as aircraft, aerospace, and automotive structures. An important and difficult problem in designing active vibration control is obtaining a representative dynamic model. Generally, this model can be obtained using the finite element method (FEM) or an identification method using experimental data. Actuators and sensors may affect the dynamic properties of the structure; for instance, the electromechanical coupling of piezoelectric material must be considered in the FEM formulation for flexible and lightly damped structures. The nonlinearities and uncertainties involved make this a difficult task, mainly for complex structures such as spatial truss structures. On the other hand, by using an identification method, it is possible to obtain a dynamic model represented through a state-space realization that accounts for this coupling. This paper proposes an experimental methodology for vibration control in a 3D truss structure using PZT wafer stacks and a robust control algorithm solved by linear matrix inequalities.

  15. Uncertainty of forest carbon stock changes. Implications to the total uncertainty of GHG inventory of Finland

    International Nuclear Information System (INIS)

    Monni, S.; Savolainen, I.; Peltoniemi, M.; Lehtonen, A.; Makipaa, R.; Palosuo, T.

    2007-01-01

    Uncertainty analysis facilitates identification of the most important categories affecting greenhouse gas (GHG) inventory uncertainty and helps in prioritisation of the efforts needed for development of the inventory. This paper presents an uncertainty analysis of GHG emissions of all Kyoto sectors and gases for Finland consolidated with estimates of emissions/removals from LULUCF categories. In Finland, net GHG emissions in 2003 were around 69 Tg (±15 Tg) CO2 equivalents. The uncertainties in forest carbon sink estimates in 2003 were larger than in most other emission categories, but of the same order of magnitude as in carbon stock change estimates in other land use, land-use change and forestry (LULUCF) categories, and in N2O emissions from agricultural soils. Uncertainties in sink estimates of 1990 were lower, due to better availability of data. Results of this study indicate that inclusion of the forest carbon sink to GHG inventories reported to the UNFCCC increases uncertainties in net emissions notably. However, the decrease in precision is accompanied by an increase in the accuracy of the overall net GHG emissions due to improved completeness of the inventory. The results of this study can be utilised when planning future GHG mitigation protocols and emission trading schemes and when analysing environmental benefits of climate conventions

  16. Uncertainty Quantification in Experimental Structural Dynamics Identification of Composite Material Structures

    DEFF Research Database (Denmark)

    Luczak, Marcin; Peeters, Bart; Kahsin, Maciej

    2014-01-01

    Aerospace and wind energy structures make extensive use of components made of composite materials. Since these structures are subjected to dynamic environments with time-varying loading conditions, it is important to model their dynamic behavior and validate these models by means of vibration testing, which motivates uncertainty evaluation in experimentally estimated models. The investigated structures are plates, fuselage panels and helicopter main rotor blades, as they represent different complexity levels ranging from coupon, through sub-component, up to fully assembled structures made of composite materials.

  17. Treatment of uncertainties in atmospheric chemical systems: A combined modeling and experimental approach

    Science.gov (United States)

    Pun, Betty Kong-Ling

    1998-12-01

    Uncertainty is endemic in modeling. This thesis is a two-phase program to understand the uncertainties in urban air pollution model predictions and in the field data used to validate them. Part I demonstrates how to improve atmospheric models by analyzing the uncertainties in these models and using the results to guide new experimentation endeavors. Part II presents an experiment designed to characterize atmospheric fluctuations, which have significant implications for the model validation process. A systematic study was undertaken to investigate the effects of uncertainties in the SAPRC mechanism for gas-phase chemistry in polluted atmospheres. The uncertainties of more than 500 parameters were compiled, including reaction rate constants, product coefficients, organic composition, and initial conditions. Uncertainty propagation using the Deterministic Equivalent Modeling Method (DEMM) revealed that the uncertainties in ozone predictions can be up to 45% based on these parametric uncertainties. The key parameters found to dominate the uncertainties of the predictions include the photolysis rates of NO2, O3, and formaldehyde; the rate constant for nitric acid formation; and the initial amounts of NOx and VOC. Similar uncertainty analysis procedures applied to two other mechanisms used in regional air quality models led to the conclusion that, in the presence of parametric uncertainties, the mechanisms cannot be discriminated. Research efforts should focus on reducing parametric uncertainties in photolysis rates, reaction rate constants, and source terms. A new tunable diode laser (TDL) infrared spectrometer was designed and constructed to measure multiple pollutants simultaneously in the same ambient air parcels. The sensitivities of the one-hertz measurements were 2 ppb for ozone, 1 ppb for NO, and 0.5 ppb for NO2. Meteorological data were also collected for wind, temperature, and UV intensity. The field data showed clear correlations between ozone, NO, and NO2 in the one

  18. Propagation of experimental uncertainties using the Lipari-Szabo model-free analysis of protein dynamics

    International Nuclear Information System (INIS)

    Jin Danqing; Andrec, Michael; Montelione, Gaetano T.; Levy, Ronald M.

    1998-01-01

    In this paper we make use of the graphical procedure previously described [Jin, D. et al. (1997) J. Am. Chem. Soc., 119, 6923-6924] to analyze NMR relaxation data using the Lipari-Szabo model-free formalism. The graphical approach is advantageous in that it allows the direct visualization of the experimental uncertainties in the motional parameter space. Some general 'rules' describing the relationship between the precision of the relaxation measurements and the precision of the model-free parameters and how this relationship changes with the overall tumbling time (τm) are summarized. The effect of the precision in the relaxation measurements on the detection of internal motions not close to the extreme narrowing limit is analyzed. We also show that multiple timescale internal motions may be obscured by experimental uncertainty, and that the collection of relaxation data at very high field strength can improve the ability to detect such deviations from the simple Lipari-Szabo model

  19. Application of bias factor method with use of virtual experimental value to prediction uncertainty reduction in void reactivity worth of breeding light water reactor

    International Nuclear Information System (INIS)

    Kugo, Teruhiko; Mori, Takamasa; Kojima, Kensuke; Takeda, Toshikazu

    2007-01-01

    We have carried out the critical experiments for the MOX fueled tight lattice LWR cores using FCA facility and constructed the XXII-1 series cores. Utilizing the critical experiments carried out at FCA, we have evaluated the reduction of prediction uncertainty in the coolant void reactivity worth of the breeding LWR core based on the bias factor method with focusing on the prediction uncertainty due to cross section errors. In the present study, we have introduced a concept of a virtual experimental value into the conventional bias factor method to overcome a problem caused by the conventional bias factor method in which the prediction uncertainty increases in the case that the experimental core has the opposite reactivity worth and the consequent opposite sensitivity coefficients to the real core. To extend the applicability of the bias factor method, we have adopted an exponentiated experimental value as the virtual experimental value and formulated the prediction uncertainty reduction by the use of the bias factor method extended by the concept of the virtual experimental value. From the numerical evaluation, it has been shown that the prediction uncertainty due to cross section errors has been reduced by the use of the concept of the virtual experimental value. It is concluded that the introduction of virtual experimental value can effectively utilize experimental data and extend applicability of the bias factor method. (author)

  20. Quantification of variability and uncertainty in lawn and garden equipment NOx and total hydrocarbon emission factors.

    Science.gov (United States)

    Frey, H Christopher; Bammi, Sachin

    2002-04-01

    Variability refers to real differences in emissions among multiple emission sources at any given time or over time for any individual emission source. Variability in emissions can be attributed to variation in fuel or feedstock composition, ambient temperature, design, maintenance, or operation. Uncertainty refers to lack of knowledge regarding the true value of emissions. Sources of uncertainty include small sample sizes, bias or imprecision in measurements, nonrepresentativeness, or lack of data. Quantitative methods for characterizing both variability and uncertainty are demonstrated and applied to case studies of emission factors for lawn and garden (L&G) equipment engines. Variability was quantified using empirical and parametric distributions. Bootstrap simulation was used to characterize confidence intervals for the fitted distributions. The 95% confidence intervals for the mean grams per brake horsepower/hour (g/hp-hr) emission factors for two-stroke engine total hydrocarbon (THC) and NOx emissions were from -30 to +41% and from -45 to +75%, respectively. The confidence intervals for four-stroke engines were from -33 to +46% for THCs and from -27 to +35% for NOx. These quantitative measures of uncertainty convey information regarding the quality of the emission factors and serve as a basis for calculation of uncertainty in emission inventories (EIs).
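    A minimal sketch of the bootstrap confidence-interval idea used in this study, applied to an invented set of THC emission factors (the values, sample size and resulting interval are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical g/hp-hr THC emission factors measured on a handful of engines.
emission_factors = np.array([3.1, 5.6, 2.4, 8.9, 4.2, 6.7, 3.8, 12.1, 5.0, 2.9])

# Bootstrap: resample with replacement and recompute the mean each time.
n_boot = 5000
boot_means = np.array([rng.choice(emission_factors, size=emission_factors.size,
                                  replace=True).mean() for _ in range(n_boot)])

mean = emission_factors.mean()
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {mean:.2f} g/hp-hr, "
      f"95% CI = ({(lo / mean - 1):+.0%}, {(hi / mean - 1):+.0%}) of the mean")
```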

  1. Procedure for statistical analysis of one-parameter discrepant experimental data

    International Nuclear Information System (INIS)

    Badikov, Sergey A.; Chechev, Valery P.

    2012-01-01

    A new, Mandel–Paule-type procedure for statistical processing of one-parameter discrepant experimental data is described. The procedure enables one to estimate a contribution of unrecognized experimental errors into the total experimental uncertainty as well as to include it in analysis. A definition of discrepant experimental data for an arbitrary number of measurements is introduced as an accompanying result. In the case of negligible unrecognized experimental errors, the procedure simply reduces to the calculation of the weighted average and its internal uncertainty. The procedure was applied to the statistical analysis of half-life experimental data; Mean half-lives for 20 actinides were calculated and results were compared to the ENSDF and DDEP evaluations. On the whole, the calculated half-lives are consistent with the ENSDF and DDEP evaluations. However, the uncertainties calculated in this work essentially exceed the ENSDF and DDEP evaluations for discrepant experimental data. This effect can be explained by adequately taking into account unrecognized experimental errors. - Highlights: ► A new statistical procedure for processing one-parametric discrepant experimental data has been presented. ► Procedure estimates a contribution of unrecognized errors in the total experimental uncertainty. ► Procedure was applied for processing half-life discrepant experimental data. ► Results of the calculations are compared to the ENSDF and DDEP evaluations.
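    A minimal sketch of a Mandel–Paule-type adjustment, assuming the usual formulation in which an extra variance term (representing unrecognized errors) is increased until the weighted chi-square matches its expectation; the half-life values and uncertainties are invented and the published procedure may differ in detail.

```python
import numpy as np

def paule_mandel(x, u, tol=1e-12):
    """Weighted mean of discrepant data with an extra variance s2 attributed to
    unrecognized errors, chosen so that the weighted chi-square equals n - 1."""
    x, u = np.asarray(x, float), np.asarray(u, float)
    n = x.size

    def excess_chi2(s2):
        w = 1.0 / (u**2 + s2)
        mean = np.sum(w * x) / np.sum(w)
        return np.sum(w * (x - mean)**2) - (n - 1)

    s2 = 0.0
    if excess_chi2(0.0) > 0.0:            # data are discrepant
        lo, hi = 0.0, np.var(x)
        while excess_chi2(hi) > 0.0:      # bracket the root
            hi *= 2.0
        for _ in range(200):              # bisection on the monotone function
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if excess_chi2(mid) > 0.0 else (lo, mid)
            if hi - lo < tol:
                break
        s2 = 0.5 * (lo + hi)

    w = 1.0 / (u**2 + s2)
    mean = np.sum(w * x) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w)), np.sqrt(s2)

# Invented discrepant half-life measurements (years) with stated uncertainties.
mean, unc, s = paule_mandel([432.1, 432.7, 433.9, 431.5], [0.2, 0.3, 0.4, 0.3])
print(f"weighted mean = {mean:.2f} +/- {unc:.2f}, unrecognized-error term = {s:.2f}")
```

    In the consistent-data case the extra variance stays at zero and the result reduces to the ordinary weighted average with its internal uncertainty, as described in the record.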

  2. Effects of Uncertainty and Spatial Variability on Seepage into Drifts in the Yucca Mountain Total system Performance Assessment Model

    International Nuclear Information System (INIS)

    Kalinich, D. A.; Wilson, M. L.

    2001-01-01

    Seepage into the repository drifts is an important factor in total-system performance. Uncertainty and spatial variability are considered in the seepage calculations. The base-case results show 13.6% of the waste packages (WPs) have seepage. For 5th percentile uncertainty, 4.5% of the WPs have seepage and the seepage flow decreased by a factor of 2. For 95th percentile uncertainty, 21.5% of the WPs have seepage and the seepage flow increased by a factor of 2. Ignoring spatial variability resulted in seepage on 100% of the WPs, with a factor of 3 increase in the seepage flow

  3. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
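    For statistics that are monotone in each observation, such as the mean and the median, the interval bounds follow directly from the endpoint data; a minimal sketch with invented intervals (the harder quantities treated in the report, such as variance bounds, are not attempted here):

```python
import numpy as np

# Hypothetical measurements reported as epistemic intervals [low, high].
intervals = np.array([[1.8, 2.4], [2.0, 2.1], [1.5, 2.9], [2.2, 2.6], [1.9, 2.3]])

lo, hi = intervals[:, 0], intervals[:, 1]

# Monotone statistics: the envelope comes directly from the interval endpoints.
mean_bounds = (lo.mean(), hi.mean())
median_bounds = (np.median(lo), np.median(hi))

print("sample mean lies in  ", mean_bounds)
print("sample median lies in", median_bounds)
```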

  4. Uncertainty analysis of a nondestructive radioassay system for transuranic waste

    International Nuclear Information System (INIS)

    Harker, Y.D.; Blackwood, L.G.; Meachum, T.R.; Yoon, W.Y.

    1996-01-01

    Radioassay of transuranic waste in 207 liter drums currently stored at the Idaho National Engineering Laboratory is achieved using a Passive Active Neutron (PAN) nondestructive assay system. In order to meet data quality assurance requirements for shipping and eventual permanent storage of these drums at the Waste Isolation Pilot Plant in Carlsbad, New Mexico, the total uncertainty of the PAN system measurements must be assessed. In particular, the uncertainty calculations are required to include the effects of variations in waste matrix parameters and related variables on the final measurement results. Because of the complexities involved in introducing waste matrix parameter effects into the uncertainty calculations, standard methods of analysis (e.g., experimentation followed by propagation of errors) could not be implemented. Instead, a modified statistical sampling and verification approach was developed. In this modified approach the total performance of the PAN system is simulated using computer models of the assay system and the resultant output is compared with the known input to assess the total uncertainty. This paper describes the simulation process and illustrates its application to waste comprised of weapons grade plutonium-contaminated graphite molds

  5. NIST ThermoData Engine: Extension to Solvent Design and Propagation of Uncertainties for Process Simulation

    DEFF Research Database (Denmark)

    Diky, Vladimir; Chirico, Robert D.; Muzny, Chris

    ThermoData Engine (TDE, NIST Standard Reference Databases 103a and 103b) is the first product that implements the concept of Dynamic Data Evaluation in the fields of thermophysics and thermochemistry, which includes maintaining the comprehensive and up-to-date database of experimentally measured ...... uncertainties, curve deviations, and inadequacies of the models. Uncertainty analysis shows relative contributions to the total uncertainty from each component and pair of components....

  6. Evaluation of the 238U neutron total cross section

    International Nuclear Information System (INIS)

    Smith, A.; Poenitz, W.P.; Howerton, R.J.

    1982-12-01

    Experimental energy-averaged neutron total cross sections of 238 U were evaluated from 0.044 to 20.0 MeV using rigorous numerical methods. The evaluated results are presented together with the associated uncertainties and correlation matrix. They indicate that this energy-averaged neutron total cross section is known to better than 1% over wide energy regions. There are somewhat larger uncertainties at low energies (e.g., less than or equal to 0.2 MeV), near 8 MeV and above 15 MeV. The present evaluation is compared with values given in ENDF/B-V.

  7. Justification for recommended uncertainties

    International Nuclear Information System (INIS)

    Pronyaev, V.G.; Badikov, S.A.; Carlson, A.D.

    2007-01-01

    The uncertainties obtained in an earlier standards evaluation were considered to be unrealistically low by experts of the US Cross Section Evaluation Working Group (CSEWG). Therefore, the CSEWG Standards Subcommittee replaced the covariance matrices of evaluated uncertainties by expanded percentage errors that were assigned to the data over wide energy groups. There are a number of reasons that might lead to low uncertainties of the evaluated data: Underestimation of the correlations existing between the results of different measurements; The presence of unrecognized systematic uncertainties in the experimental data can lead to biases in the evaluated data as well as to underestimations of the resulting uncertainties; Uncertainties for correlated data cannot be characterized by percentage uncertainties or variances alone. Covariances between the evaluated value at 0.2 MeV and other points obtained in model (RAC R matrix and PADE2 analytical expansion) and non-model (GMA) fits of the 6 Li(n,t) TEST1 data and the correlation coefficients are presented, and covariances between the evaluated value at 0.045 MeV and other points (along the line or column of the matrix) as obtained in EDA and RAC R matrix fits of the data available for reactions that pass through the formation of the 7 Li system are discussed. The GMA fit with the GMA database is shown for comparison. The following diagrams are discussed: Percentage uncertainties of the evaluated cross section for the 6 Li(n,t) reaction and for the 235 U(n,f) reaction; estimation given by CSEWG experts; GMA result with full GMA database, including experimental data for the 6 Li(n,t), 6 Li(n,n) and 6 Li(n,total) reactions; uncertainties in the GMA combined fit for the standards; EDA and RAC R matrix results, respectively. Uncertainties of absolute and 252 Cf fission spectrum averaged cross section measurements, and deviations between measured and evaluated values for 235 U(n,f) cross-sections in the neutron energy range 1

  8. Verification of uncertainty budgets

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Madsen, B.S.

    2005-01-01

    , and therefore it is essential that the applicability of the overall uncertainty budget to actual measurement results be verified on the basis of current experimental data. This should be carried out by replicate analysis of samples taken in accordance with the definition of the measurand, but representing...... the full range of matrices and concentrations for which the budget is assumed to be valid. In this way the assumptions made in the uncertainty budget can be experimentally verified, both as regards sources of variability that are assumed negligible, and dominant uncertainty components. Agreement between...

  9. Uncertainty in relative energy resolution measurements

    International Nuclear Information System (INIS)

    Volkovitsky, P.; Yen, J.; Cumberland, L.

    2007-01-01

    We suggest a new method for the determination of the detector relative energy resolution and its uncertainty based on spline approximation of experimental spectra and a statistical bootstrapping procedure. The proposed method is applied to the spectra obtained with NaI(Tl) scintillating detectors and 137 Cs sources. The spectrum histogram with background subtracted channel-by-channel is modeled by cubic spline approximation. The relative energy resolution (which is also known as pulse height resolution and energy resolution), defined as the full-width at half-maximum (FWHM) divided by the value of peak centroid, is calculated using the intercepts of the spline curve with the line of the half peak height. The value of the peak height is determined as the point where the value of the derivative goes to zero. The residuals, which are normalized over the square root of counts in a given bin (y-coordinate), obey the standard Gaussian distribution. The values of these residuals are randomly re-assigned to a different set of y-coordinates where a new 'pseudo-experimental' data set is obtained after 'de-normalization' of the old values. For this new data set a new spline approximation is found and the whole procedure is repeated several hundred times, until the standard deviation of relative energy resolution becomes stabilized. The standard deviation of relative energy resolutions calculated for each 'pseudo-experimental' data set (bootstrap uncertainty) is considered to be an estimate for relative energy resolution uncertainty. It is also shown that the relative bootstrap uncertainty is proportional to, and generally only two to three times bigger than, 1/√(N tot ), which is the relative statistical count uncertainty (N tot is the total number of counts under the peak). The newly suggested method is also applicable to other radiation and particle detectors, not only for relative energy resolution, but also for any of the other parameters in a measured spectrum, like
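    A minimal sketch of the residual-bootstrap idea described above, applied to a synthetic Gaussian photopeak rather than a real NaI(Tl) spectrum; the smoothing-spline settings and peak search are simplified relative to the published procedure.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)

def rel_resolution(channels, counts, sigma):
    """FWHM / centroid from a weighted smoothing-spline fit to the peak."""
    spl = UnivariateSpline(channels, counts, w=1.0 / sigma, k=3, s=len(channels))
    fine = np.linspace(channels[0], channels[-1], 5000)
    y = spl(fine)
    peak = fine[np.argmax(y)]
    above = fine[y >= y.max() / 2.0]           # intercepts with the half-height line
    return (above[-1] - above[0]) / peak, spl

# Synthetic, background-subtracted photopeak (Poisson noise on a Gaussian).
ch = np.arange(600.0, 760.0)
truth = 4000.0 * np.exp(-0.5 * ((ch - 662.0) / 20.0) ** 2)
counts = rng.poisson(truth).astype(float)
sigma = np.sqrt(np.maximum(counts, 1.0))

nominal, spl = rel_resolution(ch, counts, sigma)

# Residual bootstrap: normalised residuals are randomly re-assigned to other
# channels and "de-normalised" to build pseudo-experimental spectra.
resid = (counts - spl(ch)) / sigma
boot = [rel_resolution(ch, spl(ch) + rng.permutation(resid) * sigma, sigma)[0]
        for _ in range(300)]

print(f"relative resolution = {nominal:.3f} +/- {np.std(boot):.3f}")
```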

  10. Study of Monte Carlo approach to experimental uncertainty propagation with MSTW 2008 PDFs

    CERN Document Server

    Watt, G.

    2012-01-01

    We investigate the Monte Carlo approach to propagation of experimental uncertainties within the context of the established 'MSTW 2008' global analysis of parton distribution functions (PDFs) of the proton at next-to-leading order in the strong coupling. We show that the Monte Carlo approach using replicas of the original data gives PDF uncertainties in good agreement with the usual Hessian approach using the standard Delta(chi^2) = 1 criterion, then we explore potential parameterisation bias by increasing the number of free parameters, concluding that any parameterisation bias is likely to be small, with the exception of the valence-quark distributions at low momentum fractions x. We motivate the need for a larger tolerance, Delta(chi^2) > 1, by making fits to restricted data sets and idealised consistent or inconsistent pseudodata. Instead of using data replicas, we alternatively produce PDF sets randomly distributed according to the covariance matrix of fit parameters including appropriate tolerance values,...

  11. Uncertainty analysis and validation of environmental models. The empirically based uncertainty analysis

    International Nuclear Information System (INIS)

    Monte, Luigi; Hakanson, Lars; Bergstroem, Ulla; Brittain, John; Heling, Rudie

    1996-01-01

    The principles of Empirically Based Uncertainty Analysis (EBUA) are described. EBUA is based on the evaluation of 'performance indices' that express the level of agreement between the model and sets of empirical independent data collected in different experimental circumstances. Some of these indices may be used to evaluate the confidence limits of the model output. The method is based on the statistical analysis of the distribution of the index values and on the quantitative relationship of these values with the ratio 'experimental data/model output'. Some performance indices are described in the present paper. Among these, the so-called 'functional distance' (d) between the logarithm of model output and the logarithm of the experimental data, defined as d² = (1/n) Σ_(i=1..n) (ln M_i − ln O_i)², where M_i is the i-th experimental value, O_i the corresponding model evaluation and n the number of the couplets 'experimental value, predicted value', is an important tool for the EBUA method. From the statistical distribution of this performance index, it is possible to infer the characteristics of the distribution of the ratio 'experimental data/model output' and, consequently, to evaluate the confidence limits for the model predictions. This method was applied to calculate the uncertainty level of a model developed to predict the migration of radiocaesium in lacustrine systems. Unfortunately, performance indices are affected by the uncertainty of the experimental data used in validation. Indeed, measurement results of environmental levels of contamination are generally associated with large uncertainty due to the measurement and sampling techniques and to the large variability in space and time of the measured quantities. It is demonstrated that this non-desired effect, in some circumstances, may be corrected by means of simple formulae
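    The functional distance itself is straightforward to compute; a minimal sketch with invented model/measurement pairs (the subsequent inference of confidence limits from the distribution of d is not reproduced here):

```python
import numpy as np

def functional_distance(model, observed):
    """RMS distance between log model output and log experimental data,
    as in the EBUA definition of d."""
    m, o = np.asarray(model, float), np.asarray(observed, float)
    return np.sqrt(np.mean((np.log(m) - np.log(o)) ** 2))

# Invented radiocaesium concentrations: model prediction vs measurement.
predicted = [120.0, 85.0, 40.0, 22.0, 11.0]
measured  = [150.0, 70.0, 55.0, 20.0,  9.0]
print(f"d = {functional_distance(predicted, measured):.2f}")
```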

  12. Iso-uncertainty control in an experimental fluoroscopy system

    International Nuclear Information System (INIS)

    Siddique, S.; Fiume, E.; Jaffray, D. A.

    2014-01-01

    Purpose: X-ray fluoroscopy remains an important imaging modality in a number of image-guided procedures due to its real-time nature and excellent spatial detail. However, the radiation dose delivered raises concerns about its use particularly in lengthy treatment procedures (>0.5 h). The authors have previously presented an algorithm that employs feedback of geometric uncertainty to control dose while maintaining a desired targeting uncertainty during fluoroscopic tracking of fiducials. The method was tested using simulations of motion against controlled noise fields. In this paper, the authors embody the previously reported method in a physical prototype and present changes to the controller required to function in a practical setting. Methods: The metric for feedback used in this study is based on the trace of the covariance of the state of the system, tr(C). The state is defined here as the 2D location of a fiducial on a plane parallel to the detector. A relationship between this metric and the tube current is first developed empirically. This relationship is extended to create a manifold that incorporates a latent variable representing the estimated background attenuation. The manifold is then used within the controller to dynamically adjust the tube current and maintain a specified targeting uncertainty. To evaluate the performance of the proposed method, an acrylic sphere (1.6 mm in diameter) was tracked at tube currents ranging from 0.5 to 0.9 mA (0.033 s) at a fixed energy of 80 kVp. The images were acquired on a Varian Paxscan 4030A (2048 × 1536 pixels, ∼100 cm source-to-axis distance, ∼160 cm source-to-detector distance). The sphere was tracked using a particle filter under two background conditions: (1) uniform sheets of acrylic and (2) an acrylic wedge. The measured tr(C) was used in conjunction with a learned manifold to modulate the tube current in order to maintain a specified uncertainty as the sphere traversed regions of varying thickness

  13. Total Monte-Carlo method applied to the assessment of uncertainties in a reactivity-initiated accident

    Energy Technology Data Exchange (ETDEWEB)

    Cruz, D.F. da; Rochman, D.; Koning, A.J. [Nuclear Research and Consultancy Group NRG, Petten (Netherlands)

    2014-07-01

    The Total Monte-Carlo (TMC) method has been applied extensively since 2008 to propagate the uncertainties in nuclear data for reactor parameters and fuel inventory, and for several types of advanced nuclear systems. The analyses have been performed considering different levels of complexity, ranging from a single fuel rod to a full 3-D reactor core at steady-state. The current work applies the TMC method for a full 3-D pressurized water reactor core model under steady-state and transient conditions, considering thermal-hydraulic feedback. As a transient scenario the study focused on a reactivity-initiated accident, namely a control rod ejection accident initiated by a mechanical failure of the control rod drive mechanism. The uncertainties on the main reactor parameters due to variations in nuclear data for the isotopes 235 U, 238 U, 239 Pu and thermal scattering data for 1 H in water were quantified. (author)

  14. Experimental data bases useful for quantification of model uncertainties in best estimate codes

    International Nuclear Information System (INIS)

    Wilson, G.E.; Katsma, K.R.; Jacobson, J.L.; Boodry, K.S.

    1988-01-01

    A data base is necessary for assessment of thermal hydraulic codes within the context of the new NRC ECCS Rule. Separate effect tests examine particular phenomena that may be used to develop and/or verify models and constitutive relationships in the code. Integral tests are used to demonstrate the capability of codes to model global characteristics and sequence of events for real or hypothetical transients. The nuclear industry has developed a large experimental data base of fundamental nuclear, thermal-hydraulic phenomena for code validation. Given a particular scenario, and recognizing the scenario's important phenomena, selected information from this data base may be used to demonstrate applicability of a particular code to simulate the scenario and to determine code model uncertainties. LBLOCA experimental data bases useful to this objective are identified in this paper. 2 tabs

  15. Experimental comparison between total calibration factors and components calibration factors of reference dosemeters used in secondary standard laboratory dosemeters

    International Nuclear Information System (INIS)

    Silva, T.A. da.

    1981-06-01

    A quantitative comparison of component calibration factors with the corresponding overall calibration factor was used to evaluate the adopted component calibration procedure in regard to parasitic elements. Judgement of significance is based upon the experimental uncertainty of a well established procedure for determination of the overall calibration factor. The experimental results obtained for different ionization chambers and different electrometers demonstrate that for one type of electrometer the parasitic elements have no influence on its sensitivity considering the experimental uncertainty of the calibration procedures. In this case the adopted procedure for determination of component calibration factors is considered to be equivalent to the procedure of determination of the overall calibration factor and thus might be used as a strong quality control measure in routine calibration. (Author)

  16. Determination of time-dependent uncertainty of the total solar irradiance records from 1978 to present

    Directory of Open Access Journals (Sweden)

    Fröhlich Claus

    2016-01-01

    Aims. The existing records of total solar irradiance (TSI) since 1978 differ not only in absolute values, but also show different trends. For the study of TSI variability these records need to be combined and three composites have been devised; however, the results depend on the choice of the records and the way they are combined. A new composite should be based on all existing records with an individual qualification. It is proposed to use a time-dependent uncertainty for weighting of the individual records. Methods. The determination of the time-dependent deviation of the TSI records is performed by comparison with the square root of the sunspot number (SSN). However, this correlation is only valid for timescales of the order of a year or more because TSI and SSN react quite differently to solar activity changes on shorter timescales. Hence the results concern only periods longer than the one-year-low-pass filter used in the analysis. Results. Besides the main objective to determine an investigator-independent uncertainty, the comparison of TSI with √SSN turns out to be a powerful tool for the study of the TSI long-term changes. The correlation of √SSN with TSI replicates very well the TSI minima, especially the very low value of the recent minimum. The results of the uncertainty determination confirm not only the need for adequate corrections for degradation, but also show that a rather detailed analysis is needed. The daily average of all TSI values available on that day, weighted with the correspondingly determined uncertainty, is used to construct a “new” composite, which, overall, compares well with the Physikalisch-Meteorologisches Observatorium Davos (PMOD) composite. Finally, the TSI − √SSN comparison proves to be an important diagnostic tool not only for estimating uncertainties of observations, but also for a better understanding of the long-term variability of TSI.
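    The weighting scheme described above reduces, for a single day, to an inverse-variance weighted mean of the available readings; a minimal sketch with invented radiometer values and uncertainties:

```python
import numpy as np

# Hypothetical TSI readings (W/m^2) from three radiometers on the same day,
# with their time-dependent 1-sigma uncertainties.
values = np.array([1360.9, 1361.4, 1360.6])
sigmas = np.array([0.4, 0.1, 0.3])

weights = 1.0 / sigmas**2
composite = np.sum(weights * values) / np.sum(weights)
composite_sigma = 1.0 / np.sqrt(np.sum(weights))
print(f"daily composite = {composite:.2f} +/- {composite_sigma:.2f} W/m^2")
```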

  17. Improved profile fitting and quantification of uncertainty in experimental measurements of impurity transport coefficients using Gaussian process regression

    International Nuclear Information System (INIS)

    Chilenski, M.A.; Greenwald, M.; Howard, N.T.; White, A.E.; Rice, J.E.; Walk, J.R.; Marzouk, Y.

    2015-01-01

    The need to fit smooth temperature and density profiles to discrete observations is ubiquitous in plasma physics, but the prevailing techniques for this have many shortcomings that cast doubt on the statistical validity of the results. This issue is amplified in the context of validation of gyrokinetic transport models (Holland et al 2009 Phys. Plasmas 16 052301), where the strong sensitivity of the code outputs to input gradients means that inadequacies in the profile fitting technique can easily lead to an incorrect assessment of the degree of agreement with experimental measurements. In order to rectify the shortcomings of standard approaches to profile fitting, we have applied Gaussian process regression (GPR), a powerful non-parametric regression technique, to analyse an Alcator C-Mod L-mode discharge used for past gyrokinetic validation work (Howard et al 2012 Nucl. Fusion 52 063002). We show that the GPR techniques can reproduce the previous results while delivering more statistically rigorous fits and uncertainty estimates for both the value and the gradient of plasma profiles with an improved level of automation. We also discuss how the use of GPR can allow for dramatic increases in the rate of convergence of uncertainty propagation for any code that takes experimental profiles as inputs. The new GPR techniques for profile fitting and uncertainty propagation are quite useful and general, and we describe the steps to implementation in detail in this paper. These techniques have the potential to substantially improve the quality of uncertainty estimates on profile fits and the rate of convergence of uncertainty propagation, making them of great interest for wider use in fusion experiments and modelling efforts. (paper)
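    Gaussian process regression of this kind is available off the shelf; a minimal sketch fitting a noisy synthetic profile and returning a fit with pointwise uncertainty (the kernel choice and data are illustrative, not those used in the paper):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Synthetic "temperature profile" observations vs normalised radius.
r = np.linspace(0.0, 1.0, 25)[:, None]
t_true = np.exp(-3.0 * r.ravel() ** 2)
t_obs = t_true + 0.03 * rng.normal(size=r.size)

# Squared-exponential kernel plus a white-noise term for measurement scatter.
kernel = RBF(length_scale=0.3) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(r, t_obs)

r_fine = np.linspace(0.0, 1.0, 200)[:, None]
mean, std = gpr.predict(r_fine, return_std=True)
print(f"fitted value near r = 0.5: {mean[100]:.3f} +/- {std[100]:.3f}")
```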

  18. Deterministic uncertainty analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1987-01-01

    Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig

  19. Creation of a long-term data record of total O3 - issues and challenges in prescribing the uncertainties

    Science.gov (United States)

    Haffner, D. P.; Bhartia, P. K.; Li, J. Y.

    2012-12-01

    With the launch of the BUV instrument on NASA's Nimbus-4 satellite in April 1970, ozone became one of the first atmospheric variables to be measured from space with high accuracy. By 1980, the quality of total column ozone measured from the TOMS instrument on the Nimbus-7 satellite had improved to the point that it started to be used to identify poorly calibrated instruments in the venerable Dobson ground-based network. Now we have a total ozone record spanning 42 years created by more than a dozen instruments. We will discuss the issues and challenges that we have faced in creating a consistent long-term record and in providing uncertainty estimates. This work is not yet finished. We are currently developing a new algorithm (Version 9) that will be used to reprocess the entire record. The main motivation for developing this algorithm is not so much to improve the quality of the data, which is quite high already, but to provide better estimates of uncertainties when errors are spatially and temporally correlated, and to develop better techniques to catch "Black Swan" events (BSE). These are events that occur infrequently but cause errors larger than expected by Gaussian probability distribution. For example, the eruption of El Chichón revealed that our ozone algorithm had unexpected sensitivity to volcanic SO2, and evidence of the ozone hole was initially interpreted as a problem with the TOMS instrument. We also provide mathematical operators that can be applied by sophisticated users to compute their own uncertainties for their particular applications. This is necessary because uncertainties change in complex ways when the data are smoothed or averaged. The modern data archival system should be designed to accommodate such operators and provide software for using them.

  20. Development of Property Models with Uncertainty Estimate for Process Design under Uncertainty

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens

    more reliable predictions with a new and improved set of model parameters for GC (group contribution) based and CI (atom connectivity index) based models and to quantify the uncertainties in the estimated property values from a process design point-of-view. This includes: (i) parameter estimation using....... The comparison of model prediction uncertainties with reported range of measurement uncertainties is presented for the properties with related available data. The application of the developed methodology to quantify the effect of these uncertainties on the design of different unit operations (distillation column......, the developed methodology can be used to quantify the sensitivity of process design to uncertainties in property estimates; obtain rationally the risk/safety factors in process design; and identify additional experimentation needs in order to reduce most critical uncertainties....

  1. Stereo-particle image velocimetry uncertainty quantification

    International Nuclear Information System (INIS)

    Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric

  2. Report on the uncertainty methods study

    International Nuclear Information System (INIS)

    1998-06-01

    The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes: the Pisa method (based on extrapolation from integral experiments) and four methods identifying and combining input uncertainties. Three of these, the GRS, IPSN and ENUSA methods, use subjective probability distributions, and one, the AEAT method, performs a bounding analysis. Each method has been used to calculate the uncertainty in specified parameters for the LSTF SB-CL-18 5% cold leg small break LOCA experiment in the ROSA-IV Large Scale Test Facility (LSTF). The uncertainty analysis was conducted essentially blind and the participants did not use experimental measurements from the test as input apart from initial and boundary conditions. Participants calculated uncertainty ranges for experimental parameters including pressurizer pressure, primary circuit inventory and clad temperature (at a specified position) as functions of time

  3. Collaborative framework for PIV uncertainty quantification: the experimental database

    International Nuclear Information System (INIS)

    Neal, Douglas R; Sciacchitano, Andrea; Scarano, Fulvio; Smith, Barton L

    2015-01-01

    The uncertainty quantification of particle image velocimetry (PIV) measurements has recently become a topic of great interest as shown by the recent appearance of several different methods within the past few years. These approaches have different working principles, merits and limitations, which have been speculated upon in subsequent studies. This paper reports a unique experiment that has been performed specifically to test the efficacy of PIV uncertainty methods. The case of a rectangular jet, as previously studied by Timmins et al (2012) and Wilson and Smith (2013b), is used. The novel aspect of the experiment is simultaneous velocity measurements using two different time-resolved PIV systems and a hot-wire anemometry (HWA) system. The first PIV system, called the PIV measurement system (‘PIV-MS’), is intended for nominal measurements of which the uncertainty is to be evaluated. It is based on a single camera and features a dynamic velocity range (DVR) representative of typical PIV experiments. The second PIV system, called the ‘PIV-HDR’ (high dynamic range) system, features a significantly higher DVR obtained with a higher digital imaging resolution. The hot-wire is placed in close proximity to the PIV measurement domain. The three measurement systems were carefully set to simultaneously measure the flow velocity at the same time and location. The comparison between the PIV-HDR system and the HWA provides an estimate of the measurement precision of the reference velocity for evaluation of the instantaneous error in the measurement system. The discrepancy between the PIV-MS and the reference data provides the measurement error, which is later used to assess the different uncertainty quantification methods proposed in the literature. A detailed comparison of the uncertainty estimation methods based on the present datasets is presented in a second paper from Sciacchitano et al (2015). Furthermore, this database offers the potential to be used for

  4. Embracing uncertainty in applied ecology.

    Science.gov (United States)

    Milner-Gulland, E J; Shea, K

    2017-12-01

    Applied ecologists often face uncertainty that hinders effective decision-making. Common traps that may catch the unwary are: ignoring uncertainty, acknowledging uncertainty but ploughing on, focussing on trivial uncertainties, believing your models, and unclear objectives. We integrate research insights and examples from a wide range of applied ecological fields to illustrate advances that are generally underused, but could facilitate ecologists' ability to plan and execute research to support management. Recommended approaches to avoid uncertainty traps are: embracing models, using decision theory, using models more effectively, thinking experimentally, and being realistic about uncertainty. Synthesis and applications. Applied ecologists can become more effective at informing management by using approaches that explicitly take account of uncertainty.

  5. Some sources of the underestimation of evaluated cross section uncertainties

    International Nuclear Information System (INIS)

    Badikov, S.A.; Gai, E.V.

    2003-01-01

    The problem of the underestimation of evaluated cross-section uncertainties is addressed. Two basic sources of the underestimation of evaluated cross-section uncertainties - a) inconsistency between declared and observable experimental uncertainties and b) inadequacy between applied statistical models and processed experimental data - are considered. Both sources of underestimation are mainly a consequence of the existence of uncertainties unrecognized by experimenters. A model of a 'constant shift' is proposed for taking unrecognised experimental uncertainties into account. The model is applied for statistical analysis of the 238 U(n,f)/ 235 U(n,f) reaction cross-section ratio measurements. It is demonstrated that multiplication by √(χ²) as an instrument for correction of underestimated evaluated cross-section uncertainties fails in the case of correlated measurements. It is shown that arbitrary assignment of uncertainties and correlation in a simple least squares fit of two correlated measurements of unknown mean leads to physically incorrect evaluated results. (author)

  6. Nuclear Physical Uncertainties in Modeling X-Ray Bursts

    Science.gov (United States)

    Regis, Eric; Amthor, A. Matthew

    2017-09-01

    Type I x-ray bursts occur when a neutron star accretes material from the surface of another star in a compact binary star system. For certain accretion rates and material compositions, much of the nuclear material is burned in short, explosive bursts. Using a one-dimensional stellar model, Kepler, and a comprehensive nuclear reaction rate library, ReacLib, we have simulated chains of type I x-ray bursts. Unfortunately, there are large remaining uncertainties in the nuclear reaction rates involved, since many of the isotopes reacting are unstable and have not yet been studied experimentally. Some individual reactions, when varied within their estimated uncertainty, alter the light curves dramatically. This limits our ability to understand the structure of the neutron star. Previous studies have looked at the effects of individual reaction rate uncertainties. We have applied a Monte Carlo method - simultaneously varying a set of reaction rates - in order to probe the expected uncertainty in x-ray burst behaviour due to the total uncertainty in all nuclear reaction rates. Furthermore, we aim to discover any nonlinear effects due to the coupling between different reaction rates. Early results show clear non-linear effects. This research was made possible by NSF-DUE Grant 1317446, BUScholars Program.
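    A minimal sketch of the sampling step in such a Monte Carlo study: every rate in a (toy) library is scaled by a log-normally distributed factor drawn from an assumed uncertainty, giving one randomized library per burst simulation. The reaction names, nominal rates, uncertainty factors and the burst-model hook are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy rate library: nominal rate and assumed uncertainty factor per reaction.
library = {
    "15O(a,g)19Ne":  (1.0e-3, 10.0),   # poorly known -> large factor
    "22Mg(p,g)23Al": (2.5e-2, 3.0),
    "59Cu(p,g)60Zn": (7.0e-1, 1.5),
}

def sample_library(lib):
    """Multiply every nominal rate by a log-normal variation factor."""
    return {name: nominal * factor ** rng.normal()
            for name, (nominal, factor) in lib.items()}

# One randomized library per Monte Carlo burst run.
for run in range(3):
    sampled = sample_library(library)
    # run_xray_burst(sampled)  # placeholder for the actual burst simulation
    print(run, {name: f"{rate:.2e}" for name, rate in sampled.items()})
```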

  7. Pulsed total dose damage effect experimental study on EPROM

    International Nuclear Information System (INIS)

    Luo Yinhong; Yao Zhibin; Zhang Fengqi; Guo Hongxia; Zhang Keying; Wang Yuanming; He Baoping

    2011-01-01

    Nowadays, memory radiation effect studies mainly focus on functionality measurement, and few measurable parameters are available in China. In view of this situation, a threshold voltage testing method was presented for floating gate EPROM memory, and an experimental study of the pulsed total dose effect on EPROM threshold voltage was carried out. The damage mechanism was analysed. The experimental results showed that a negative shift of the memory cell threshold voltage was caused by the pulsed total dose, and that the threshold voltage shift is essentially the same under steady bias supply and no bias supply. (authors)

  8. Projected uranium measurement uncertainties for the Gas Centrifuge Enrichment Plant

    International Nuclear Information System (INIS)

    Younkin, J.M.

    1979-02-01

    An analysis was made of the uncertainties associated with the measurements of the declared uranium streams in the Portsmouth Gas Centrifuge Enrichment Plant (GCEP). The total uncertainty for the GCEP is projected to be from 54 to 108 kg 235 U/year out of a measured total of 200,000 kg 235 U/year. The systematic component of uncertainty of the UF 6 streams is the largest and the dominant contributor to the total uncertainty. A possible scheme for reducing the total uncertainty is given
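    How stream-level uncertainties roll up into a plant total can be illustrated with a simple quadrature combination, assuming independence between streams; the stream names and numbers below are invented and the actual GCEP analysis is considerably more detailed.

```python
import numpy as np

# Invented per-stream uncertainties in kg 235U/year: (random, systematic).
streams = {"UF6 feed": (8.0, 40.0), "product": (6.0, 30.0), "tails": (3.0, 10.0)}

# Combine in quadrature, assuming the streams are independent of each other.
random_total = np.sqrt(sum(r**2 for r, _ in streams.values()))
systematic_total = np.sqrt(sum(s**2 for _, s in streams.values()))
total = np.sqrt(random_total**2 + systematic_total**2)

print(f"total ~ {total:.0f} kg 235U/year "
      f"(systematic part {systematic_total:.0f}, random part {random_total:.0f})")
```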

  9. Uncertainty analysis and flow measurements in an experimental mock-up of a molten salt reactor concept

    Energy Technology Data Exchange (ETDEWEB)

    Yamaji, Bogdan; Aszodi, Attila [Budapest University of Technology and Economics (Hungary). Inst. of Nuclear Techniques

    2016-09-15

    In the paper measurement results from the experimental modelling of a molten salt reactor concept will be presented along with detailed uncertainty analysis of the experimental system. Non-intrusive flow measurements are carried out on the scaled and segmented mock-up of a homogeneous, single region molten salt fast reactor concept. Uncertainty assessment of the particle image velocimetry (PIV) measurement system applied with the scaled and segmented model is presented in detail. The analysis covers the error sources of the measurement system (laser, recording camera, etc.) and the specific conditions (de-warping of measurement planes) originating in the geometry of the investigated domain. Effect of sample size in the ensemble averaged PIV measurements is discussed as well. An additional two-loop-operation mode is also presented and the analysis of the measurement results confirm that without enhancement nominal and other operation conditions will lead to strong unfavourable separation in the core flow. It implies that use of internal flow distribution structures will be necessary for the optimisation of the core coolant flow. Preliminary CFD calculations are presented to help the design of a perforated plate located above the inlet region. The purpose of the perforated plate is to reduce recirculation near the cylindrical wall and enhance the uniformity of the core flow distribution.

  10. Calibration Under Uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.

  11. Determination of total arsenic in fish by hydride-generation atomic absorption spectrometry: method validation, traceability and uncertainty evaluation

    Science.gov (United States)

    Nugraha, W. C.; Elishian, C.; Ketrin, R.

    2017-03-01

    Fish containing arsenic compounds are an important indicator of arsenic contamination in water monitoring. The high level of arsenic in fish is due to absorption through the food chain and accumulation in their habitat. Hydride generation (HG) coupled with atomic absorption spectrometric (AAS) detection is one of the most popular techniques employed for arsenic determination in a variety of matrices including fish. This study aimed to develop a method for the determination of total arsenic in fish by HG-AAS. The sample preparation method was adopted from Association of Official Analytical Chemists (AOAC) Method 999.10-2005 for acid digestion using a microwave digestion system, and AOAC Method 986.15-2005 for dry ashing. The method was developed and validated using Certified Reference Material DORM 3 Fish Protein for trace metals to ensure the accuracy and the traceability of the results. The sources of uncertainty of the method were also evaluated. Using the method, it was found that the total arsenic concentration in the fish was 45.6 ± 1.22 mg kg-1 with a coverage factor equal to 2 at the 95% confidence level. The evaluation of uncertainty was highly influenced by the calibration curve. This result was also traceable to the international standard system through analysis of Certified Reference Material DORM 3, with 97.5% recovery. In summary, the preparation method and the HG-AAS technique for total arsenic determination in fish were shown to be valid and reliable.
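    The calibration-curve contribution mentioned above is commonly evaluated with the standard linear-regression expression for a concentration read off the curve; a minimal sketch with invented calibration data (not the data of this study):

```python
import numpy as np

# Invented As calibration standards (ug/L) and their measured absorbances.
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
absb = np.array([0.002, 0.051, 0.103, 0.149, 0.202, 0.251])

b, a = np.polyfit(conc, absb, 1)            # slope, intercept
resid = absb - (a + b * conc)
s_yx = np.sqrt(np.sum(resid**2) / (conc.size - 2))   # residual standard deviation

y0, m = 0.125, 3                             # mean sample absorbance, replicates
x0 = (y0 - a) / b                            # concentration read off the curve
s_x0 = (s_yx / b) * np.sqrt(
    1 / m + 1 / conc.size +
    (y0 - absb.mean())**2 / (b**2 * np.sum((conc - conc.mean())**2)))

print(f"concentration = {x0:.2f} +/- {s_x0:.2f} ug/L (calibration contribution only)")
```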

  12. Development and optimization of nuclear heating and gamma flux measurement techniques in experimental reactors: identification, mastery, treatment and reduction of uncertainties

    International Nuclear Information System (INIS)

    Amharrak, H.

    2012-01-01

    This thesis work focuses on the needs for qualification of neutron and photonics calculation schemes in the future Jules Horowitz technological Reactor (RJH) and Pressurized Water Reactors (PWR). It is necessary to establish reliable measurement results with well defined associated uncertainties, for qualification and/or validation. The objective of this thesis is to develop and to improve the nuclear heating measurement methods (especially gamma photons) in MINERVE and EOLE experimental reactors at CEA-Cadarache, using thermo-luminescent detectors (TLD), optically stimulated luminescence detectors (OSLD) and an ionization chamber. It is to identify, prioritize, treat and reduce the various sources of uncertainty and systematic bias associated with the measurement. In a previous study, where nuclear heating was estimated from an integrated radiation dose by TLD in MINERVE and EOLE reactors, it has been shown that dose calculation underestimated the experiment by 25% with a total uncertainty of 15% (2σ). This systematic bias observed has been largely attributed to a lack of nuclear data used to perform the calculations. Therefore, in this work a new series of experiments was set up in the MINERVE reactor to reduce the measurement uncertainties, and better understand the origins of the discrepancies with the modeling. These experiments were carried out in an aluminum or hafnium surrounding (in specifically designed boxes) using a new procedure and analysis methodology. In these experiments, the TLD are calibrated individually, the repeatability of the measurement is experimentally evaluated and the laws of TLD heat are optimized. These improvements are subsequently used for the measurement of nuclear heating in AMMON program (EOLE reactor), dedicated to the qualification of neutron and photonics calculation schemes in the RJH reactor. The measurements of the gamma emitted, with a delay (delayed gamma) after shutdown of the MINERVE reactor, were also carried out

  13. Organic and total mercury determination in sediments by cold vapor atomic absorption spectrometry: methodology validation and uncertainty measurements

    Directory of Open Access Journals (Sweden)

    Robson L. Franklin

    2012-01-01

    The purpose of the present study was to validate a method for organic Hg determination in sediment. The procedure for organic Hg was adapted from the literature, in which the organomercurial compounds were extracted with dichloromethane in acid medium, followed by destruction of the organic compounds with bromine chloride. Total Hg determination was performed according to the 3051A USEPA methodology. Mercury quantification for both methodologies was then performed by CVAAS. Methodology validation was verified by analyzing certified reference materials for total Hg and methylmercury. The uncertainties for both methodologies were calculated. A quantification limit of 3.3 µg kg-1 was found for organic Hg by CVAAS.

  14. Total electron scattering cross section from pyridine molecules in the energy range 10-1000 eV

    Science.gov (United States)

    Dubuis, A. Traoré; Costa, F.; da Silva, F. Ferreira; Limão-Vieira, P.; Oller, J. C.; Blanco, F.; García, G.

    2018-05-01

    We report on experimental total electron scattering cross-section (TCS) from pyridine (C5H5N) for incident electron energies between 10 and 1000 eV, with experimental uncertainties within 5-10%, as measured with a double electrostatic analyser apparatus. The experimental results are compared with our theoretical calculations performed within the independent atom model complemented with a screening corrected additivity rule (IAM-SCAR) procedure which has been updated by including interference effects. A good level of agreement is found between both data sources within the experimental uncertainties. The present TCS results for electron impact energy under study contribute, together with other scattering data available in the literature, to achieve a consistent set of cross section data for modelling purposes.

  15. Passive active neutron radioassay measurement uncertainty for combustible and glass waste matrices

    International Nuclear Information System (INIS)

    Blackwood, L.G.; Harker, Y.D.; Meachum, T.R.; Yoon, Woo Y.

    1997-01-01

    Using a modified statistical sampling and verification approach, the total uncertainty of INEL's Passive Active Neutron (PAN) radioassay system was evaluated for combustible and glass content codes. Waste structure and content of 100 randomly selected drums in each of the waste categories were computer modeled based on review of real-time radiography video tapes. Specific quantities of Pu were added to the drum models according to an experimental design. These drum models were then submitted to the Monte Carlo Neutron Photon code processing and subsequent calculations to produce simulated PAN system measurements. The reported Pu masses from the simulation runs were compared with the corresponding input masses. Analysis of the measurement errors produced uncertainty estimates. This paper presents results of the uncertainty calculations and compares them to previously reported results obtained for graphite waste.

  16. The Uncertainty of Measurement Results

    Energy Technology Data Exchange (ETDEWEB)

    Ambrus, A. [Hungarian Food Safety Office, Budapest (Hungary)

    2009-07-15

    Factors affecting the uncertainty of measurement are explained, basic statistical formulae given, and the theoretical concept explained in the context of pesticide formulation analysis. Practical guidance is provided on how to determine individual uncertainty components within an analytical procedure. An extended and comprehensive table containing the relevant mathematical/statistical expressions elucidates the relevant underlying principles. Appendix I provides a practical elaborated example on measurement uncertainty estimation, above all utilizing experimental repeatability and reproducibility laboratory data. (author)

  17. Experimental Research Examining how People can Cope with Uncertainty through Soft Haptic Sensations

    NARCIS (Netherlands)

    Van Horen, F.; Mussweiler, T.

    2015-01-01

    Human beings are constantly surrounded by uncertainty and change. The question arises how people cope with such uncertainty. To date, most research has focused on the cognitive strategies people adopt to deal with uncertainty. However, especially when uncertainty is due to unpredictable societal

  18. A new method of body habitus correction for total body potassium measurements

    International Nuclear Information System (INIS)

    O'Hehir, S; Green, S; Beddoe, A H

    2006-01-01

    This paper describes an accurate and time-efficient method for the determination of total body potassium via a combination of measurements in the Birmingham whole body counter and the use of the Monte Carlo n-particle (MCNP) simulation code. In developing this method, MCNP has also been used to derive values for some components of the total measurement uncertainty which are difficult to quantify experimentally. A method is proposed for MCNP-assessed body habitus corrections based on a simple generic anthropomorphic model, scaled for individual height and weight. The use of this model increases patient comfort by reducing the need for comprehensive anthropomorphic measurements. The analysis shows that the total uncertainty in potassium weight determination by this whole body counting methodology for water-filled phantoms with a known amount of potassium is 2.7% (SD). The uncertainty in the method of body habitus correction (applicable also to phantom-based methods) is 1.5% (SD). It is concluded that this new strategy provides a sufficiently accurate model for routine clinical use

  19. A new method of body habitus correction for total body potassium measurements

    Energy Technology Data Exchange (ETDEWEB)

    O' Hehir, S [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom); Green, S [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom); Beddoe, A H [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom)

    2006-09-07

    This paper describes an accurate and time-efficient method for the determination of total body potassium via a combination of measurements in the Birmingham whole body counter and the use of the Monte Carlo n-particle (MCNP) simulation code. In developing this method, MCNP has also been used to derive values for some components of the total measurement uncertainty which are difficult to quantify experimentally. A method is proposed for MCNP-assessed body habitus corrections based on a simple generic anthropomorphic model, scaled for individual height and weight. The use of this model increases patient comfort by reducing the need for comprehensive anthropomorphic measurements. The analysis shows that the total uncertainty in potassium weight determination by this whole body counting methodology for water-filled phantoms with a known amount of potassium is 2.7% (SD). The uncertainty in the method of body habitus correction (applicable also to phantom-based methods) is 1.5% (SD). It is concluded that this new strategy provides a sufficiently accurate model for routine clinical use.

  20. Results from the Application of Uncertainty Methods in the CSNI Uncertainty Methods Study (UMS)

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Within licensing procedures there is an incentive to replace the conservative requirements for code application by a 'best estimate' concept supplemented by an uncertainty analysis to account for predictive uncertainties of code results. Methods have been developed to quantify these uncertainties. The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes. Most of the methods identify and combine input uncertainties. The major differences between the predictions of the methods came from the choice of uncertain parameters and the quantification of the input uncertainties, i.e. the width of the uncertainty ranges. Therefore, suitable experimental and analytical information has to be selected to specify these uncertainty ranges or distributions. After the closure of the Uncertainty Methods Study (UMS) and after the report was issued, comparison calculations of experiment LSTF-SB-CL-18 were performed by the University of Pisa using different versions of the RELAP5 code. It turned out that the version used by two of the participants calculated a 170 K higher peak clad temperature compared with other versions using the same input deck. This may contribute to the differences in the upper limit of the uncertainty ranges.

  1. Summary of existing uncertainty methods

    International Nuclear Information System (INIS)

    Glaeser, Horst

    2013-01-01

    A summary of the existing and most used uncertainty methods is presented, and their main features are compared. One of these methods is the order statistics method based on Wilks' formula. It is applied in safety research as well as in licensing. This method was first proposed by GRS for use in deterministic safety analysis, and is now used by many organisations world-wide. Its advantage is that the number of potential uncertain input and output parameters is not limited to a small number. Such a limitation was necessary for the first demonstration of the Code Scaling, Applicability and Uncertainty (CSAU) method by the United States Nuclear Regulatory Commission (USNRC). They did not apply Wilks' formula in their statistical method propagating input uncertainties to obtain the uncertainty of a single output variable, like peak cladding temperature. A Phenomena Identification and Ranking Table (PIRT) was set up in order to limit the number of uncertain input parameters and, consequently, the number of calculations to be performed. Another purpose of such a PIRT process is to identify the most important physical phenomena which a computer code should be able to calculate. The validation of the code should be focused on the identified phenomena. Response surfaces are used in some applications, replacing the computer code for performing a high number of calculations. The second well-known uncertainty method is the Uncertainty Methodology Based on Accuracy Extrapolation (UMAE) and the follow-up method 'Code with the Capability of Internal Assessment of Uncertainty' (CIAU) developed by the University of Pisa. Unlike the statistical approaches, the CIAU compares experimental data with calculation results. It does not consider uncertain input parameters. Therefore, the CIAU is highly dependent on the experimental database. The accuracy gained from the comparison between experimental data and calculated results is extrapolated to obtain the uncertainty of the system code predictions
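
    As an aside to the order-statistics method mentioned above, the following Python sketch computes the minimum number of code runs required by the first-order, one-sided Wilks formula (1 - coverage^N >= confidence); the function name and the example coverage/confidence values are illustrative and not taken from this record.

```python
def wilks_runs_one_sided(coverage=0.95, confidence=0.95):
    """Smallest number of runs N such that the maximum of N sampled outputs
    bounds the `coverage` quantile with the given `confidence`
    (first-order, one-sided Wilks criterion: 1 - coverage**N >= confidence)."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n


if __name__ == "__main__":
    # The familiar 95 %/95 % case used in best-estimate-plus-uncertainty analyses.
    print(wilks_runs_one_sided(0.95, 0.95))   # -> 59
    print(wilks_runs_one_sided(0.99, 0.95))   # -> 299
```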

  2. Validation of Fuel Performance Uncertainty for RIA Safety Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nam-Gyu; Yoo, Jong-Sung; Jung, Yil-Sup [KEPCO Nuclear Fuel Co., Daejeon (Korea, Republic of)

    2016-10-15

    To achieve this, the computer code performance has to be validated against experimental results, and for the uncertainty quantification, the important uncertainty parameters need to be selected and the combined uncertainty evaluated with an acceptable statistical treatment. The uncertainty parameters important to rod performance, such as fuel enthalpy, fission gas release, and cladding hoop strain, were chosen through rigorous sensitivity studies, and their validity was assessed by utilizing the experimental results from rods tested in CABRI and NSRR. The analysis revealed that several tested rods were not bounded within the combined fuel performance uncertainty. An assessment of fuel performance with an extended fuel power uncertainty on the rods tested in NSRR and CABRI was also performed; it likewise showed that several tested rods were not bounded within the calculated fuel performance uncertainty. This implies that the currently considered uncertainty range of the parameters is not wide enough to cover the fuel performance sufficiently.

  3. Evaluation of the uncertainty in an EBT3 film dosimetry system utilizing net optical density.

    Science.gov (United States)

    Marroquin, Elsa Y León; Herrera González, José A; Camacho López, Miguel A; Barajas, José E Villarreal; García-Garduño, Olivia A

    2016-09-08

    Radiochromic film has become an important tool to verify dose distributions for intensity-modulated radiotherapy (IMRT) and quality assurance (QA) procedures. A new radiochromic film model, EBT3, has recently become available, whose composition and thickness of the sensitive layer are the same as those of previous EBT2 films. However, a matte polyester layer was added to EBT3 to prevent the formation of Newton's rings. Furthermore, the symmetrical design of EBT3 allows the user to eliminate side-orientation dependence. This film and the flatbed scanner, Epson Perfection V750, form a dosimetry system whose intrinsic characteristics were studied in this work. In addition, uncertainties associated with these intrinsic characteristics and the total uncertainty of the dosimetry system were determined. The analysis of the response of the radiochromic film (net optical density) and the fitting of the experimental data to a potential function yielded an uncertainty of 2.6%, 4.3%, and 4.1% for the red, green, and blue channels, respectively. In this work, the dosimetry system presents an uncertainty in resolving the dose of 1.8% for doses greater than 0.8 Gy and less than 6 Gy for the red channel. The films irradiated between 0 and 120 Gy show differences in the response when scanned in portrait or landscape mode; less uncertainty was found when using the portrait mode. The response of the film depended on the position on the bed of the scanner, contributing an uncertainty of 2% for the red, 3% for the green, and 4.5% for the blue channel when placing the film around the center of the scanner bed. Furthermore, the uniformity and reproducibility of the radiochromic film and the reproducibility of the scanner response contribute less than 1% to the overall uncertainty in dose. Finally, the total dose uncertainty was 3.2%, 4.9%, and 5.2% for the red, green, and blue channels, respectively. The above uncertainty values were obtained by minimizing the contribution to the total dose uncertainty
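
    To illustrate the kind of calibration step described above (net optical density and a fit of dose versus netOD to a potential, i.e. power-law-type, function), here is a minimal Python sketch; the pixel values, dose points and the specific functional form are invented for illustration and are not the calibration data of this record.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative red-channel calibration data (dose in Gy, scanner pixel values); not from the paper.
dose = np.array([0.5, 1.0, 2.0, 4.0, 6.0])
pv_unexposed = 40000.0
pv_exposed = np.array([36500.0, 34000.0, 30500.0, 26000.0, 23500.0])

# Net optical density from the pixel values.
net_od = np.log10(pv_unexposed / pv_exposed)

# A "potential" (linear plus power-law) dose-response form often used for EBT-type films.
def dose_response(od, a, b, n):
    return a * od + b * od ** n

popt, pcov = curve_fit(dose_response, net_od, dose, p0=(10.0, 30.0, 2.0), maxfev=10000)
perr = np.sqrt(np.diag(pcov))   # standard uncertainties of the fit parameters
print("fit parameters:", popt)
print("parameter standard uncertainties:", perr)
```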

  4. The Total Cross Section at the LHC: Models and Experimental Consequences

    CERN Document Server

    Cudell, J R

    2010-01-01

    I review the predictions of the total cross section for many models, and point out that some of them lead to the conclusion that the standard experimental analysis may lead to systematic errors much larger than expected.

  5. Estimation of uncertainty of a reference material for proficiency testing for the determination of total mercury in fish in natura; Estimativa da incerteza de um material de referencia para ensaios de proficiencia para a determinacao de mercurio total em pescado in natura

    Energy Technology Data Exchange (ETDEWEB)

    Santana, L.V.; Sarkis, J.E.S.; Ulrich, J.C.; Hortellani, M.A., E-mail: santana-luciana@ig.com.br, E-mail: jesarkis@ipen.br, E-mail: jculrich@ipen.br, E-mail: mahortel@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    This study presents the uncertainty estimates for the characterization, homogeneity study, and stability study obtained in the preparation of a reference material for the determination of total mercury in fresh fish muscle tissue for proficiency testing. The stability results were evaluated by linear regression and the homogeneity study by one-way ANOVA; both showed that the material is homogeneous and stable. The total mercury concentration with expanded uncertainty for the material was 0.294 ± 0.089 μg g⁻¹. (author)
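
    A minimal sketch of how characterization, homogeneity and stability contributions are typically combined into an expanded uncertainty for a reference material is shown below; the individual standard-uncertainty values and the coverage factor k = 2 are assumed for illustration and are not this study's actual budget.

```python
import math

# Hypothetical standard-uncertainty contributions (µg/g); not the study's actual values.
u_characterization = 0.030
u_homogeneity = 0.020
u_stability = 0.015

# Combined standard uncertainty: root-sum-of-squares of independent contributions.
u_combined = math.sqrt(u_characterization**2 + u_homogeneity**2 + u_stability**2)

# Expanded uncertainty with coverage factor k = 2 (approximately 95 % coverage).
k = 2
U = k * u_combined
print(f"combined u = {u_combined:.3f} µg/g, expanded U (k=2) = {U:.3f} µg/g")
```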

  6. Uncertainty Quantification Analysis of Both Experimental and CFD Simulation Data of a Bench-scale Fluidized Bed Gasifier

    Energy Technology Data Exchange (ETDEWEB)

    Shahnam, Mehrdad [National Energy Technology Lab. (NETL), Morgantown, WV (United States). Research and Innovation Center, Energy Conversion Engineering Directorate; Gel, Aytekin [ALPEMI Consulting, LLC, Phoeniz, AZ (United States); Subramaniyan, Arun K. [GE Global Research Center, Niskayuna, NY (United States); Musser, Jordan [National Energy Technology Lab. (NETL), Morgantown, WV (United States). Research and Innovation Center, Energy Conversion Engineering Directorate; Dietiker, Jean-Francois [West Virginia Univ. Research Corporation, Morgantown, WV (United States)

    2017-10-02

    Adequate assessment of the uncertainties in modeling and simulation is becoming an integral part of simulation-based engineering design. The goal of this study is to demonstrate the application of a non-intrusive Bayesian uncertainty quantification (UQ) methodology to multiphase (gas-solid) flows with experimental and simulation data, as part of our research efforts to determine the approach best suited for UQ of a bench-scale fluidized bed gasifier. UQ analysis was first performed on the available experimental data. Global sensitivity analysis performed as part of the UQ analysis shows that, among the three operating factors, the steam-to-oxygen ratio has the most influence on syngas composition in the bench-scale gasifier experiments. An analysis for forward propagation of uncertainties was performed, and the results show that an increase in the steam-to-oxygen ratio leads to an increase in H2 mole fraction and a decrease in CO mole fraction. These findings are in agreement with the ANOVA analysis performed in the reference experimental study. A further contribution, in addition to the UQ analysis, is an optimization-based approach to identify the next best set of additional experimental samples, should the possibility arise for additional experiments. Hence, the surrogate models constructed as part of the UQ analysis are employed to improve the information gain and make incremental recommendations, should the possibility to add more experiments arise. In the second step, a series of simulations was carried out with the open-source computational fluid dynamics software MFiX to reproduce the experimental conditions, where three operating factors, i.e., coal flow rate, coal particle diameter, and steam-to-oxygen ratio, were systematically varied to understand their effect on the syngas composition. Bayesian UQ analysis was performed on the numerical results. As part of the Bayesian UQ analysis, a global sensitivity analysis was performed based on the simulation results, which shows

  7. Uncertainty, joint uncertainty, and the quantum uncertainty principle

    International Nuclear Information System (INIS)

    Narasimhachar, Varun; Poostindouz, Alireza; Gour, Gilad

    2016-01-01

    Historically, the element of uncertainty in quantum mechanics has been expressed through mathematical identities called uncertainty relations, a great many of which continue to be discovered. These relations use diverse measures to quantify uncertainty (and joint uncertainty). In this paper we use operational information-theoretic principles to identify the common essence of all such measures, thereby defining measure-independent notions of uncertainty and joint uncertainty. We find that most existing entropic uncertainty relations use measures of joint uncertainty that yield themselves to a small class of operational interpretations. Our notion relaxes this restriction, revealing previously unexplored joint uncertainty measures. To illustrate the utility of our formalism, we derive an uncertainty relation based on one such new measure. We also use our formalism to gain insight into the conditions under which measure-independent uncertainty relations can be found. (paper)

  8. Model uncertainty in safety assessment

    International Nuclear Information System (INIS)

    Pulkkinen, U.; Huovinen, T.

    1996-01-01

    The uncertainty analyses are an essential part of any risk assessment. Usually the uncertainties of reliability model parameter values are described by probability distributions and the uncertainty is propagated through the whole risk model. In addition to the parameter uncertainties, the assumptions behind the risk models may be based on insufficient experimental observations and the models themselves may not be exact descriptions of the phenomena under analysis. The description and quantification of this type of uncertainty, model uncertainty, is the topic of this report. The model uncertainty is characterized and some approaches to model and quantify it are discussed. The emphasis is on so-called mixture models, which have been applied in PSAs. Some of the possible disadvantages of the mixture model are addressed. In addition to quantitative analyses, qualitative analysis is also discussed briefly. To illustrate the models, two simple case studies on failure intensity and human error modeling are described. In both examples, the analysis is based on simple mixture models, which are observed to apply in PSA analyses. (orig.) (36 refs., 6 figs., 2 tabs.)

  9. Model uncertainty in safety assessment

    Energy Technology Data Exchange (ETDEWEB)

    Pulkkinen, U; Huovinen, T [VTT Automation, Espoo (Finland). Industrial Automation

    1996-01-01

    The uncertainty analyses are an essential part of any risk assessment. Usually the uncertainties of reliability model parameter values are described by probability distributions and the uncertainty is propagated through the whole risk model. In addition to the parameter uncertainties, the assumptions behind the risk models may be based on insufficient experimental observations and the models themselves may not be exact descriptions of the phenomena under analysis. The description and quantification of this type of uncertainty, model uncertainty, is the topic of this report. The model uncertainty is characterized and some approaches to model and quantify it are discussed. The emphasis is on so-called mixture models, which have been applied in PSAs. Some of the possible disadvantages of the mixture model are addressed. In addition to quantitative analyses, qualitative analysis is also discussed briefly. To illustrate the models, two simple case studies on failure intensity and human error modeling are described. In both examples, the analysis is based on simple mixture models, which are observed to apply in PSA analyses. (orig.) (36 refs., 6 figs., 2 tabs.).

  10. Uncertainty quantification in resonance absorption

    International Nuclear Information System (INIS)

    Williams, M.M.R.

    2012-01-01

    We assess the uncertainty in the resonance escape probability due to uncertainty in the neutron and radiation line widths for the first 21 resonances in 232Th as given by . Simulation, quadrature and polynomial chaos methods are used and the resonance data are assumed to obey a beta distribution. We find the uncertainty in the total resonance escape probability to be the equivalent, in reactivity, of 75–130 pcm. Also shown are pdfs of the resonance escape probability for each resonance and the variation of the uncertainty with temperature. The viability of the polynomial chaos expansion method is clearly demonstrated.

  11. Uncertainties in the Norwegian greenhouse gas emission inventory

    Energy Technology Data Exchange (ETDEWEB)

    Flugsrud, Ketil; Hoem, Britta

    2011-11-15

    The national greenhouse gas (GHG) emission inventory is compiled from estimates based on emission factors and activity data and from direct measurements by plants. All these data and parameters contribute to the overall inventory uncertainty. The uncertainties and probability distributions of the inventory input parameters have been assessed based on available data and expert judgements. Finally, the level and trend uncertainties of the national GHG emission inventory have been estimated using Monte Carlo simulation. The methods used in the analysis correspond to an IPCC tier 2 method, as described in the IPCC Good Practice Guidance (IPCC 2000). Analyses have been made both excluding and including the sector LULUCF (land use, land-use change and forestry). The uncertainty analysis performed in 2011 is an update of the uncertainty analyses performed for the greenhouse gas inventory in 2006 and 2000. During the project we have been in contact with experts and have collected information about uncertainty from them. The main focus has been on the source categories where changes have occurred since the last uncertainty analysis was performed in 2006. This includes new methodology for several source categories (for example for solvents and road traffic) as well as revised uncertainty estimates. For the installations included in the emission trading system, new information from the annual ETS reports about uncertainty in activity data and CO2 emission factor (and N2O emission factor for nitric acid production) has been used. This has improved the quality of the uncertainty estimates for the energy and manufacturing sectors. The results show that the uncertainty level in the total calculated greenhouse gas emissions for 2009 is around 4 per cent. When including the LULUCF sector, the total uncertainty is around 17 per cent in 2009. The uncertainty estimate is lower now than previous analyses have shown. This is partly due to considerable work done to improve
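
    The tier 2 approach referred to above propagates input distributions by Monte Carlo simulation. The sketch below illustrates the general idea for a few hypothetical source categories (activity data × emission factor); the category names, values and distribution choices are assumptions, not the Norwegian inventory data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical source categories: (mean activity, relative SD) and (mean EF, relative SD).
sources = {
    "energy":      ((1000.0, 0.02), (2.5, 0.05)),
    "agriculture": (( 300.0, 0.10), (1.8, 0.30)),
    "waste":       (( 120.0, 0.15), (0.9, 0.40)),
}

total = np.zeros(n)
for activity, ef in sources.values():
    a = rng.normal(activity[0], activity[0] * activity[1], n)
    # Lognormal emission factor; sigma of the log is used as an approximate relative SD.
    f = rng.lognormal(np.log(ef[0]), ef[1], n)
    total += a * f

mean = total.mean()
lo, hi = np.percentile(total, [2.5, 97.5])
print(f"total: {mean:.0f}, 95 % interval: [{lo:.0f}, {hi:.0f}] "
      f"(~±{(hi - lo) / 2 / mean * 100:.1f} %)")
```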

  12. An experimental study on total dose effects in SRAM-based FPGAs

    International Nuclear Information System (INIS)

    Yao Zhibin; He Baoping; Zhang Fengqi; Guo Hongxia; Luo Yinhong; Wang Yuanming; Zhang Keying

    2009-01-01

    In order to study testing methods and find sensitive parameters in total dose effects on SRAM-based FPGAs, XC2S100 chips were irradiated by 60Co γ-rays and tested with two test circuit designs. By analyzing the experimental results, the test flow of the configuration RAM and block RAM was given, and the most sensitive parameter was obtained. The results will be a solid foundation for establishing test specifications and evaluation methods for total dose effects on SRAM-based FPGAs. (authors)

  13. Testing methodologies for quantifying physical models uncertainties. A comparative exercise using CIRCE and IPREM (FFTBM)

    Energy Technology Data Exchange (ETDEWEB)

    Freixa, Jordi, E-mail: jordi.freixa-terradas@upc.edu; Alfonso, Elsa de, E-mail: elsa.de.alfonso@upc.edu; Reventós, Francesc, E-mail: francesc.reventos@upc.edu

    2016-08-15

    Highlights: • The uncertainty of physical models is a key issue in best estimate plus uncertainty analysis. • Estimation of uncertainties of physical models of thermal-hydraulic system codes. • Comparison of the CIRCÉ and FFTBM methodologies. • Simulation of reflood experiments in order to evaluate the uncertainty of physical models related to the reflood scenario. - Abstract: The increasing importance of Best-Estimate Plus Uncertainty (BEPU) analyses in nuclear safety and licensing processes has led to several international activities. The latest findings highlighted the uncertainties of physical models as one of the most controversial aspects of BEPU. This type of uncertainty is an important contributor to the total uncertainty of NPP BE calculations. Due to the complexity of estimating this uncertainty, it is often assessed solely by engineering judgment. The present study comprises a comparison of two different state-of-the-art methodologies, CIRCÉ and IPREM (FFTBM), capable of quantifying the uncertainty of physical models. Similarities and differences in their results are discussed through the observation of probability distribution functions and envelope calculations. In particular, the analyzed scenario is core reflood. Experimental data from the FEBA and PERICLES test facilities are employed, while the thermal-hydraulic simulations are carried out with RELAP5/mod3.3. This work is undertaken under the framework of the PREMIUM (Post-BEMUSE Reflood Model Input Uncertainty Methods) benchmark.

  14. Quantification of tomographic PIV uncertainty using controlled experimental measurements.

    Science.gov (United States)

    Liu, Ning; Wu, Yue; Ma, Lin

    2018-01-20

    The goal of this work was to experimentally quantify the uncertainty of three-dimensional (3D) and three-component (3C) velocity measurements using tomographic particle image velocimetry (tomo-PIV). Controlled measurements were designed using tracer particles embedded in a solid sample, and tomo-PIV measurements were performed on the sample while it was moved both translationally and rotationally to simulate various known displacement fields, so the 3D3C displacements measured by tomo-PIV can be directly compared to the known displacements created by the sample. The results illustrated that (1) the tomo-PIV technique was able to reconstruct the 3D3C velocity with an averaged error of 0.8-1.4 voxels in terms of magnitude and 1.7°-1.9° in terms of orientation for the velocity fields tested; (2) view registration (VR) plays a significant role in tomo-PIV, and by reducing VR error from 0.6° to 0.1°, the 3D3C measurement accuracy can be improved by at least 2.5 times in terms of both magnitude and orientation; and (3) the use of additional cameras in tomo-PIV can extend the 3D3C velocity measurement to a larger volume, while maintaining acceptable accuracy. These results obtained from controlled tests are expected to aid the error analysis and the design of tomo-PIV measurements.
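
    The magnitude and orientation errors quoted above can be obtained by comparing measured displacement vectors with the known, imposed displacements. The following sketch shows one straightforward way to compute such errors; the vectors are invented for illustration and do not reproduce the tomo-PIV data of this record.

```python
import numpy as np

# Known (imposed) and measured 3D displacement vectors, in voxels; illustrative values only.
known = np.array([[10.0, 0.0, 0.0], [7.0, 7.0, 0.0], [0.0, 5.0, 5.0]])
measured = np.array([[10.8, 0.4, -0.3], [7.5, 6.4, 0.6], [0.5, 5.6, 4.4]])

# Magnitude error in voxels for each vector.
mag_err = np.abs(np.linalg.norm(measured, axis=1) - np.linalg.norm(known, axis=1))

# Orientation error in degrees between each measured and known vector.
cosang = np.sum(measured * known, axis=1) / (
    np.linalg.norm(measured, axis=1) * np.linalg.norm(known, axis=1))
ang_err = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

print("mean magnitude error [voxels]:", mag_err.mean())
print("mean orientation error [deg]:", ang_err.mean())
```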

  15. Uncertainty and sensitivity analysis on probabilistic safety assessment of an experimental facility

    International Nuclear Information System (INIS)

    Burgazzi, L.

    2000-01-01

    The aim of this work is to perform an uncertainty and sensitivity analysis on the probabilistic safety assessment of the International Fusion Materials Irradiation Facility (IFMIF), in order to assess the effect on the final risk values of the uncertainties associated with the generic data used for the initiating events and component reliability, and to identify the key quantities contributing to this uncertainty. The analysis is conducted on the expected frequency calculated for the accident sequences, defined through event tree (ET) modeling. This is done to lend further credibility to the ET model quantification, to calculate frequency distributions for the occurrence of events and, consequently, to assess whether sequences have been correctly selected from the probability standpoint, and finally to verify the fulfillment of the safety conditions. The uncertainty and sensitivity analyses are performed using Monte Carlo sampling and an importance parameter technique, respectively. (author)
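
    As a rough illustration of propagating generic-data uncertainties through an event-tree sequence by Monte Carlo sampling, the sketch below samples an initiating-event frequency and two conditional failure probabilities from lognormal distributions and forms the sequence frequency distribution; all medians and error factors are hypothetical, not IFMIF data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Hypothetical generic data: medians and error factors (EF), lognormal model,
# with the EF defined at the 95th percentile (z = 1.645).
def lognormal_from_median_ef(median, error_factor, size):
    sigma = np.log(error_factor) / 1.645
    return rng.lognormal(np.log(median), sigma, size)

ie_freq = lognormal_from_median_ef(1e-2, 3.0, n)    # initiating event frequency [1/yr]
p_fail_1 = lognormal_from_median_ef(1e-3, 10.0, n)  # first mitigation system fails
p_fail_2 = lognormal_from_median_ef(5e-2, 5.0, n)   # second mitigation system fails

seq_freq = ie_freq * p_fail_1 * p_fail_2             # one accident sequence of the event tree

print("mean  :", seq_freq.mean())
print("median:", np.median(seq_freq))
print("5th/95th percentiles:", np.percentile(seq_freq, [5, 95]))
```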

  16. Radiofrequency Electromagnetic Radiation and Memory Performance: Sources of Uncertainty in Epidemiological Cohort Studies.

    Science.gov (United States)

    Brzozek, Christopher; Benke, Kurt K; Zeleke, Berihun M; Abramson, Michael J; Benke, Geza

    2018-03-26

    Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship.

  17. Radiofrequency Electromagnetic Radiation and Memory Performance: Sources of Uncertainty in Epidemiological Cohort Studies

    Directory of Open Access Journals (Sweden)

    Christopher Brzozek

    2018-03-01

    Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship.

  18. Evaluation of the uncertainty in an EBT3 film dosimetry system utilizing net optical density

    Science.gov (United States)

    Marroquin, Elsa Y. León; Herrera González, José A.; Camacho López, Miguel A.; Barajas, José E. Villarreal

    2016-01-01

    Radiochromic film has become an important tool to verify dose distributions for intensity-modulated radiotherapy (IMRT) and quality assurance (QA) procedures. A new radiochromic film model, EBT3, has recently become available, whose composition and thickness of the sensitive layer are the same as those of previous EBT2 films. However, a matte polyester layer was added to EBT3 to prevent the formation of Newton's rings. Furthermore, the symmetrical design of EBT3 allows the user to eliminate side-orientation dependence. This film and the flatbed scanner, Epson Perfection V750, form a dosimetry system whose intrinsic characteristics were studied in this work. In addition, uncertainties associated with these intrinsic characteristics and the total uncertainty of the dosimetry system were determined. The analysis of the response of the radiochromic film (net optical density) and the fitting of the experimental data to a potential function yielded an uncertainty of 2.6%, 4.3%, and 4.1% for the red, green, and blue channels, respectively. In this work, the dosimetry system presents an uncertainty in resolving the dose of 1.8% for doses greater than 0.8 Gy and less than 6 Gy for the red channel. The films irradiated between 0 and 120 Gy show differences in the response when scanned in portrait or landscape mode; less uncertainty was found when using the portrait mode. The response of the film depended on the position on the bed of the scanner, contributing an uncertainty of 2% for the red, 3% for the green, and 4.5% for the blue channel when placing the film around the center of the scanner bed. Furthermore, the uniformity and reproducibility of the radiochromic film and the reproducibility of the scanner response contribute less than 1% to the overall uncertainty in dose. Finally, the total dose uncertainty was 3.2%, 4.9%, and 5.2% for the red, green, and blue channels, respectively. The above uncertainty values were obtained by minimizing the contribution to the total dose

  19. Aleatoric and epistemic uncertainties in sampling based nuclear data uncertainty and sensitivity analyses

    International Nuclear Information System (INIS)

    Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.

    2012-01-01

    Sampling-based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of the uncertainty is minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible compared to the impact from sampling the epistemic uncertainties. Obviously, this process may cause high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with a much smaller number of Monte Carlo histories each. The method is applied, along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va, to various critical assemblies and a full-scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, with a large reduction of computing time, by factors of the order of 100. (authors)
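
    The record's two-series estimator is not reproduced here, but the toy sketch below illustrates the underlying idea of separating the epistemic (input-data) spread from the aleatoric Monte Carlo noise by variance subtraction when the per-run statistical variance is known; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100        # epistemic samples of uncertain input data
n_histories = 1_000    # deliberately small number of Monte Carlo histories per sample

# Toy model: "true" response for each epistemic sample, plus statistical (aleatoric) noise.
true_response = rng.normal(1.00, 0.02, n_samples)            # epistemic spread: 2 %
per_run_sigma = 0.05 / np.sqrt(n_histories)                   # statistical error of one run
observed = true_response + rng.normal(0.0, per_run_sigma, n_samples)

total_var = observed.var(ddof=1)
aleatoric_var = per_run_sigma**2                              # known from the MC run itself
epistemic_var = max(total_var - aleatoric_var, 0.0)

print(f"total sd     : {np.sqrt(total_var):.4f}")
print(f"epistemic sd : {np.sqrt(epistemic_var):.4f} (input-data contribution)")
```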

  20. Optical Model and Cross Section Uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Herman,M.W.; Pigni, M.T.; Dietrich, F.S.; Oblozinsky, P.

    2009-10-05

    Distinct minima and maxima in the neutron total cross section uncertainties were observed in model calculations using a spherical optical potential. We found this oscillating structure to be a general feature of quantum mechanical wave scattering. Specifically, we analyzed neutron interactions with 56Fe from 1 keV up to 65 MeV and investigated the physical origin of the minima. We discuss their potential importance for practical applications as well as the implications for the uncertainties in total and absorption cross sections.

  1. Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate

    Science.gov (United States)

    Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.

    2013-01-01

    There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis only from computational data. The method uses current CFD uncertainty techniques coupled with the Student-t distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied by a specified tolerance or bias error, and the differences in the results are used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations and conclusions drawn about the feasibility of using CFD without experimental data. The results provide a tactic to analytically estimate the uncertainty in a CFD model when experimental data are unavailable
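
    One simple way to attach a Student-t coverage factor to a set of CFD results obtained by perturbing the inputs within their tolerances is sketched below; the run values, the 95 % coverage choice and the use of the standard error of the mean are illustrative assumptions, not the paper's exact procedure.

```python
import math
from statistics import mean, stdev
from scipy import stats

# Heat-transfer coefficients [W/m^2-K] from CFD runs with each input varied by its tolerance.
# Values are illustrative, not from the paper.
h_runs = [102.0, 98.5, 101.2, 99.1, 103.4, 97.8]

h_mean = mean(h_runs)
s = stdev(h_runs)
n = len(h_runs)

# 95 % coverage factor from the Student-t distribution with n-1 degrees of freedom,
# applied here to the standard error of the mean prediction.
t95 = stats.t.ppf(0.975, df=n - 1)
u95 = t95 * s / math.sqrt(n)

print(f"h = {h_mean:.1f} ± {u95:.1f} W/m^2-K (95 %, Student-t, n = {n})")
```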

  2. Quantifying reactor safety margins: Application of CSAU [Code Scalability, Applicability and Uncertainty] methodology to LBLOCA: Part 3, Assessment and ranging of parameters for the uncertainty analysis of LBLOCA codes

    International Nuclear Information System (INIS)

    Wulff, W.; Boyack, B.E.; Duffey, R.B.

    1988-01-01

    Comparisons of results from TRAC-PF1/MOD1 code calculations with measurements from Separate Effects Tests, and published experimental data for modeling parameters have been used to determine the uncertainty ranges of code input and modeling parameters which dominate the uncertainty in predicting the Peak Clad Temperature for a postulated Large Break Loss of Coolant Accident (LBLOCA) in a four-loop Westinghouse Pressurized Water Reactor. The uncertainty ranges are used for a detailed statistical analysis to calculate the probability distribution function for the TRAC code-predicted Peak Clad Temperature, as is described in an attendant paper. Measurements from Separate Effects Tests and Integral Effects Tests have been compared with results from corresponding TRAC-PF1/MOD1 code calculations to determine globally the total uncertainty in predicting the Peak Clad Temperature for LBLOCAs. This determination is in support of the detailed statistical analysis mentioned above. The analyses presented here account for uncertainties in input parameters, in modeling and scaling, in computing and in measurements. The analyses are an important part of the work needed to implement the Code Scalability, Applicability and Uncertainty (CSAU) methodology. CSAU is needed to determine the suitability of a computer code for reactor safety analyses and the uncertainty in computer predictions. The results presented here are used to estimate the safety margin of a particular nuclear reactor power plant for a postulated accident. 25 refs., 10 figs., 11 tabs

  3. Recent experimental results on level densities for compound reaction calculations

    International Nuclear Information System (INIS)

    Voinov, A.V.

    2012-01-01

    There is a problem related to the choice of the level density input for Hauser-Feshbach model calculations. Modern computer codes have several options to choose from, but it is not clear which of them should be used in particular cases. The availability of many options helps to describe existing experimental data, but it creates problems when it comes to predictions. Traditionally, different level density systematics are based on experimental data from neutron resonance spacings, which are available for a limited spin interval and one parity only. On the other hand, reaction cross section calculations use the total level density. This can create large uncertainties when converting the neutron resonance spacing to the total level density, which results in sizable uncertainties in cross section calculations. It is clear now that total level densities need to be studied experimentally in a systematic manner. Such information can be obtained only from spectra of compound nuclear reactions. The question is: do level densities obtained from compound nuclear reactions exhibit the same regularities as level densities obtained from neutron resonances? Are they consistent? We measured level densities of 59-64Ni isotopes from proton evaporation spectra of 6,7Li-induced reactions. Experimental data are presented. Conclusions on how the level density depends on the neutron number and on the degree of proximity to the closed shell (56Ni) are drawn. The level density parameters have been compared with parameters obtained from the analysis of neutron resonances and from model predictions

  4. Research of relationship between uncertainty and investment

    Institute of Scientific and Technical Information of China (English)

    MENG Li; WANG Ding-wei

    2005-01-01

    This study focuses on revealing the relationship between uncertainty and investment probability through a real option model involving an investment critical trigger and project earnings. Applying Matlab software to the experimental results shows that project earning volatility influences investment probability. The authors conclude that the common notion that increasing uncertainty should always have an inhibiting effect on investment is not correct: in certain situations, increasing uncertainty actually increases the investment probability and so has a positive impact on investment.

  5. Simplified propagation of standard uncertainties

    International Nuclear Information System (INIS)

    Shull, A.H.

    1997-01-01

    An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine whether the standard is adequate for its intended use, or calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined or is determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. The calculations are simplified by dividing the uncertainty components into subgroups of absolute and relative uncertainties. These methods also incorporate the International Standards Organization (ISO) concepts for combining systematic and random uncertainties as published in their Guide to the Expression of Measurement Uncertainty. Details of the simplified methods and examples of their use are included in the paper
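
    The subgroup idea described above can be sketched as follows: absolute (additive) uncertainty contributions are converted to relative form and combined in quadrature with the relative (multiplicative) contributions, assuming independence. The example quantities and values below are hypothetical and only illustrate the bookkeeping.

```python
import math

# Example: a standard prepared by dissolving a weighed amount of material and diluting.
mass_mg = 250.0                     # weighed mass of the material

# Absolute contribution (same units as the weighed quantity).
u_abs_balance_mg = 0.05             # balance readability/calibration

# Relative contributions (from multiplicative steps), expressed as fractions.
u_rel_purity = 0.0010               # certified purity of the material
u_rel_volume = 0.0008               # volumetric flask tolerance
u_rel_temperature = 0.0003          # thermal expansion of the solution

# Convert the absolute part to relative form, then combine everything in quadrature.
u_rel_from_abs = u_abs_balance_mg / mass_mg
u_rel_total = math.sqrt(u_rel_from_abs**2 + u_rel_purity**2
                        + u_rel_volume**2 + u_rel_temperature**2)

print(f"relative combined standard uncertainty: {u_rel_total * 100:.3f} %")
```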

  6. Uncertainty, God, and scrupulosity: Uncertainty salience and priming God concepts interact to cause greater fears of sin.

    Science.gov (United States)

    Fergus, Thomas A; Rowatt, Wade C

    2015-03-01

    Difficulties tolerating uncertainty are considered central to scrupulosity, a moral/religious presentation of obsessive-compulsive disorder (OCD). We examined whether uncertainty salience (i.e., exposure to a state of uncertainty) caused fears of sin and fears of God, as well as whether priming God concepts affected the impact of uncertainty salience on those fears. An internet sample of community adults (N = 120) who endorsed holding a belief in God or a higher power were randomly assigned to an experimental manipulation of (1) salience (uncertainty or insecurity) and (2) prime (God concepts or neutral). As predicted, participants who received the uncertainty salience and God concept priming reported the greatest fears of sin. There were no mean-level differences in the other conditions. The effect was not attributable to religiosity and the manipulations did not cause negative affect. We used a nonclinical sample recruited from the internet. These results support cognitive-behavioral models suggesting that religious uncertainty is important to scrupulosity. Implications of these results for future research are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Neutronics experimental validation of the Jules Horowitz reactor fuel by interpretation of the VALMONT experimental program-transposition of the uncertainties on the reactivity of JHR with JEF2.2 and JEFF3.1.1

    International Nuclear Information System (INIS)

    Leray, O.; Hudelot, J.P.; Doederlein, C.; Vaglio-Gaudard, C.; Antony, M.; Santamarina, A.; Bernard, D.

    2012-01-01

    The new European material testing Jules Horowitz Reactor (JHR), currently under construction in the Cadarache center (CEA France), will use LEU (20% enrichment in 235U) fuels (U3Si2 for the start-up and UMoAl in the future) which are quite different from the industrial oxide fuel, for which an extensive neutronics experimental validation database has been established. The HORUS3D/N neutronics calculation scheme, used for the design and safety studies of the JHR, is being developed within the framework of a rigorous verification-numerical validation-experimental validation methodology. In this framework, the experimental VALMONT (Validation of Aluminium Molybdenum uranium fuel for Neutronics) program has been performed in the MINERVE facility of CEA Cadarache (France), in order to qualify the capability of HORUS3D/N to accurately calculate the reactivity of the JHR reactor. The MINERVE facility, using the oscillation technique, provides accurate measurements of the reactivity effect of samples. The VALMONT program includes oscillations of samples of UAl∞/Al and UMo/Al with enrichments ranging from 0.2% to 20% and uranium densities from 2.2 to 8 g/cm3. The geometry of the samples and the pitch of the experimental lattice ensure maximum representativeness with the neutron spectrum expected for JHR. By comparing the effect of the sample with that of a known fuel specimen, the reactivity effect can be measured in absolute terms and be compared to computational results. Special attention was paid to the rigorous determination and reduction of the experimental uncertainties. The calculational analysis of the VALMONT results was performed with the French deterministic code APOLLO2. A comparison of the impact of the different calculation methods, data libraries and energy meshes that were tested is presented. The interpretation of the VALMONT experimental program allowed the experimental validation of JHR fuel UMoAl8 (with an enrichment of 19.75% 235U) by the Minerve

  8. Micropollutants throughout an integrated urban drainage model: Sensitivity and uncertainty analysis

    Science.gov (United States)

    Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare

    2017-11-01

    The paper presents the sensitivity and uncertainty analysis of an integrated urban drainage model which includes micropollutants. Specifically, a bespoke integrated model developed in previous studies has been modified in order to include the micropollutant assessment (namely, sulfamethoxazole - SMX). The model also takes into account the interactions between the three components of the system: sewer system (SS), wastewater treatment plant (WWTP) and receiving water body (RWB). The analysis has been applied to an experimental catchment near Palermo (Italy): the Nocella catchment. Overall, five scenarios, each characterized by a different combination of sub-system uncertainties (i.e., SS, WWTP and RWB), have been considered, applying the Extended-FAST method for the sensitivity analysis in order to select the key factors affecting the RWB quality and to design a reliable/useful experimental campaign. Results have demonstrated that sensitivity analysis is a powerful tool for increasing operator confidence in the modelling results. The approach adopted here can be used for blocking some non-identifiable factors, thus wisely modifying the structure of the model and reducing the related uncertainty. The model factors related to the SS have been found to be the most relevant factors affecting the SMX modeling in the RWB when all model factors (scenario 1) or model factors of SS (scenarios 2 and 3) are varied. If only the factors related to the WWTP are changed (scenarios 4 and 5), the SMX concentration in the RWB is mainly influenced (up to 95% of the total variance for SSMX,max) by the aerobic sorption coefficient. A progressive uncertainty reduction from upstream to downstream was found for the soluble fraction of SMX in the RWB.

  9. Evaluating uncertainty in 7Be-based soil erosion estimates: an experimental plot approach

    Science.gov (United States)

    Blake, Will; Taylor, Alex; Abdelli, Wahid; Gaspar, Leticia; Barri, Bashar Al; Ryken, Nick; Mabit, Lionel

    2014-05-01

    Soil erosion remains a major concern for the international community and there is a growing need to improve the sustainability of agriculture to support future food security. High resolution soil erosion data are a fundamental requirement for underpinning soil conservation and management strategies but representative data on soil erosion rates are difficult to achieve by conventional means without interfering with farming practice and hence compromising the representativeness of results. Fallout radionuclide (FRN) tracer technology offers a solution since FRN tracers are delivered to the soil surface by natural processes and, where irreversible binding can be demonstrated, redistributed in association with soil particles. While much work has demonstrated the potential of short-lived 7Be (half-life 53 days), particularly in quantification of short-term inter-rill erosion, less attention has focussed on sources of uncertainty in derived erosion measurements and sampling strategies to minimise these. This poster outlines and discusses potential sources of uncertainty in 7Be-based soil erosion estimates and the experimental design considerations taken to quantify these in the context of a plot-scale validation experiment. Traditionally, gamma counting statistics have been the main element of uncertainty propagated and reported but recent work has shown that other factors may be more important such as: (i) spatial variability in the relaxation mass depth that describes the shape of the 7Be depth distribution for an uneroded point; (ii) spatial variability in fallout (linked to rainfall patterns and shadowing) over both reference site and plot; (iii) particle size sorting effects; (iv) preferential mobility of fallout over active runoff contributing areas. To explore these aspects in more detail, a plot of 4 x 35 m was ploughed and tilled to create a bare, sloped soil surface at the beginning of winter 2013/2014 in southwest UK. The lower edge of the plot was bounded by

  10. Estimating the uncertainty in thermochemical calculations for oxygen-hydrogen combustors

    Science.gov (United States)

    Sims, Joseph David

    The thermochemistry program CEA2 was combined with the statistical thermodynamics program PAC99 in a Monte Carlo simulation to determine the uncertainty in several CEA2 output variables due to uncertainty in thermodynamic reference values for the reactant and combustion species. In all, six typical performance parameters were examined, along with the required intermediate calculations (five gas properties and eight stoichiometric coefficients), for three hydrogen-oxygen combustors: a main combustor, an oxidizer preburner and a fuel preburner. The three combustors were analyzed in two different modes: design mode, where, for the first time, the uncertainty in thermodynamic reference values---taken from the literature---was considered (inputs to CEA2 were specified and so had no uncertainty); and data reduction mode, where inputs to CEA2 did have uncertainty. The inputs to CEA2 were contrived experimental measurements that were intended to represent the typical combustor testing facility. In design mode, uncertainties in the performance parameters were on the order of 0.1% for the main combustor, on the order of 0.05% for the oxidizer preburner and on the order of 0.01% for the fuel preburner. Thermodynamic reference values for H2O were the dominant sources of uncertainty, as was the assigned enthalpy for liquid oxygen. In data reduction mode, uncertainties in performance parameters increased significantly as a result of the uncertainties in experimental measurements compared to uncertainties in thermodynamic reference values. Main combustor and fuel preburner theoretical performance values had uncertainties of about 0.5%, while the oxidizer preburner had nearly 2%. Associated experimentally-determined performance values for all three combustors were 3% to 4%. The dominant sources of uncertainty in this mode were the propellant flowrates. These results only apply to hydrogen-oxygen combustors and should not be generalized to every propellant combination. Species for
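
    Since CEA2 and PAC99 are external codes, the sketch below only illustrates the generic Monte Carlo propagation idea described above, using a stand-in model function: a thermodynamic reference value is perturbed within its standard uncertainty and the resulting spread of a performance parameter is collected; all numbers and the model form are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials = 10_000

# Stand-in for a CEA2-style performance calculation: characteristic velocity c* as a
# function of an enthalpy-like input (purely illustrative, not the real model).
def cstar_model(delta_h_kj_per_mol):
    return 2300.0 + 4.0 * delta_h_kj_per_mol     # m/s

# Thermodynamic reference value with its standard uncertainty (hypothetical numbers).
delta_h_nominal = -241.8     # kJ/mol, e.g. an enthalpy of formation
u_delta_h = 0.04             # kJ/mol

samples = rng.normal(delta_h_nominal, u_delta_h, n_trials)
cstar = cstar_model(samples)

rel_unc = cstar.std(ddof=1) / cstar.mean() * 100
print(f"c* = {cstar.mean():.1f} m/s, relative uncertainty = {rel_unc:.3f} %")
```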

  11. On the uncertainty principle. V

    International Nuclear Information System (INIS)

    Halpern, O.

    1976-01-01

    The treatment of ideal experiments connected with the uncertainty principle is continued. The author analyzes successively measurements of momentum and position, and discusses the common reason why the results in all cases differ from the conventional ones. A similar difference exists for the measurement of field strengths. The interpretation given by Weizsaecker, who tried to interpret Bohr's complementarity principle by introducing a multi-valued logic is analyzed. The treatment of the uncertainty principle ΔE Δt is deferred to a later paper as is the interpretation of the method of variation of constants. Every ideal experiment discussed shows various lower limits for the value of the uncertainty product which limits depend on the experimental arrangement and are always (considerably) larger than h. (Auth.)

  12. Treatment of experimental myasthenia gravis with total lymphoid irradiation

    International Nuclear Information System (INIS)

    de Silva, S.; Blum, J.E.; McIntosh, K.R.; Order, S.; Drachman, D.B.

    1988-01-01

    Total lymphoid irradiation (TLI) has been reported to be effective in the immunosuppressive treatment of certain human and experimental autoimmune disorders. We have investigated the effects of TLI in Lewis rats with experimental autoimmune myasthenia gravis (EAMG) produced by immunization with purified torpedo acetylcholine receptor (AChR). The radiation is given in 17 divided fractions of 200 rad each, and nonlymphoid tissues are protected by lead shielding. This technique suppresses the immune system, while minimizing side effects, and permits the repopulation of the immune system by the patient's own bone marrow cells. Our results show that TLI treatment completely prevented the primary antibody response to immunization with torpedo AChR, it rapidly abolished the ongoing antibody response in established EAMG, and it suppressed the secondary (anamnestic) response to a boost of AChR. No EAMG animals died during TLI treatment, compared with six control animals that died of EAMG. TLI produces powerful and prompt immunosuppression and may eventually prove useful in the treatment of refractory human myasthenia gravis

  13. Treatment of experimental myasthenia gravis with total lymphoid irradiation

    Energy Technology Data Exchange (ETDEWEB)

    de Silva, S.; Blum, J.E.; McIntosh, K.R.; Order, S.; Drachman, D.B.

    1988-07-01

    Total lymphoid irradiation (TLI) has been reported to be effective in the immunosuppressive treatment of certain human and experimental autoimmune disorders. We have investigated the effects of TLI in Lewis rats with experimental autoimmune myasthenia gravis (EAMG) produced by immunization with purified torpedo acetylcholine receptor (AChR). The radiation is given in 17 divided fractions of 200 rad each, and nonlymphoid tissues are protected by lead shielding. This technique suppresses the immune system, while minimizing side effects, and permits the repopulation of the immune system by the patient's own bone marrow cells. Our results show that TLI treatment completely prevented the primary antibody response to immunization with torpedo AChR, it rapidly abolished the ongoing antibody response in established EAMG, and it suppressed the secondary (anamnestic) response to a boost of AChR. No EAMG animals died during TLI treatment, compared with six control animals that died of EAMG. TLI produces powerful and prompt immunosuppression and may eventually prove useful in the treatment of refractory human myasthenia gravis.

  14. Quantifying uncertainties in precipitation: a case study from Greece

    Directory of Open Access Journals (Sweden)

    C. Anagnostopoulou

    2008-04-01

    The main objective of the present study was the examination and quantification of the uncertainties in the precipitation time series over the Greek area for a 42-year period. The uncertainty index applied to the rainfall data is a combination (total) of the departures of the rainfall season length, of the median data of the accumulated percentages, and of the total amounts of rainfall. Results of the study indicated that all the stations are characterized, on an average basis, by medium to high uncertainty. The stations that presented an increasing rainfall uncertainty were mainly the ones located in the continental parts of the study region. The temporal analysis of the uncertainty index demonstrated that the greatest percentage of years, for all the station time series, was characterized by low to high uncertainty (the intermediate categories of the index). Most of the results of the uncertainty index for the Greek region are similar to the corresponding results of various stations all over the European region.
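
    One plausible reading of the composite index described above is a sum of normalized absolute departures of several rainfall statistics from their long-term medians. The sketch below implements that reading for illustration only; the statistics, values and normalization are assumptions, not the study's actual index definition.

```python
import numpy as np

# Yearly rainfall statistics for one station (illustrative values, not the study's data):
# season length [days], day-of-year by which 50 % of annual rain fell, annual total [mm].
years = np.array([
    [182, 145, 510.0],
    [160, 120, 430.0],
    [200, 170, 605.0],
    [150, 110, 380.0],
    [175, 150, 520.0],
])

# Departure of each statistic from its long-term median, normalized by that median,
# then summed into a single composite index per year.
medians = np.median(years, axis=0)
departures = np.abs(years - medians) / medians
index = departures.sum(axis=1)

for yr, val in enumerate(index, start=1):
    print(f"year {yr}: uncertainty index = {val:.2f}")
```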

  15. Ruminations On NDA Measurement Uncertainty Compared TO DA Uncertainty

    International Nuclear Information System (INIS)

    Salaymeh, S.; Ashley, W.; Jeffcoat, R.

    2010-01-01

    It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support: criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete, or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail suitable for the subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non-routine situations. This is because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to developing an understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice and embracing the continuous improvement philosophy, a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in nondestructive assay of nuclear materials. This paper outlines the needs analysis, objectives and on-going development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community, we hope this article will encourage dialog and sharing of best practice and furthermore motivate developers to revisit the treatment of measurement uncertainty.

  16. RUMINATIONS ON NDA MEASUREMENT UNCERTAINTY COMPARED TO DA UNCERTAINTY

    Energy Technology Data Exchange (ETDEWEB)

    Salaymeh, S.; Ashley, W.; Jeffcoat, R.

    2010-06-17

    It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete, or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail suitable for the subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non-routine situations. This is because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to developing understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice and embracing the continuous improvement philosophy, a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in nondestructive assay of nuclear materials. This paper outlines the needs analysis, objectives and ongoing development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community, we hope this article will encourage dialogue and the sharing of best practice, and furthermore motivate developers to revisit the treatment of measurement uncertainty.

  17. Ascertaining the uncertainty relations via quantum correlations

    International Nuclear Information System (INIS)

    Li, Jun-Li; Du, Kun; Qiao, Cong-Feng

    2014-01-01

    We propose a new scheme to express the uncertainty principle in the form of an inequality of the bipartite correlation functions for a given multipartite state, which provides an experimentally feasible and model-independent way to verify various uncertainty and measurement-disturbance relations. By virtue of this scheme, experimental measurements of the measurement-disturbance relation become practical for a variety of physical systems. The inequality, in turn, also imposes a constraint on the strength of correlation, i.e. it determines the maximum value of the correlation function for a two-body system and a monogamy relation of the bipartite correlation functions for a multipartite system. (paper)

  18. Uncertainty Characterization of Reactor Vessel Fracture Toughness

    International Nuclear Information System (INIS)

    Li, Fei; Modarres, Mohammad

    2002-01-01

    To perform a fracture mechanics analysis of a reactor vessel, the fracture toughness (K_Ic) at various temperatures is necessary. In a best-estimate approach, K_Ic uncertainties resulting from both lack of sufficient knowledge and randomness in some of the variables of K_Ic must be characterized. Although it may be argued that there is only one type of uncertainty, which is lack of perfect knowledge about the subject under study, as a matter of practice K_Ic uncertainties can be divided into two types: aleatory and epistemic. Aleatory uncertainty is uncertainty that is very difficult, if not impossible, to reduce; epistemic uncertainty, on the other hand, can be practically reduced. The distinction between aleatory and epistemic uncertainties facilitates decision-making under uncertainty and allows for proper propagation of uncertainties in the computation process. Typically, epistemic uncertainties, representing for example the parameters of a model, are sampled (to generate a 'snapshot', i.e. a single value of the parameters), while the totality of the aleatory uncertainties is carried through the calculation as available. In this paper a description of an approach to account for these two types of uncertainties associated with K_Ic is provided. (authors)
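
    For illustration, a minimal sketch (not from the paper) of the two-loop sampling scheme described above: each outer iteration draws one epistemic 'snapshot' of the model parameters, and the full aleatory scatter is carried through the inner loop. The toughness model and all distributions are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def kic_model(temperature, a, b, eps):
          # hypothetical fracture-toughness model: epistemic parameters (a, b),
          # aleatory scatter eps around the median curve
          return a + b * np.exp(0.02 * (temperature + 50.0)) * eps

      n_epistemic, n_aleatory = 200, 1000
      temperature = -20.0                      # deg C, example evaluation point

      results = np.empty((n_epistemic, n_aleatory))
      for i in range(n_epistemic):
          # outer loop: one epistemic "snapshot" (single value of the parameters)
          a = rng.normal(30.0, 3.0)            # assumed epistemic distribution
          b = rng.normal(40.0, 5.0)            # assumed epistemic distribution
          # inner loop: the totality of the aleatory scatter is carried through
          eps = rng.lognormal(mean=0.0, sigma=0.25, size=n_aleatory)
          results[i, :] = kic_model(temperature, a, b, eps)

      # each row is an aleatory distribution conditional on one epistemic snapshot;
      # the spread across rows reflects the epistemic contribution
      print(np.percentile(results, [5, 50, 95]))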

  19. Uncertainty in hydrological signatures

    Science.gov (United States)

    McMillan, Hilary; Westerberg, Ida

    2015-04-01

    magnitude and bias, and to test how uncertainty depended on the density of the raingauge network and flow gauging station characteristics. The uncertainties were sometimes large (i.e. typical intervals of ±10-40% relative uncertainty) and highly variable between signatures. Uncertainty in the mean discharge was around ±10% for both catchments, while signatures describing the flow variability had much higher uncertainties in the Mahurangi where there was a fast rainfall-runoff response and greater high-flow rating uncertainty. Event and total runoff ratios had uncertainties from ±10% to ±15% depending on the number of rain gauges used; precipitation uncertainty was related to interpolation rather than point uncertainty. Uncertainty distributions in these signatures were skewed, and meant that differences in signature values between these catchments were often not significant. We hope that this study encourages others to use signatures in a way that is robust to data uncertainty.

  20. Treatment and reporting of uncertainties for environmental radiation measurements

    International Nuclear Information System (INIS)

    Colle, R.

    1980-01-01

    Recommendations for a practical and uniform method for treating and reporting uncertainties in environmental radiation measurement data are presented. The method requires that each reported measurement result include the value, a total propagated random uncertainty expressed as the standard deviation, and a combined overall uncertainty. The uncertainty assessment should be as nearly complete as possible and should include every conceivable or likely source of inaccuracy in the result. Guidelines are given for estimating random and systematic uncertainty components, and for propagating and combining them to form an overall uncertainty.
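
    For illustration, a minimal sketch of the reporting scheme recommended above, combining a propagated random component with estimated systematic components into an overall uncertainty. The numerical values and the quadrature convention are placeholders, not part of the recommendation text.

      import math

      value = 12.4                        # reported measurement result (placeholder units)
      s_random = 0.35                     # total propagated random uncertainty (1 std. dev.)
      systematics = [0.20, 0.10, 0.05]    # estimated systematic components

      # combine the systematic components, then combine with the random component
      u_systematic = math.sqrt(sum(c ** 2 for c in systematics))
      u_overall = math.sqrt(s_random ** 2 + u_systematic ** 2)

      print(f"{value} +/- {s_random} (random), +/- {u_overall:.2f} (overall)")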

  1. The cerebellum and decision making under uncertainty.

    Science.gov (United States)

    Blackwood, Nigel; Ffytche, Dominic; Simmons, Andrew; Bentall, Richard; Murray, Robin; Howard, Robert

    2004-06-01

    This study aimed to identify the neural basis of probabilistic reasoning, a type of inductive inference that aids decision making under conditions of uncertainty. Eight normal subjects performed two separate two-alternative-choice tasks (the balls in a bottle and personality survey tasks) while undergoing functional magnetic resonance imaging (fMRI). The experimental conditions within each task were chosen so that they differed only in their requirement to make a decision under conditions of uncertainty (probabilistic reasoning and frequency determination required) or under conditions of certainty (frequency determination required). The same visual stimuli and motor responses were used in the experimental conditions. We provide evidence that the neo-cerebellum, in conjunction with the premotor cortex, inferior parietal lobule and medial occipital cortex, mediates the probabilistic inferences that guide decision making under uncertainty. We hypothesise that the neo-cerebellum constructs internal working models of uncertain events in the external world, and that such probabilistic models subserve the predictive capacity central to induction. Copyright 2004 Elsevier B.V.

  2. Assessing student understanding of measurement and uncertainty

    Science.gov (United States)

    Jirungnimitsakul, S.; Wattanakasiwich, P.

    2017-09-01

    The objectives of this study were to develop and assess student understanding of measurement and uncertainty. A test was adapted and translated from the Laboratory Data Analysis Instrument (LDAI) test; it consists of 25 questions focused on three topics: measures of central tendency, experimental errors and uncertainties, and fitting regression lines. The content validity of the test was evaluated by three physics experts in teaching physics laboratories. In the pilot study, the Thai LDAI was administered to 93 freshmen enrolled in a fundamental physics laboratory course. The final draft of the test was administered to three groups: 45 freshmen taking the fundamental physics laboratory, 16 sophomores taking the intermediate physics laboratory, and 21 juniors taking the advanced physics laboratory at Chiang Mai University. We found that the freshmen had difficulties with experimental errors and uncertainties, and most students had problems with fitting regression lines. These results will be used to improve the teaching and learning of physics laboratories for physics students in the department.

  3. A methodology for uncertainty analysis of reference equations of state

    DEFF Research Database (Denmark)

    Cheung, Howard; Frutiger, Jerome; Bell, Ian H.

    We present a detailed methodology for the uncertainty analysis of reference equations of state (EOS) based on Helmholtz energy. In recent years there has been an increased interest in uncertainties of property data and process models of thermal systems. In the literature there are various...... for uncertainty analysis is suggested as a tool for EOS. The uncertainties of the EOS properties are calculated from the experimental values and the EOS model structure through the parameter covariance matrix and subsequent linear error propagation. This allows reporting the uncertainty range (95% confidence...
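
    For illustration, a minimal sketch of linear error propagation through a parameter covariance matrix, as described above. The property model, parameter values and covariance matrix are placeholders, not the Helmholtz-energy EOS itself.

      import numpy as np

      def model(params, x):
          # placeholder property model y = f(params, x)
          a, b = params
          return a * x + b * x ** 2

      params = np.array([1.2, 0.8])                 # fitted parameters
      cov = np.array([[0.010, 0.002],               # parameter covariance matrix
                      [0.002, 0.005]])
      x = 2.5                                       # state point of interest

      # Jacobian of the output with respect to the parameters (finite differences)
      eps = 1e-6
      jac = np.array([(model(params + eps * np.eye(2)[i], x) - model(params, x)) / eps
                      for i in range(2)])

      u_y = np.sqrt(jac @ cov @ jac)                # propagated standard uncertainty
      print(model(params, x), "+/-", 1.96 * u_y, "(approx. 95% interval)")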

  4. Total cross sections for heavy flavour production at HERA

    CERN Document Server

    Frixione, Stefano; Nason, P; Ridolfi, G; Frixione, S; Mangano, M L; Nason, P; Ridolfi, G

    1995-01-01

    We compute total cross sections for charm and bottom photoproduction at HERA energies, and discuss the relevant theoretical uncertainties. In particular we discuss the problems arising from the small-x region, the uncertainties in the gluon parton density, and the uncertainties in the hadronic component of the cross section. Total electroproduction cross sections, calculated in the Weizsäcker-Williams approximation, are also given.

  5. Final Report, DOE Early Career Award: Predictive modeling of complex physical systems: new tools for statistical inference, uncertainty quantification, and experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2016-08-31

    Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in the support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.

  6. Verification of the uncertainty principle by using diffraction of light waves

    International Nuclear Information System (INIS)

    Nikolic, D; Nesic, Lj

    2011-01-01

    We described a simple idea for experimental verification of the uncertainty principle for light waves. We used single-slit diffraction of a laser beam to measure the angular width of the zero-order diffraction maximum and obtained the corresponding wave-number uncertainty, taking the uncertainty in position to be the slit width. For the acquisition of the experimental data and their further analysis, we used a computer. Because of its simplicity, this experiment is very suitable for demonstrations, as well as for a quantitative exercise at universities and in the final year of high school.
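
    For illustration, a minimal numerical sketch of the estimate described above. The laser wavelength, slit width and screen distance are illustrative values, not those of the experiment.

      import math

      wavelength = 632.8e-9     # m, He-Ne laser (illustrative)
      slit_width = 0.10e-3      # m, taken as the position uncertainty delta_x
      screen_distance = 2.0     # m

      # half-width of the zero-order maximum on the screen, from sin(theta) = lambda / a
      theta = math.asin(wavelength / slit_width)
      half_width = screen_distance * math.tan(theta)

      # transverse wave-number uncertainty inferred from the measured angular width
      k = 2.0 * math.pi / wavelength
      delta_k = k * math.sin(math.atan(half_width / screen_distance))

      product = slit_width * delta_k    # of order 2*pi, comfortably above the 1/2 bound
      print(f"delta_x * delta_k = {product:.2f}  (2*pi = {2 * math.pi:.2f})")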

  7. Uncertainty analysis

    International Nuclear Information System (INIS)

    Thomas, R.E.

    1982-03-01

    An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software
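
    For background, a minimal sketch of a conventional adjoint sensitivity calculation for a linear model Ax = b (the standard construction, not the reformulation proposed in the report). The matrix, source and response functional are made up for illustration.

      import numpy as np

      A = np.array([[4.0, 1.0, 0.0],
                    [1.0, 3.0, 1.0],
                    [0.0, 1.0, 2.0]])
      b = np.array([1.0, 2.0, 0.5])
      c = np.array([0.0, 0.0, 1.0])          # response of interest: R = c . x

      x = np.linalg.solve(A, b)
      R = c @ x

      # one adjoint solve gives the sensitivity of R to every entry of b at once
      lam = np.linalg.solve(A.T, c)          # A^T lambda = c  =>  dR/db_i = lambda_i
      print("R =", R, " dR/db =", lam)

      # spot-check one coefficient by direct perturbation
      eps = 1e-6
      b_pert = b.copy()
      b_pert[0] += eps
      print("finite-difference dR/db_0 =", (c @ np.linalg.solve(A, b_pert) - R) / eps)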

  8. Learning about Measurement Uncertainties in Secondary Education: A Model of the Subject Matter

    Science.gov (United States)

    Priemer, Burkhard; Hellwig, Julia

    2018-01-01

    Estimating measurement uncertainties is important for experimental scientific work. However, this is very often neglected in school curricula and teaching practice, even though experimental work is seen as a fundamental part of teaching science. In order to call attention to the relevance of measurement uncertainties, we developed a comprehensive…

  9. The EURACOS activation experiments: preliminary uncertainty analysis

    International Nuclear Information System (INIS)

    Yeivin, Y.

    1982-01-01

    A sequence of counting rates of an irradiated sulphur pellet, r(t_i), measured at different times after the end of the irradiation, is fitted to r(t) = A·exp(-λt) + B. A standard adjustment procedure is applied to determine the parameters A and B, their standard deviations and correlation, and chi-square. It is demonstrated that if the counting-rate uncertainties are entirely due to counting statistics, the experimental data are totally inconsistent with the 'theoretical' model. However, assuming an additional systematic error of approximately 1%, and eliminating a few 'bad' data points, produces a data set quite consistent with the model. The dependence of chi-square on the assumed systematic error and on the data elimination procedure is discussed in great detail. A review of the adjustment procedure is appended to the report.
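
    For illustration, a minimal sketch of the adjustment described above: fitting r(t) = A·exp(-λt) + B to counting rates weighted by their statistical uncertainties and inspecting chi-square. The data are synthetic, and treating λ as the known 32P decay constant is an assumption.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(1)
      lam = np.log(2.0) / 14.3                     # 1/day, 32P half-life (assumed known)

      def model(t, A, B):
          return A * np.exp(-lam * t) + B

      t = np.linspace(0.0, 60.0, 15)               # days after the end of irradiation
      true = model(t, 500.0, 20.0)
      sigma = np.sqrt(true)                        # counting-statistics uncertainty
      r = true + rng.normal(0.0, sigma)            # synthetic counting rates

      popt, pcov = curve_fit(model, t, r, p0=(400.0, 10.0), sigma=sigma,
                             absolute_sigma=True)
      chi2 = np.sum(((r - model(t, *popt)) / sigma) ** 2)
      print("A, B =", popt, " std. dev. =", np.sqrt(np.diag(pcov)))
      print("chi-square per degree of freedom =", chi2 / (len(t) - 2))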

  10. Uncertainty analysis of the FRAP code

    International Nuclear Information System (INIS)

    Peck, S.O.

    1978-01-01

    A user-oriented, automated uncertainty analysis capability has been built into the Fuel Rod Analysis Program (FRAP) code and has been applied to a pressurized water reactor (PWR) fuel rod undergoing a loss-of-coolant accident (LOCA). The method of uncertainty analysis is the response surface method. The automated version significantly reduced the time required to complete the analysis and, at the same time, greatly increased the problem scope. Results of the analysis showed a significant difference in the total and relative contributions to the uncertainty of the response parameters between steady-state and transient conditions.

  11. Neutronics experiments for uncertainty assessment of tritium breeding in HCPB and HCLL blanket mock-ups irradiated with 14 MeV neutrons

    International Nuclear Information System (INIS)

    Batistoni, P.; Angelone, M.; Pillon, M.; Villari, R.; Fischer, U.; Klix, A.; Leichtle, D.; Kodeli, I.; Pohorecki, W.

    2012-01-01

    Two neutronics experiments have been carried out at 14 MeV neutron sources on mock-ups of the helium-cooled pebble bed (HCPB) and the helium-cooled lithium lead (HCLL) variants of ITER test blanket modules (TBMs). These experiments have provided an experimental validation of the calculations of the tritium production rate (TPR) in the two blanket concepts and an assessment of the uncertainties due to the uncertainties in nuclear data. This paper provides a brief summary of the HCPB experiment and then focuses in particular on the final results of the HCLL experiment. The TPR has been measured in the HCLL mock-up irradiated for long times at the Frascati 14 MeV Neutron Generator (FNG). Redundant and well-assessed experimental techniques have been used by different teams to measure the TPR for inter-comparison. Measurements of the neutron and gamma-ray spectra have also been performed. The analysis of the experiment, carried out with the MCNP code and the FENDL-2.1 and JEFF-3.1.1 nuclear data libraries, and also including sensitivity/uncertainty analysis, shows good agreement between measurements and calculations, within the total uncertainty of 5.9% at the 1σ level. (paper)

  12. A comprehensive combined experimental and computational framework for pre-clinical wear simulation of total knee replacements.

    Science.gov (United States)

    Abdelgaied, A; Fisher, J; Jennings, L M

    2018-02-01

    A more robust pre-clinical wear simulation framework is required in order to simulate the wider and higher ranges of activities observed in different patient populations, such as younger, more active patients. Such a framework will help to understand and address the reported higher failure rates for younger and more active patients (National Joint Registry, 2016). The current study has developed and validated a comprehensive combined experimental and computational framework for pre-clinical wear simulation of total knee replacements (TKR). The input mechanical (elastic modulus and Poisson's ratio) and wear parameters of the moderately cross-linked ultra-high molecular weight polyethylene (UHMWPE) bearing material were independently measured from experimental studies under realistic test conditions, similar to the loading conditions found in total knee replacements. The wear predictions from the computational wear simulation were validated against direct experimental wear measurements for size 3 Sigma curved total knee replacements (DePuy, UK) in an independent experimental wear simulation study under three different daily activities: walking, deep squat, and stair-ascending kinematic conditions. The measured compressive mechanical properties of the moderately cross-linked UHMWPE material were more than 20% lower than those reported in the literature under tensile test conditions. The pin-on-plate wear coefficient of moderately cross-linked UHMWPE was significantly dependent on the contact stress and the degree of cross-shear at the articulating surfaces. The computational wear predictions for the TKR from the current framework were consistent and in good agreement with the independent full-TKR experimental wear simulation measurements, with a coefficient of determination of 0.94 for the framework. In addition, the comprehensive combined experimental and computational framework was able to explain the complex experimental wear trends from the three different daily

  13. Uncertainty Quantification Reveals the Importance of Data Variability and Experimental Design Considerations for in Silico Proarrhythmia Risk Assessment

    Directory of Open Access Journals (Sweden)

    Kelly C. Chang

    2017-11-01

    Full Text Available The Comprehensive in vitro Proarrhythmia Assay (CiPA is a global initiative intended to improve drug proarrhythmia risk assessment using a new paradigm of mechanistic assays. Under the CiPA paradigm, the relative risk of drug-induced Torsade de Pointes (TdP is assessed using an in silico model of the human ventricular action potential (AP that integrates in vitro pharmacology data from multiple ion channels. Thus, modeling predictions of cardiac risk liability will depend critically on the variability in pharmacology data, and uncertainty quantification (UQ must comprise an essential component of the in silico assay. This study explores UQ methods that may be incorporated into the CiPA framework. Recently, we proposed a promising in silico TdP risk metric (qNet, which is derived from AP simulations and allows separation of a set of CiPA training compounds into Low, Intermediate, and High TdP risk categories. The purpose of this study was to use UQ to evaluate the robustness of TdP risk separation by qNet. Uncertainty in the model parameters used to describe drug binding and ionic current block was estimated using the non-parametric bootstrap method and a Bayesian inference approach. Uncertainty was then propagated through AP simulations to quantify uncertainty in qNet for each drug. UQ revealed lower uncertainty and more accurate TdP risk stratification by qNet when simulations were run at concentrations below 5× the maximum therapeutic exposure (Cmax. However, when drug effects were extrapolated above 10× Cmax, UQ showed that qNet could no longer clearly separate drugs by TdP risk. This was because for most of the pharmacology data, the amount of current block measured was <60%, preventing reliable estimation of IC50-values. The results of this study demonstrate that the accuracy of TdP risk prediction depends both on the intrinsic variability in ion channel pharmacology data as well as on experimental design considerations that preclude an
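
    For illustration, a minimal sketch of the non-parametric bootstrap step described above, applied to a simple Hill-equation fit of fractional current block. The drug, concentrations and parameter values are hypothetical, and this is not the CiPA model itself.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(42)

      def hill(conc, ic50, h):
          # fractional current block as a function of drug concentration
          return 1.0 / (1.0 + (ic50 / conc) ** h)

      conc = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # uM, hypothetical
      block = hill(conc, 2.0, 1.0) + rng.normal(0.0, 0.05, size=conc.size)

      # non-parametric bootstrap: resample (conc, block) pairs with replacement
      ic50_samples = []
      for _ in range(2000):
          idx = rng.integers(0, conc.size, size=conc.size)
          try:
              popt, _ = curve_fit(hill, conc[idx], block[idx], p0=(1.0, 1.0),
                                  maxfev=2000)
              ic50_samples.append(popt[0])
          except RuntimeError:
              continue                             # skip resamples that fail to converge

      lo, hi = np.percentile(ic50_samples, [2.5, 97.5])
      print(f"IC50 95% bootstrap interval: {lo:.2f} - {hi:.2f} uM")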

  14. Comparison of total experimental and theoretical absolute γ-ray detection efficiencies of a cylindrical NaI(Tl) crystal

    International Nuclear Information System (INIS)

    Uosif, M.A.; El-Taher, A.

    2005-01-01

    A new fit function has been developed to calculate theoretically the absolute gamma-ray detection efficiency (ηTh) of a cylindrical NaI(Tl) crystal at any gamma energy of interest in the range between 10 and 1300 keV and at any source-detector distance between 0 and 8 cm. The total absolute gamma-ray detection efficiencies have been calculated for five detectors, four with 2x2 inch and one with 3x3 inch NaI(Tl) crystals, at different distances. The absolute efficiency of the different detectors was calculated at the specific energies of the standard sources for each measuring distance. In this calculation, both experimental (ηExp) and theoretical (ηTh) efficiencies have been obtained. The uncertainties of the efficiency calibration have also been calculated for quality control. Measurements were performed with calibrated point sources. Gamma-ray energies under consideration were 0.356, 0.662, 1.17 and 1.33 MeV. The differences between (ηExp) and (ηTh) at these energies are 1.30E-06, 7.99E-05, 2.29E-04 and 2.42E-04, respectively. The results obtained on the basis of (ηExp) and (ηTh) seem to be in very good agreement.
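
    For orientation, a minimal sketch of how an experimental absolute efficiency point is commonly obtained from a calibrated point source, with a simple uncertainty propagation. The counts, live time, activity, emission probability and certificate uncertainty are placeholders; this is not the fit function developed in the paper.

      # placeholder numbers for a single energy / distance point
      net_counts = 152_340       # background-subtracted counts in the spectrum
      live_time = 600.0          # s
      activity = 3.1e4           # Bq, certified source activity on the measurement day
      gamma_yield = 0.851        # photons per decay (e.g. the 662 keV line)

      eta_exp = net_counts / (activity * live_time * gamma_yield)

      # relative-uncertainty propagation: counting statistics plus source activity
      u_counts = net_counts ** 0.5 / net_counts
      u_activity = 0.02          # assumed 2% certificate uncertainty
      u_eta = eta_exp * (u_counts ** 2 + u_activity ** 2) ** 0.5
      print(f"eta_exp = {eta_exp:.4f} +/- {u_eta:.4f}")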

  15. Total Evidence, Uncertainty and A Priori Beliefs

    NARCIS (Netherlands)

    Bewersdorf, Benjamin; Felline, Laura; Ledda, Antonio; Paoli, Francesco; Rossanese, Emanuele

    2016-01-01

    Defining the rational belief state of an agent in terms of her initial or a priori belief state as well as her total evidence can help to address a number of important philosophical problems. In this paper, I discuss how this strategy can be applied to cases in which evidence is uncertain. I argue

  16. Similarity and uncertainty analysis of the ALLEGRO MOX core

    International Nuclear Information System (INIS)

    Vrban, B.; Hascik, J.; Necas, V.; Slugen, V.

    2015-01-01

    The similarity and uncertainty analysis of the ESNII+ ALLEGRO MOX core has identified specific problems and challenges in the field of neutronic calculations. The similarity assessment identified 9 partly comparable experiments, of which only one reached c_k and E values over 0.9. However, the global integral index G remains low (0.75) and cannot be judged as sufficient. The total uncertainty of the calculated k_eff induced by cross-section (XS) data is, according to our calculation, 1.04%. The main contributors to this uncertainty are the 239Pu nubar and 238U inelastic scattering. The additional margin from uncovered sensitivities was determined to be 0.28%. The identified low number of similar experiments prevents the use of advanced XS adjustment and bias estimation methods. More experimental data are needed, and the presented results may serve as a basic step in the development of the necessary critical assemblies. Although exact data are not presented in the paper, the faster 44-energy-group calculation gives almost the same results in the similarity analysis as the more complex 238-group calculation. Finally, it was demonstrated that the TSUNAMI-IP utility can play a significant role in future fast reactor development in Slovakia and in the Visegrad region. Clearly, further research and development and a strong effort should be carried out in order to arrive at a more complete methodology consisting of more plausible covariance data and related quantities. (authors)

  17. Effect of uncertainty in pore volumes on the uncertainty in amount adsorbed at high-pressures on activated carbon cloth

    International Nuclear Information System (INIS)

    Pendleton, Ph.; Badalyan, A.

    2005-01-01

    Activated carbon cloth (ACC) is a good adsorbent for high-rate adsorption of volatile organic compounds [1] and as a storage medium for methane [2]. It has been shown [2] that the capacity of ACC to adsorb methane depends, in the first instance, on its micropore volume. One way of increasing this storage capacity is to increase the micropore volume [3]. Therefore, the uncertainty in the determination of the ACC micropore volume becomes a very important factor, since it affects the uncertainty of the amount adsorbed at the high pressures which usually accompany storage of methane on ACC. Recently, we developed a method for the calculation of the experimental uncertainty in micropore volume using low-pressure nitrogen adsorption data at 77 K for FM1/250 ACC (ex. Calgon, USA). We tested several cubic equations of state (EOS) and multiple-parameter EOS to determine the amount of high-pressure nitrogen adsorbed, and compared these data with amounts calculated via interpolated NIST density data. The amount adsorbed calculated from interpolated NIST density data exhibits the lowest propagated combined uncertainty. Values of the relative combined standard uncertainty for FM1/250, calculated using a weighted mean-least-squares method applied to the low-pressure nitrogen adsorption data (Fig. 1), were 3.52% for the primary micropore volume and 1.63% for the total micropore volume. Our equipment allows the same sample to be exposed to nitrogen (and other gases) at pressures from 10^-4 Pa to 17 MPa in the temperature range from 176 to 252 K. The maximum uptake of nitrogen was 356 mmol/g at 201.92 K and 15.8 MPa (Fig. 2). The delivery capacity of ACC is determined by the amount of adsorbed gas recovered when the pressure is reduced from that for maximum adsorption to 0.1 MPa [2]. In this regard, the total micropore volume becomes an important parameter in determining the amount of gas delivered during desorption. In the present paper we will discuss the effect of uncertainty in micropore volume

  18. Evaluating measurement uncertainty in fluid phase equilibrium calculations

    Science.gov (United States)

    van der Veen, Adriaan M. H.

    2018-04-01

    The evaluation of measurement uncertainty in accordance with the ‘Guide to the expression of uncertainty in measurement’ (GUM) has not yet become widespread in physical chemistry. With only the law of the propagation of uncertainty from the GUM, many of these uncertainty evaluations would be cumbersome, as models are often non-linear and require iterative calculations. The methods from GUM supplements 1 and 2 enable the propagation of uncertainties under most circumstances. Experimental data in physical chemistry are used, for example, to derive reference property data and support trade—all applications where measurement uncertainty plays an important role. This paper aims to outline how the methods for evaluating and propagating uncertainty can be applied to some specific cases with a wide impact: deriving reference data from vapour pressure data, a flash calculation, and the use of an equation-of-state to predict the properties of both phases in a vapour-liquid equilibrium. The three uncertainty evaluations demonstrate that the methods of GUM and its supplements are a versatile toolbox that enable us to evaluate the measurement uncertainty of physical chemical measurements, including the derivation of reference data, such as the equilibrium thermodynamical properties of fluids.
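
    For illustration, a minimal sketch of a GUM Supplement 1 style Monte Carlo propagation applied to a simple Antoine-type vapour-pressure expression. The coefficients, their uncertainties and the temperature are placeholders, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      M = 200_000                        # number of Monte Carlo trials

      # Antoine equation log10(p/kPa) = A - B/(T + C), placeholder coefficients
      A = rng.normal(7.20, 0.02, M)
      B = rng.normal(1750.0, 5.0, M)
      C = rng.normal(-38.0, 0.5, M)
      T = rng.normal(350.0, 0.05, M)     # temperature in K with its standard uncertainty

      p = 10.0 ** (A - B / (T + C))      # propagate through the non-linear model

      lo, hi = np.percentile(p, [2.5, 97.5])
      print(f"p = {p.mean():.1f} kPa, u(p) = {p.std(ddof=1):.1f} kPa, "
            f"95% interval [{lo:.1f}, {hi:.1f}] kPa")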

  19. Sensitivity/uncertainty analysis for the Hiroshima dosimetry reevaluation effort

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Lillie, R.A.; Pace, J.V. III; Cacuci, D.G.

    1987-01-01

    Uncertainty estimates and cross correlations by range/survivor location have been obtained for the free-in-air (FIA) tissue kerma for the Hiroshima atomic event. These uncertainties in the FIA kerma include contributions due to various modeling parameters and the basic cross section data and are given at three ground ranges, 700, 1000 and 1500 m. The estimated uncertainties are nearly constant over the given ground ranges and are approximately 27% for the prompt neutron kerma and secondary gamma kerma and 35% for the prompt gamma kerma. The total kerma uncertainty is dominated by the secondary gamma kerma uncertainties which are in turn largely due to the modeling parameter uncertainties

  20. Uncertainty contributions to low flow projections in Austria

    Science.gov (United States)

    Parajka, J.; Blaschke, A. P.; Blöschl, G.; Haslinger, K.; Hepp, G.; Laaha, G.; Schöner, W.; Trautvetter, H.; Viglione, A.; Zessner, M.

    2015-11-01

    The main objective of the paper is to understand the contributions to the uncertainty in low flow projections resulting from hydrological model uncertainty and climate projection uncertainty. Model uncertainty is quantified by different parameterizations of a conceptual semi-distributed hydrologic model (TUWmodel) using 11 objective functions in three different decades (1976-1986, 1987-1997, 1998-2008), which allows disentangling the effect of modeling uncertainty and temporal stability of model parameters. Climate projection uncertainty is quantified by four future climate scenarios (ECHAM5-A1B, A2, B1 and HADCM3-A1B) using a delta change approach. The approach is tested for 262 basins in Austria. The results indicate that the seasonality of the low flow regime is an important factor affecting the performance of model calibration in the reference period and the uncertainty of Q95 low flow projections in the future period. In Austria, the calibration uncertainty in terms of Q95 is larger in basins with a summer low flow regime than in basins with a winter low flow regime. Using different calibration periods may result in a range of up to 60 % in simulated Q95 low flows. The low flow projections show an increase of low flows in the Alps, typically in the range of 10-30 %, and a decrease in the south-eastern part of Austria, mostly in the range of -5 to -20 %, for the period 2021-2050 relative to the reference period 1976-2008. The change in seasonality varies between scenarios, but there is a tendency for earlier low flows in the Northern Alps and later low flows in Eastern Austria. In 85 % of the basins, the uncertainty in Q95 from model calibration is larger than the uncertainty from different climate scenarios. The total uncertainty of Q95 projections is largest in basins with a winter low flow regime and, in some basins, exceeds 60 %. In basins with summer low flows, the total uncertainty is mostly less than 20 %. While the calibration uncertainty dominates over climate

  1. Extensive neutronic sensitivity-uncertainty analysis of a fusion reactor shielding blanket

    International Nuclear Information System (INIS)

    Hogenbirk, A.

    1994-01-01

    In this paper the results are presented of an extensive neutronic sensitivity-uncertainty study performed for the design of a shielding blanket for a next-step fusion reactor, such as ITER. A code system was used, which was developed at ECN Petten. The uncertainty in an important response parameter, the neutron heating in the inboard superconducting coils, was evaluated. Neutron transport calculations in the 100 neutron group GAM-II structure were performed using the code ANISN. For the sensitivity and uncertainty calculations the code SUSD was used. Uncertainties due to cross-section uncertainties were taken into account as well as uncertainties due to uncertainties in energy and angular distributions of scattered neutrons (SED and SAD uncertainties, respectively). The subject of direct-term uncertainties (i.e. uncertainties due to uncertainties in the kerma factors of the superconducting coils) is briefly touched upon. It is shown that SAD uncertainties, which have been largely neglected until now, contribute significantly to the total uncertainty. Moreover, the contribution of direct-term uncertainties may be large. The total uncertainty in the neutron heating, only due to Fe cross-sections, amounts to approximately 25%, which is rather large. However, uncertainty data are scarce and the data may very well be conservative. It is shown in this paper that with the code system used, sensitivity and uncertainty calculations can be performed in a straightforward way. Therefore, it is suggested that emphasis is now put on the generation of realistic, reliable covariance data for cross-sections as well as for angular and energy distributions. ((orig.))

  2. SU-G-BRB-14: Uncertainty of Radiochromic Film Based Relative Dose Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Devic, S; Tomic, N; DeBlois, F; Seuntjens, J [McGill University, Montreal, QC (Canada); Lewis, D [RCF Consulting, LLC, Monroe, CT (United States); Aldelaijan, S [King Faisal Specialist Hospital & Research Center, Riyadh (Saudi Arabia)

    2016-06-15

    Purpose: Due to the inherently non-linear dose response, measurement of a relative dose distribution with radiochromic film requires measurement of absolute dose using a calibration curve, following a previously established reference dosimetry protocol. On the other hand, a functional form that converts the inherently non-linear dose response curve of the radiochromic film dosimetry system into a linear one has been proposed recently [Devic et al, Med. Phys. 39 4850–4857 (2012)]. However, the question remains what the uncertainty of such a measured relative dose would be. Methods: If the relative dose distribution is determined by going through the reference dosimetry system (conversion of the response into absolute dose by using the calibration curve), the total uncertainty of the relative dose so determined is calculated by summing in quadrature the total uncertainties of the doses measured at a given point and at the reference point. On the other hand, if the relative dose is determined using the linearization method, the new response variable is calculated as ζ = a(netOD)^n/ln(netOD). In this case, the total uncertainty in relative dose is calculated by summing in quadrature the uncertainties of the new response function (σζ) at a given point and at the reference point. Results: Except at very low doses, where the measurement uncertainty dominates, the total relative dose uncertainty is less than 1% for the linear response method, compared to an almost 2% uncertainty level for the reference dosimetry method. The result is not surprising, bearing in mind that the total uncertainty of the reference dose method is dominated by the fitting uncertainty, which is mitigated in the case of the linearization method. Conclusion: Linearization of the radiochromic film dose response provides a convenient and more precise method for relative dose measurements, as it does not require reference dosimetry and the creation of a calibration curve. However, the linearity of the newly introduced function must be verified. Dave Lewis
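
    For illustration, a minimal numerical sketch of the two quadrature combinations described above. All uncertainty values are placeholders, and the linearized response enters only through its stated definition ζ = a(netOD)^n/ln(netOD).

      import math

      def combined_relative_uncertainty(u_point, u_ref):
          # relative uncertainty of a ratio: the two components summed in quadrature
          return math.sqrt(u_point ** 2 + u_ref ** 2)

      # (a) reference-dosimetry route: absolute-dose uncertainties at the
      #     measurement point and at the reference point (placeholders)
      u_dose_point, u_dose_ref = 0.015, 0.012
      print("dose route:      ", combined_relative_uncertainty(u_dose_point, u_dose_ref))

      # (b) linearization route: the same quadrature sum applied to the
      #     uncertainties of the linearized response zeta at the two points
      u_zeta_point, u_zeta_ref = 0.006, 0.005
      print("linearized route:", combined_relative_uncertainty(u_zeta_point, u_zeta_ref))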

  3. Delivered dose uncertainty analysis at the tumor apex for ocular brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, Hali, E-mail: hamorris@ualberta.ca; Menon, Geetha; Larocque, Matthew P.; Jans, Hans-Sonke; Sloboda, Ron S. [Department of Medical Physics, Cross Cancer Institute, Edmonton, Alberta T6G 1Z2, Canada and Department of Oncology, University of Alberta, Edmonton, Alberta T6G 2R3 (Canada); Weis, Ezekiel [Department of Ophthalmology, University of Alberta, Edmonton, Alberta T6G 2R3 (Canada)

    2016-08-15

    Purpose: To estimate the total dosimetric uncertainty at the tumor apex for ocular brachytherapy treatments delivered using 16 mm Collaborative Ocular Melanoma Study (COMS) and Super9 plaques loaded with 125I seeds, in order to determine the size of the apex margin that would be required to ensure adequate dosimetric coverage of the tumor. Methods: The total dosimetric uncertainty was assessed for three reference tumor heights: 3, 5, and 10 mm, using the Guide to the Expression of Uncertainty in Measurement/National Institute of Standards and Technology approach. Uncertainties pertaining to seed construction, source strength, plaque assembly, treatment planning calculations, tumor height measurement, plaque placement, and plaque tilt for a simple dome-shaped tumor were investigated and quantified to estimate the total dosimetric uncertainty at the tumor apex. Uncertainties in seed construction were determined using EBT3 Gafchromic film measurements around single seeds, plaque assembly uncertainties were determined using high-resolution microCT scanning of loaded plaques to measure seed positions in the plaques, and all other uncertainties were determined from previously published studies and recommended values. All dose calculations were performed using the PLAQUESIMULATOR v5.7.6 ophthalmic treatment planning system with the inclusion of plaque heterogeneity corrections. Results: The total dosimetric uncertainties at 3, 5, and 10 mm tumor heights for the 16 mm COMS plaque were 17.3%, 16.1%, and 14.2%, respectively, and for the Super9 plaque were 18.2%, 14.4%, and 13.1%, respectively (all values with coverage factor k = 2). The apex margins at 3, 5, and 10 mm tumor heights required to adequately account for these uncertainties were 1.3, 1.3, and 1.4 mm, respectively, for the 16 mm COMS plaque, and 1.8, 1.4, and 1.2 mm, respectively, for the Super9 plaque. These uncertainties and associated margins are dependent on the dose gradient at the given prescription

  4. Quantification of the impact of precipitation spatial distribution uncertainty on predictive uncertainty of a snowmelt runoff model

    Science.gov (United States)

    Jacquin, A. P.

    2012-04-01

    This study is intended to quantify the impact of uncertainty about the precipitation spatial distribution on the predictive uncertainty of a snowmelt runoff model. This problem is especially relevant in mountain catchments with a sparse precipitation observation network and relatively short precipitation records. The model analysed is a conceptual watershed model operating at a monthly time step. The model divides the catchment into five elevation zones, where the fifth zone corresponds to the catchment's glaciers. Precipitation amounts at each elevation zone i are estimated as the product between the observed precipitation at a station and a precipitation factor FPi. If other precipitation data are not available, these precipitation factors must be adjusted during the calibration process and are thus seen as parameters of the model. In the case of the fifth zone, glaciers are seen as an inexhaustible source of water that melts when the snow cover is depleted. The catchment case study is the Aconcagua River at Chacabuquito, located in the Andean region of Central Chile. The model's predictive uncertainty is measured in terms of the output variance of the mean squared error of the Box-Cox transformed discharge, the relative volumetric error, and the weighted average of snow water equivalent in the elevation zones at the end of the simulation period. Sobol's variance decomposition (SVD) method is used for assessing the impact of the precipitation spatial distribution, represented by the precipitation factors FPi, on the model's predictive uncertainty. In the SVD method, the first-order effect of a parameter (or group of parameters) indicates the fraction of predictive uncertainty that could be reduced if the true value of this parameter (or group) was known. Similarly, the total effect of a parameter (or group) measures the fraction of predictive uncertainty that would remain if the true value of this parameter (or group) was unknown, but all the remaining model parameters could be fixed.
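
    For background, a minimal sketch of Sobol first-order and total-effect indices for a toy two-parameter model, using common pick-freeze estimators. The model and parameter ranges are illustrative stand-ins, not the snowmelt runoff model itself.

      import numpy as np

      rng = np.random.default_rng(7)

      def model(fp1, fp2):
          # toy stand-in for a watershed-model output (e.g. a volumetric error)
          return fp1 ** 2 + 0.5 * fp1 * fp2 + 0.1 * fp2

      N = 100_000
      A = rng.uniform(0.5, 1.5, size=(N, 2))    # sample matrix A of (FP1, FP2)
      B = rng.uniform(0.5, 1.5, size=(N, 2))    # independent sample matrix B

      yA = model(A[:, 0], A[:, 1])
      yB = model(B[:, 0], B[:, 1])
      var_y = np.var(np.concatenate([yA, yB]), ddof=1)

      for i, name in enumerate(["FP1", "FP2"]):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                   # column i replaced by values from B
          yABi = model(ABi[:, 0], ABi[:, 1])
          s1 = np.mean(yB * (yABi - yA)) / var_y            # first-order (Saltelli)
          st = 0.5 * np.mean((yA - yABi) ** 2) / var_y      # total effect (Jansen)
          print(f"{name}: first-order = {s1:.3f}, total = {st:.3f}")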

  5. Uncertainty Analyses and Strategy

    International Nuclear Information System (INIS)

    Kevin Coppersmith

    2001-01-01

    The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called 'conservative' assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the 'reasonable assurance' approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository

  6. Experimental Evaluation of a Total Heat Recovery Unit with Polymer Membrane Foils

    DEFF Research Database (Denmark)

    Fang, Lei; Yuan, Shu; Nie, Jinzhe

    2014-01-01

    A laboratory experimental study was conducted to investigate the energy performance of a total heat recovery unit using a polymer membrane heat exchanger. The study was conducted in twin climate chambers. One of the chambers simulated outdoor climate conditions and the other simulated the climate condition indoors. The airflows taken from the two chambers were connected into the total heat recovery unit and exchanged heat in a polymer membrane foil heat exchanger installed inside the unit. The temperature and humidity of the air upstream and downstream of the heat exchanger were measured. Based on the measured temperature and humidity values, the temperature, humidity, and enthalpy efficiencies of the total heat recovery unit were calculated. The experiment was conducted in different combinations of outdoor climate conditions simulating warm and humid outdoor climates and air-conditioned indoor climate...

  7. Validation uncertainty of MATRA code for subchannel void distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae-Hyun; Kim, S. J.; Kwon, H.; Seo, K. W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    To extend the code capability to whole-core subchannel analysis, pre-conditioned Krylov matrix solvers such as BiCGSTAB and GMRES have been implemented in the MATRA code, as well as parallel computing algorithms using MPI and OpenMP. The code is written in Fortran 90 and has some user-friendly features such as a graphical user interface. The MATRA code was approved by the Korean regulatory body for design calculations of the integral-type PWR named SMART. The major role of a subchannel code is to evaluate the core thermal margin through hot channel analysis and uncertainty evaluation for CHF predictions. In addition, it is potentially used for the best estimation of the core thermal-hydraulic field by incorporation into multiphysics and/or multi-scale code systems. In this study we examined a validation process for the subchannel code MATRA, specifically in the prediction of subchannel void distributions. The primary objective of validation is to estimate a range within which the simulation modeling error lies. The experimental data for subchannel void distributions at steady-state and transient conditions were provided in the framework of the OECD/NEA UAM benchmark program. The validation uncertainty of the MATRA code was evaluated for a specific experimental condition by comparing the simulation result and experimental data. A validation process should be preceded by code and solution verification. However, quantification of the verification uncertainty was not addressed in this study. The validation uncertainty of the MATRA code for predicting subchannel void distribution was evaluated for a single data point of void fraction measurement in a 5x5 PWR test bundle in the framework of the OECD UAM benchmark program. The validation standard uncertainties were evaluated as 4.2%, 3.9%, and 2.8% with the Monte-Carlo approach at the axial levels of 2216 mm, 2669 mm, and 3177 mm, respectively. The sensitivity coefficient approach revealed similar uncertainties but did not account for the nonlinear effects on the

  8. Modeling Uncertainty in Climate Change: A Multi-Model Comparison

    Energy Technology Data Exchange (ETDEWEB)

    Gillingham, Kenneth; Nordhaus, William; Anthoff, David; Blanford, Geoffrey J.; Bosetti, Valentina; Christensen, Peter; McJeon, Haewon C.; Reilly, J. M.; Sztorc, Paul

    2015-10-01

    The economics of climate change involves a vast array of uncertainties, complicating both the analysis and development of climate policy. This study presents the results of the first comprehensive study of uncertainty in climate change using multiple integrated assessment models. The study looks at model and parametric uncertainties for population, total factor productivity, and climate sensitivity and estimates the pdfs of key output variables, including CO2 concentrations, temperature, damages, and the social cost of carbon (SCC). One key finding is that parametric uncertainty is more important than uncertainty in model structure. Our resulting pdfs also provide insight on tail events.

  9. OR14-V-Uncertainty-PD2La Uncertainty Quantification for Nuclear Safeguards and Nondestructive Assay Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Nicholson, Andrew D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Croft, Stephen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); McElroy, Robert Dennis [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-08-01

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically provide error bars and also partition total uncertainty into “random” and “systematic” components so that, for example, an error bar can be developed for the total mass estimate in multiple items. Uncertainty Quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods.

  10. A real-time assessment of measurement uncertainty in the experimental characterization of sprays

    International Nuclear Information System (INIS)

    Panão, M R O; Moreira, A L N

    2008-01-01

    This work addresses the estimation of the measurement uncertainty of discrete probability distributions used in the characterization of sprays. A real-time assessment of this measurement uncertainty is further investigated, particularly concerning the informative quality of the measured distribution and the influence of acquiring additional information on the knowledge retrieved from statistical analysis. The informative quality is associated with the entropy concept as understood in information theory (Shannon entropy), normalized by the entropy of the most informative experiment. A new empirical correlation is derived between the error accuracy of a discrete cumulative probability distribution and the normalized Shannon entropy. The results include case studies using: (i) spray impingement measurements to study the applicability of the real-time assessment of measurement uncertainty, and (ii) the simulation of discrete probability distributions of unknown shape or function to test the applicability of the new correlation
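
    For illustration, a minimal sketch of a normalized Shannon entropy computed for a measured discrete distribution. The histogram counts are placeholders, and normalizing by the uniform distribution over the same bins is an assumption standing in for the entropy of the most informative experiment.

      import numpy as np

      counts = np.array([5, 22, 60, 41, 18, 4])   # placeholder histogram (e.g. droplet sizes)
      p = counts / counts.sum()                   # discrete probability distribution

      p_nonzero = p[p > 0]
      H = -np.sum(p_nonzero * np.log(p_nonzero))  # Shannon entropy of the measurement
      H_max = np.log(len(p))                      # entropy of the uniform distribution
      print("normalized Shannon entropy:", H / H_max)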

  11. Effect of experimental periods on the partial and total digestion in cattle fed two levels of forage

    Directory of Open Access Journals (Sweden)

    Kátia Cylene Guimarães

    2001-06-01

    Full Text Available The objective of this research was to evaluate the effects of two experimental periods and two forage levels in the diet on the total and partial apparent digestibility of dry matter (DM), organic matter (OM), crude protein (CP), acid detergent fiber (ADF), neutral detergent fiber (NDF), gross energy (GE) and starch. Four Holstein steers, averaging two years of age and 340 kg of body weight, ruminally and duodenally cannulated, were used. The experimental design was a 4 x 4 Latin square, and the animals received four treatments: two forage levels (30 and 70%) and two experimental periods (14 and 21 days). There was a significant effect of forage level on the ruminal digestibility of ADF and NDF, on the intestinal and total digestibility of DM, OM, CP and starch, and on the total digestibility of GE. There was no effect of experimental period on the digestibility coefficients of the evaluated nutrients. It is concluded that a 14-day experimental period is feasible in digestion experiments when hay is used as the forage source.

  12. Propagation of dynamic measurement uncertainty

    International Nuclear Information System (INIS)

    Hessling, J P

    2011-01-01

    The time-dependent measurement uncertainty has been evaluated in a number of recent publications, starting from a known uncertain dynamic model. This could be defined as the 'downward' propagation of uncertainty from the model to the targeted measurement. The propagation of uncertainty 'upward' from the calibration experiment to a dynamic model traditionally belongs to system identification. The use of different representations (time, frequency, etc) is ubiquitous in dynamic measurement analyses. An expression of uncertainty in dynamic measurements is formulated for the first time in this paper independent of representation, joining upward as well as downward propagation. For applications in metrology, the high quality of the characterization may be prohibitive for any reasonably large and robust model to pass the whiteness test. This test is therefore relaxed by not directly requiring small systematic model errors in comparison to the randomness of the characterization. Instead, the systematic error of the dynamic model is propagated to the uncertainty of the measurand, analogously but differently to how stochastic contributions are propagated. The pass criterion of the model is thereby transferred from the identification to acceptance of the total accumulated uncertainty of the measurand. This increases the relevance of the test of the model as it relates to its final use rather than the quality of the calibration. The propagation of uncertainty hence includes the propagation of systematic model errors. For illustration, the 'upward' propagation of uncertainty is applied to determine if an appliance box is damaged in an earthquake experiment. In this case, relaxation of the whiteness test was required to reach a conclusive result

  13. Uncertainty Analysis of RBMK-Related Experimental Data

    International Nuclear Information System (INIS)

    Urbonas, Rolandas; Kaliatka, Algirdas; Liaukonis, Mindaugas

    2002-01-01

    An attempt was made to validate the state-of-the-art thermal-hydraulic code ATHLET (GRS, Germany) on the basis of the E-108 test facility. Originally this code was developed and validated for reactor types other than the RBMK. Since state-of-the-art thermal-hydraulic codes are widely used for the simulation of RBMK reactors, further code implementation and validation are required. The phenomena associated with channel-type flow instabilities and CHF were found to be an important step in the frame of the overall effort of state-of-the-art code validation and application for RBMK reactors. In the paper a one-channel analysis is presented; thus, the oscillatory behaviour of the system was not detected. The results show a dependence on the nodalization used in the heated channels, the initial and boundary conditions, and the selected code models. It is shown that the code is able to predict a sudden heat structure temperature excursion when the critical heat flux is approached. The uncertainty and sensitivity methodology developed by GRS was employed in the analysis. (authors)

  14. Towards a quantitative, measurement-based estimate of the uncertainty in photon mass attenuation coefficients at radiation therapy energies

    Science.gov (United States)

    Ali, E. S. M.; Spencer, B.; McEwen, M. R.; Rogers, D. W. O.

    2015-02-01

    In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy—i.e. 100 keV (orthovoltage) to 25 MeV—using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ˜0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative ‘envelope of uncertainty’ of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol 44 R1-22).

  15. Uncertainty Quantification of Composite Laminate Damage with the Generalized Information Theory

    Energy Technology Data Exchange (ETDEWEB)

    J. Lucero; F. Hemez; T. Ross; K. Kline; J. Hundhausen; T. Tippetts

    2006-05-01

    This work presents a survey of five theories to assess the uncertainty of projectile impact induced damage on multi-layered carbon-epoxy composite plates. Because the types of uncertainty dealt with in this application are multiple (variability, ambiguity, and conflict) and because the data sets collected are sparse, characterizing the amount of delamination damage with probability theory alone is possible but incomplete. This motivates the exploration of methods contained within a broad Generalized Information Theory (GIT) that rely on less restrictive assumptions than probability theory. Probability, fuzzy sets, possibility, and imprecise probability (probability boxes (p-boxes) and Dempster-Shafer) are used to assess the uncertainty in composite plate damage. Furthermore, this work highlights the usefulness of each theory. The purpose of the study is not to compare directly the different GIT methods but to show that they can be deployed on a practical application and to compare the assumptions upon which these theories are based. The data sets consist of experimental measurements and finite element predictions of the amount of delamination and fiber splitting damage as multilayered composite plates are impacted by a projectile at various velocities. The physical experiments consist of using a gas gun to impact suspended plates with a projectile accelerated to prescribed velocities, then, taking ultrasound images of the resulting delamination. The nonlinear, multiple length-scale numerical simulations couple local crack propagation implemented through cohesive zone modeling to global stress-displacement finite element analysis. The assessment of damage uncertainty is performed in three steps by, first, considering the test data only; then, considering the simulation data only; finally, performing an assessment of total uncertainty where test and simulation data sets are combined. This study leads to practical recommendations for reducing the uncertainty and

  16. The validation of evacuation simulation models through the analysis of behavioural uncertainty

    International Nuclear Information System (INIS)

    Lovreglio, Ruggiero; Ronchi, Enrico; Borri, Dino

    2014-01-01

    Both experimental and simulation data on fire evacuation are influenced by a component of uncertainty caused by the impact of the unexplained variance in human behaviour, namely behavioural uncertainty (BU). Evacuation model validation studies should include the study of this type of uncertainty during the comparison of experiments and simulation results. An evacuation model validation procedure is introduced in this paper to study the impact of BU. This methodology is presented through a case study for the comparison between repeated experimental data and simulation results produced by FDS+Evac, an evacuation model for the simulation of human behaviour in fire, which makes use of distribution laws. - Highlights: • Validation of evacuation models is investigated. • Quantitative evaluation of behavioural uncertainty is performed. • A validation procedure is presented through an evacuation case study

  17. Understanding and reducing statistical uncertainties in nebular abundance determinations

    Science.gov (United States)

    Wesson, R.; Stock, D. J.; Scicluna, P.

    2012-06-01

    Whenever observations are compared to theories, an estimate of the uncertainties associated with the observations is vital if the comparison is to be meaningful. However, many or even most determinations of temperatures, densities and abundances in photoionized nebulae do not quote the associated uncertainty. Those that do typically propagate the uncertainties using analytical techniques which rely on assumptions that generally do not hold. Motivated by this issue, we have developed the Nebular Empirical Analysis Tool (NEAT), a new code for calculating chemical abundances in photoionized nebulae. The code carries out a standard analysis of lists of emission lines using long-established techniques to estimate the amount of interstellar extinction, calculate representative temperatures and densities, compute ionic abundances from both collisionally excited lines and recombination lines, and finally to estimate total elemental abundances using an ionization correction scheme. NEAT uses a Monte Carlo technique to robustly propagate uncertainties from line flux measurements through to the derived abundances. We show that, for typical observational data, this approach is superior to analytic estimates of uncertainties. NEAT also accounts for the effect of upward biasing on measurements of lines with low signal-to-noise ratio, allowing us to accurately quantify the effect of this bias on abundance determinations. We find not only that the effect can result in significant overestimates of heavy element abundances derived from weak lines, but also that taking it into account reduces the uncertainty of these abundance determinations. Finally, we investigate the effect of possible uncertainties in R, the ratio of selective-to-total extinction, on abundance determinations. We find that the uncertainty due to this parameter is negligible compared to the statistical uncertainties due to typical line flux measurement uncertainties.
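
    As a hedged illustration of the Monte Carlo propagation idea described above (not NEAT's actual implementation), the sketch below perturbs two hypothetical line fluxes within their quoted errors and pushes them through a made-up diagnostic function; the line identifiers, errors and functional form are assumptions for illustration only.

    ```python
    # Minimal sketch of Monte Carlo uncertainty propagation from line fluxes to a
    # derived quantity.  The "abundance" function below is a hypothetical
    # stand-in, NOT the atomic physics used by NEAT.
    import numpy as np

    rng = np.random.default_rng(seed=1)

    # Measured line fluxes (relative to H-beta = 100) and their 1-sigma errors (assumed).
    lines = {"[O III] 5007": (350.0, 10.0), "[O III] 4363": (4.0, 0.8)}

    def toy_abundance(f5007, f4363):
        """Hypothetical diagnostic: ratio-driven 'abundance' for illustration only."""
        ratio = f5007 / np.clip(f4363, 1e-3, None)   # temperature-sensitive ratio
        return 8.0 + 0.5 * np.log10(ratio)           # arbitrary functional form

    n_mc = 20000
    f5007 = rng.normal(*lines["[O III] 5007"], size=n_mc)
    f4363 = rng.normal(*lines["[O III] 4363"], size=n_mc)
    samples = toy_abundance(f5007, f4363)

    med, lo, hi = np.percentile(samples, [50, 16, 84])
    print(f"abundance = {med:.3f} (+{hi - med:.3f} / -{med - lo:.3f})")
    ```

    Because the weak line in this toy example is sampled at low signal-to-noise ratio, the resulting distribution is skewed and asymmetric, which is the kind of behaviour analytic error propagation misses and which the Monte Carlo approach captures naturally.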

  18. The uncertainties in estimating measurement uncertainties

    International Nuclear Information System (INIS)

    Clark, J.P.; Shull, A.H.

    1994-01-01

    All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurement's uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper will discuss the concepts of measurement, measurement errors (accuracy or bias and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, estimates of some true value plus uncertainty, and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample whose matrix is significantly different from the calibration standards' matrix). Consequently, there are many sources of error involved in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement itself. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurements will be reviewed in this report and recommendations made for improving measurement uncertainties

  19. Estimating and managing uncertainties in order to detect terrestrial greenhouse gas removals

    International Nuclear Information System (INIS)

    Rypdal, Kristin; Baritz, Rainer

    2002-01-01

    Inventories of emissions and removals of greenhouse gases will be used under the United Nations Framework Convention on Climate Change and the Kyoto Protocol to demonstrate compliance with obligations. During the negotiation process of the Kyoto Protocol it has been a concern that uptake of carbon in forest sinks can be difficult to verify. The reasons for large uncertainties are high temporal and spatial variability and a lack of representative estimation parameters. Additional uncertainties will be a consequence of definitions made in the Kyoto Protocol reporting. In the Nordic countries the national forest inventories will be very useful to estimate changes in carbon stocks. The main uncertainty lies in the conversion from changes in tradable timber to changes in total carbon biomass. The uncertainties in the emissions of non-CO2 carbon from forest soils are particularly high. On the other hand, the removals reported under the Kyoto Protocol will only be a fraction of the total uptake and are not expected to constitute a high share of the total inventory. It is also expected that the Nordic countries will be able to implement a high tier methodology. As a consequence, total uncertainties may not be extremely high. (Author)

  20. Uncertainty analysis in Monte Carlo criticality computations

    International Nuclear Information System (INIS)

    Qi Ao

    2011-01-01

    Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► The sampling method has the least restrictions on the perturbation but requires the most computing resources. ► The analytical method is limited to small perturbations on material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes for criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, recently increasing interest in reducing the administrative margin of subcriticality makes the uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in the k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
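
    To make the sampling-based approach concrete, the following hedged sketch perturbs a few notional nuclear-data parameters and re-evaluates k_eff through a first-order surrogate; in a real analysis each sample would be a full Monte Carlo criticality calculation, and the sensitivities and uncertainties shown are invented for illustration.

    ```python
    # Sketch of the sampling-based k_eff uncertainty approach: perturb uncertain
    # nuclear-data parameters, re-evaluate k_eff for each sample, and take the
    # spread of results as the propagated uncertainty.  The linear surrogate
    # below is a hypothetical stand-in for a full Monte Carlo transport run.
    import numpy as np

    rng = np.random.default_rng(0)

    k_nominal = 1.0000
    sensitivities = np.array([0.30, -0.15, 0.05])   # dk/k per unit relative change (assumed)
    rel_sigma = np.array([0.02, 0.03, 0.05])        # relative 1-sigma data uncertainties (assumed)

    n_samples = 500
    perturbations = rng.normal(0.0, rel_sigma, size=(n_samples, 3))

    def surrogate_keff(delta):
        """Hypothetical first-order response of k_eff to relative data perturbations."""
        return k_nominal * (1.0 + sensitivities @ delta)

    keff = np.array([surrogate_keff(d) for d in perturbations])
    print(f"k_eff = {keff.mean():.5f} +/- {keff.std(ddof=1):.5f} (sampling-based)")
    ```

    In practice the statistical uncertainty of each individual Monte Carlo k_eff estimate must also be accounted for, since it adds to the spread produced by the data perturbations themselves.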

  1. Sensitivity and uncertainty analysis for fission product decay heat calculations

    International Nuclear Information System (INIS)

    Rebah, J.; Lee, Y.K.; Nimal, J.C.; Nimal, B.; Luneville, L.; Duchemin, B.

    1994-01-01

    The calculated uncertainty in decay heat due to the uncertainty in the basic nuclear data given in the CEA86 Library is presented. Uncertainties in the summation calculation arise from several sources: fission product yields, half-lives and average decay energies. The correlation between basic data is taken into account. The uncertainty analyses were performed for thermal-neutron-induced fission of U235 and Pu239, for the cases of burst fission and of finite irradiation time. The decay heat calculated in this study is compared with experimental results and with a new calculation using the JEF2 Library. (from authors) 6 figs., 19 refs
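
    The summation approach and its uncertainty rest on a standard first-order (sandwich) propagation of nuclear-data covariances; a generic form, written here for illustration rather than as the CEA86-specific formulation, is

    ```latex
    H(t) = \sum_i \lambda_i \,\bar{E}_i \, N_i(t),
    \qquad
    u^{2}\!\bigl[H(t)\bigr] = \sum_i \sum_j
        \frac{\partial H}{\partial p_i}\,
        \frac{\partial H}{\partial p_j}\,
        \operatorname{cov}(p_i,p_j),
    ```

    where H(t) is the decay heat, lambda_i, E_i and N_i(t) are the decay constants, mean decay energies and nuclide inventories, and the p_i are the basic data (yields, half-lives, energies); the off-diagonal covariance terms carry the correlations between basic data mentioned in the abstract.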

  2. Forensic Uncertainty Quantification of Explosive Dispersal of Particles

    Science.gov (United States)

    Hughes, Kyle; Park, Chanyoung; Haftka, Raphael; Kim, Nam-Ho

    2017-06-01

    In addition to the numerical challenges of simulating the explosive dispersal of particles, validation of the simulations is often plagued by poor knowledge of the experimental conditions. The level of experimental detail required for validation is beyond what is usually included in the literature. This presentation proposes the use of forensic uncertainty quantification (UQ) to investigate validation-quality experiments and discover possible sources of uncertainty that may have been missed in the initial design of experiments or under-reported. The authors' experience has been that valuable insights may be gained by examining validation experiments in analogy with a crime scene investigation: one examines all the data and documentation provided by the validation experimentalists, corroborates evidence, and quantifies large sources of uncertainty a posteriori with empirical measurements. In addition, it is proposed that forensic UQ may benefit from an independent investigator, to help remove possible implicit biases and increase the likelihood of discovering unrecognized uncertainty. Forensic UQ concepts will be discussed and then applied to a set of validation experiments performed at Eglin Air Force Base. This work was supported in part by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program.

  3. LOFT experimental measurements uncertainty analyses. Volume XX. Fluid-velocity measurement using pulsed-neutron activation

    International Nuclear Information System (INIS)

    Lassahn, G.D.; Taylor, D.J.N.

    1982-08-01

    Analyses of uncertainty components inherent in pulsed-neutron-activation (PNA) measurements in general, and in the Loss-of-Fluid-Test (LOFT) system in particular, are given. Due to the LOFT system's unique conditions, previously used techniques were modified to make the velocity measurement. These methods render a useful, cost-effective measurement with an estimated uncertainty of 11% of reading

  4. Estimates of bias and uncertainty in recorded external dose

    International Nuclear Information System (INIS)

    Fix, J.J.; Gilbert, E.S.; Baumgartner, W.V.

    1994-10-01

    A study is underway to develop an approach to quantify bias and uncertainty in recorded dose estimates for workers at the Hanford Site based on personnel dosimeter results. This paper focuses on selected experimental studies conducted to better define response characteristics of Hanford dosimeters. The study is more extensive than the experimental studies presented in this paper and includes detailed consideration and evaluation of other sources of bias and uncertainty. Hanford worker dose estimates are used in epidemiologic studies of nuclear workers. A major objective of these studies is to provide a direct assessment of the carcinogenic risk of exposure to ionizing radiation at low doses and dose rates. Considerations of bias and uncertainty in the recorded dose estimates are important in the conduct of this work. The method developed for use with Hanford workers can be considered an elaboration of the approach used to quantify bias and uncertainty in estimated doses for personnel exposed to radiation as a result of atmospheric testing of nuclear weapons between 1945 and 1962. This approach was first developed by a National Research Council (NRC) committee examining uncertainty in recorded film badge doses during atmospheric tests (NRC 1989). It involved quantifying both bias and uncertainty from three sources (i.e., laboratory, radiological, and environmental) and then combining them to obtain an overall assessment. Sources of uncertainty have been evaluated for each of three specific Hanford dosimetry systems (i.e., the Hanford two-element film dosimeter, 1944-1956; the Hanford multi-element film dosimeter, 1957-1971; and the Hanford multi-element TLD, 1972-1993) used to estimate personnel dose throughout the history of Hanford operations. Laboratory, radiological, and environmental sources of bias and uncertainty have been estimated based on historical documentation and, for angular response, on selected laboratory measurements

  5. Expanded and combined uncertainty in measurements by GM counters

    International Nuclear Information System (INIS)

    Stankovic, K.; Arandjic, D.; Lazarevic, Dj.; Osmokrovic, P.

    2007-01-01

    This paper deals with possible ways of obtaining the expanded and combined uncertainty in measurements for four types of GM counters sharing the same counter tube, in cases where the contributors to these uncertainties are cosmic background radiation and induced overvoltage phenomena. Nowadays, as a consequence of electromagnetic radiation, the latter phenomenon is especially marked in urban environments. Based on the experimental results obtained, it has been established that the uncertainties of the influenced random variables 'number of pulses from background radiation' and 'number of pulses induced by overvoltage' depend on the technological solution of the counter's reading system and contribute in different ways to the expanded and combined uncertainty in measurements of the applied types of GM counters. (author)

  6. Uncertainty and equipoise: at interplay between epistemology, decision making and ethics.

    Science.gov (United States)

    Djulbegovic, Benjamin

    2011-10-01

    In recent years, various authors have proposed that the concept of equipoise be abandoned because it conflates the practice of clinical care with clinical research. At the same time, the opponents of equipoise acknowledge the necessity of clinical research if there are unresolved uncertainties about the effects of proposed healthcare interventions. As equipoise represents just one measure of uncertainty, proposals to abandon equipoise while maintaining a requirement for addressing uncertainties are contradictory and ultimately not valid. As acknowledgment and articulation of uncertainties represent key scientific and moral requirements for human experimentation, the concept of equipoise remains the most useful framework to link the theory of human experimentation with the theory of rational choice. In this article, I show how uncertainty (equipoise) sits at the intersection between epistemology, decision making and the ethics of clinical research. In particular, I show how our formulation of responses to uncertainties about hoped-for benefits and unknown harms of testing is a function of the way humans cognitively process information. This approach is based on the view that considerations of ethics and rationality cannot be separated. I analyze the response to uncertainties as it relates to dual-processing theory, which postulates that a rational approach to (clinical research) decision making depends both on the analytical, deliberative processes embodied in the scientific method (system II) and on good human intuition (system I). Ultimately, our choices can only become wiser if we understand the close and intertwined relationship between irreducible uncertainty, inevitable errors and unavoidable injustice.

  7. Uncertainty Analysis of the Temperature–Resistance Relationship of Temperature Sensing Fabric

    Directory of Open Access Journals (Sweden)

    Muhammad Dawood Husain

    2016-11-01

    Full Text Available This paper reports the uncertainty analysis of the temperature–resistance (TR) data of the newly developed temperature sensing fabric (TSF), which is a double-layer knitted structure fabricated on an electronic flat-bed knitting machine, made of polyester as a basal yarn and embedded with fine metallic wire as the sensing element. The measurement principle of the TSF is identical to that of a resistance temperature detector (RTD); that is, a change in resistance due to a change in temperature. The regression uncertainty (uncertainty within repeats) and repeatability uncertainty (uncertainty among repeats) were estimated by analysing more than 300 TR experimental repeats of 50 TSF samples. The experiments were performed under dynamic heating and cooling environments on a purpose-built test rig within the temperature range of 20–50 °C. The continuous experimental data were recorded through a LabVIEW-based graphical user interface. The results showed that the temperature and resistance values were not only repeatable but reproducible, with only minor variations. The regression uncertainty was found to be less than ±0.3 °C; the TSF samples made of Ni and W wires showed a regression uncertainty of <±0.13 °C in comparison to Cu-based TSF samples (>±0.18 °C). The cooling TR data showed considerably reduced uncertainty values (±0.07 °C) in comparison with the heating TR data (±0.24 °C). The repeatability uncertainty was found to be less than ±0.5 °C. By increasing the number of samples and repeats, the uncertainties may be reduced further. The TSF could be used for continuous measurement of the temperature profile on the surface of the human body.
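
    A hedged numerical sketch of how the two uncertainty components can be separated from repeated T–R sweeps is given below; the linear R(T) model, the synthetic data and the noise levels are assumptions for illustration and are not the TSF calibration itself.

    ```python
    # Sketch: estimate "regression" (within-repeat) and "repeatability"
    # (between-repeat) uncertainty from repeated temperature-resistance sweeps,
    # expressed in degrees C.  The linear R(T) model and the synthetic data are
    # assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(42)

    T = np.linspace(20.0, 50.0, 31)                  # reference temperatures, deg C
    true_R0, true_alpha = 100.0, 0.4                 # ohm, ohm per deg C (assumed)

    n_repeats = 6
    slopes, intercepts, resid_T = [], [], []
    for _ in range(n_repeats):
        R = true_R0 + true_alpha * T + rng.normal(0.0, 0.05, T.size)  # one sweep
        a, b = np.polyfit(T, R, 1)                   # fit R = a*T + b
        slopes.append(a); intercepts.append(b)
        resid_T.append(np.std(R - (a * T + b), ddof=2) / a)  # residuals converted to deg C

    regression_u = float(np.mean(resid_T))           # within-repeat scatter, deg C
    # Between-repeat spread: temperature predicted for one fixed resistance value.
    R_ref = true_R0 + true_alpha * 35.0
    T_pred = [(R_ref - b) / a for a, b in zip(slopes, intercepts)]
    repeatability_u = float(np.std(T_pred, ddof=1))  # between-repeat scatter, deg C

    print(f"regression uncertainty    ~ +/- {regression_u:.3f} C")
    print(f"repeatability uncertainty ~ +/- {repeatability_u:.3f} C")
    ```

    Both quantities are expressed in degrees Celsius by dividing the resistance residuals by the fitted sensitivity, mirroring the way the abstract reports its ±0.3 °C and ±0.5 °C figures.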

  8. Evaluation of uncertainty of adaptive radiation therapy

    International Nuclear Information System (INIS)

    Garcia Molla, R.; Gomez Martin, C.; Vidueira, L.; Juan-Senabre, X.; Garcia Gomez, R.

    2013-01-01

    This work is part of the tests to be performed prior to acceptance into clinical practice. The uncertainties of adaptive radiation therapy, into which the study is divided, can be separated into two large parts: dosimetry in the CBCT and the RDI. At each stage the uncertainties are quantified and, from the total, an action level may be obtained above which it would be reasonable to adapt the plan. (Author)

  9. Extension of EU Emissions Trading Scheme to Other Sectors and Gases: Consequences for Uncertainty of Total Tradable Amount

    International Nuclear Information System (INIS)

    Monni, S.; Syri, S.; Pipatti, R.; Savolainen, I.

    2007-01-01

    Emissions trading in the European Union (EU), covering the least uncertain emission sources of greenhouse gas emission inventories (CO2 from combustion and selected industrial processes in large installations), began in 2005. During the first commitment period of the Kyoto Protocol (2008-2012), the emissions trading between Parties to the Protocol will cover all greenhouse gases (CO2, CH4, N2O, HFCs, PFCs, and SF6) and sectors (energy, industry, agriculture, waste, and selected land-use activities) included in the Protocol. In this paper, we estimate the uncertainties in different emissions trading schemes based on uncertainties in the corresponding inventories. According to the results, uncertainty in emissions from the EU15 and the EU25 included in the first phase of the EU emissions trading scheme (2005-2007) is ±3% (at 95% confidence interval relative to the mean value). If the trading were extended to CH4 and N2O, in addition to CO2, but no new emissions sectors were included, the tradable amount of emissions would increase by only 2% and the uncertainty in the emissions would range from -4 to +8%. Finally, uncertainty in emissions included in emissions trading under the Kyoto Protocol was estimated to vary from -6 to +21%. Inclusion of removals from forest-related activities under the Kyoto Protocol did not notably affect uncertainty, as the volume of these removals is estimated to be small
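
    The way category uncertainties combine into a scheme total can be illustrated with an IPCC Tier 1-style error propagation; the categories, emission levels and percentage uncertainties in the sketch below are invented and do not reproduce the paper's inventory data.

    ```python
    # Sketch of combining source-category uncertainties into a trading-scheme
    # total, in the spirit of IPCC Tier 1 error propagation.  All numbers are
    # invented for illustration only.
    import math

    # (emissions in Mt CO2-eq, relative uncertainty as a fraction)
    categories = {
        "CO2 combustion, large installations": (2000.0, 0.03),
        "CH4 waste and agriculture":           (400.0, 0.30),
        "N2O agriculture and industry":        (300.0, 0.50),
    }

    total = sum(e for e, _ in categories.values())
    # Root-sum-square of absolute uncertainties, assuming independent categories.
    abs_u = math.sqrt(sum((e * u) ** 2 for e, u in categories.values()))
    print(f"total = {total:.0f} Mt, combined uncertainty ~ +/- {100 * abs_u / total:.1f} %")
    ```

    Asymmetric totals such as the -6/+21% quoted above arise when skewed (for example lognormal) category distributions are propagated, typically by Monte Carlo, rather than by this symmetric root-sum-square rule.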

  10. Influence of conformity on the wear of total knee replacement: An experimental study.

    Science.gov (United States)

    Brockett, Claire L; Carbone, Silvia; Fisher, John; Jennings, Louise M

    2018-02-01

    Wear of total knee replacement continues to be a significant factor influencing the clinical longevity of implants. Historically, failure due to delamination and fatigue directed design towards more conforming inserts to reduce contact stress. As new generations of more oxidatively stable polyethylene have been developed, more flexibility in bearing design has been introduced. The aim of this study was to investigate the effect of insert conformity on the wear performance of a fixed bearing total knee replacement through experimental simulation. Two geometries of insert were studied under standard gait conditions. There was a significant reduction in wear with reducing implant conformity. This study has demonstrated that bearing conformity has a significant impact on the wear performance of a fixed bearing total knee replacement, providing opportunities to improve clinical performance through enhanced material and design selection.

  11. Uncertainty Estimates: A New Editorial Standard

    International Nuclear Information System (INIS)

    Drake, Gordon W.F.

    2014-01-01

    Full text: The objective of achieving higher standards for uncertainty estimates in the publication of theoretical data for atoms and molecules requires a concerted effort by both the authors of papers and the editors who send them out for peer review. In April 2011, the editors of Physical Review A published an Editorial announcing a new standard that uncertainty estimates would be required whenever practicable, and in particular in the following circumstances: 1. If the authors claim high accuracy, or improvements on the accuracy of previous work. 2. If the primary motivation for the paper is to make comparisons with present or future high precision experimental measurements. 3. If the primary motivation is to provide interpolations or extrapolations of known experimental measurements. The new policy means that papers that do not meet these standards are not sent out for peer review until they have been suitably revised, and the authors are so notified immediately upon receipt. The policy has now been in effect for three years. (author)

  12. Uncertainty in the inelastic resonant scattering assisted by phonons

    International Nuclear Information System (INIS)

    Garcia, N.; Garcia-Sanz, J.; Solana, J.

    1977-01-01

    We have analyzed the inelastic minima observed in new results for He atoms scattered from LiF(001) surfaces. This is done by considering bound state resonance processes assisted by phonons. The analysis presents large uncertainties. Within the range of uncertainty, we find two 'possible' bands associated with the vibrations of F- and Li+, respectively. Many more experimental data are necessary to confirm the existence of these processes

  13. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    Energy Technology Data Exchange (ETDEWEB)

    Alvin, Kenneth F.; Oberkampf, William L.; Rutherford, Brian M.; Diegert, Kathleen V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  14. Gaussian process regression for sensor networks under localization uncertainty

    Science.gov (United States)

    Jadaliha, M.; Xu, Yunfei; Choi, Jongeun; Johnson, N.S.; Li, Weiming

    2013-01-01

    In this paper, we formulate Gaussian process regression with observations under the localization uncertainty due to the resource-constrained sensor networks. In our formulation, effects of observations, measurement noise, localization uncertainty, and prior distributions are all correctly incorporated in the posterior predictive statistics. The analytically intractable posterior predictive statistics are proposed to be approximated by two techniques, viz., Monte Carlo sampling and Laplace's method. Such approximation techniques have been carefully tailored to our problems and their approximation error and complexity are analyzed. Simulation study demonstrates that the proposed approaches perform much better than approaches without considering the localization uncertainty properly. Finally, we have applied the proposed approaches on the experimentally collected real data from a dye concentration field over a section of a river and a temperature field of an outdoor swimming pool to provide proof of concept tests and evaluate the proposed schemes in real situations. In both simulation and experimental results, the proposed methods outperform the quick-and-dirty solutions often used in practice.
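
    A minimal sketch of the Monte Carlo variant mentioned above is given below: the uncertain sensor locations are sampled, an ordinary GP posterior is computed for each sample, and the results are pooled with the law of total variance. The kernel, noise levels and one-dimensional toy field are assumptions for illustration, not the authors' formulation.

    ```python
    # Minimal sketch of GP regression with uncertain observation locations,
    # approximated by Monte Carlo over location samples.  Kernel, noise and the
    # 1-D toy field are assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(3)

    def rbf(a, b, ell=0.5, s2=1.0):
        return s2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

    def gp_posterior(x_obs, y_obs, x_star, noise=0.05):
        K = rbf(x_obs, x_obs) + noise ** 2 * np.eye(x_obs.size)
        Ks = rbf(x_obs, x_star)
        mean = Ks.T @ np.linalg.solve(K, y_obs)
        var = rbf(x_star, x_star).diagonal() - np.einsum("ij,ij->j", Ks, np.linalg.solve(K, Ks))
        return mean, var

    # Nominal (reported) sensor positions; true positions uncertain within sigma_loc.
    x_nominal = np.linspace(0.0, 4.0, 9)
    y_obs = np.sin(x_nominal) + rng.normal(0.0, 0.05, x_nominal.size)
    sigma_loc = 0.15
    x_star = np.linspace(0.0, 4.0, 101)

    means, vars_ = [], []
    for _ in range(200):
        x_sample = x_nominal + rng.normal(0.0, sigma_loc, x_nominal.size)
        m, v = gp_posterior(x_sample, y_obs, x_star)
        means.append(m); vars_.append(v)

    means, vars_ = np.array(means), np.array(vars_)
    post_mean = means.mean(axis=0)
    post_var = vars_.mean(axis=0) + means.var(axis=0)   # law of total variance
    print(post_mean[:5], np.sqrt(post_var[:5]))
    ```

    Ignoring the localization uncertainty corresponds to dropping the Monte Carlo loop and using the nominal positions directly, which is the quick-and-dirty baseline the abstract reports as performing worse.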

  15. Nuclear data sensitivity/uncertainty analysis for XT-ADS

    International Nuclear Information System (INIS)

    Sugawara, Takanori; Sarotto, Massimo; Stankovskiy, Alexey; Van den Eynde, Gert

    2011-01-01

    Highlights: → The sensitivity and uncertainty analyses were performed to assess the reliability of the XT-ADS neutronic design. → The uncertainties deduced from the covariance data for the XT-ADS criticality were 0.94%, 1.9% and 1.1% with the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. → When the target accuracy of 0.3%Δk for the criticality was considered, the uncertainties did not satisfy it. → To achieve this accuracy, the uncertainties should be improved by experiments under adequate conditions. - Abstract: The XT-ADS, an accelerator-driven system for an experimental demonstration, has been investigated in the framework of the IP EUROTRANS FP6 project. In this study, sensitivity and uncertainty analyses were performed to assess the reliability of the XT-ADS neutronic design. For the sensitivity analysis, it was found that the sensitivity coefficients differed significantly when the geometry models and calculation codes were changed. For the uncertainty analysis, it was confirmed that the uncertainties deduced from the covariance data varied significantly depending on the covariance library used. The uncertainties deduced from the covariance data for the XT-ADS criticality were 0.94%, 1.9% and 1.1% with the SCALE 44-group, TENDL-2009 and JENDL-3.3 data, respectively. When the target accuracy of 0.3%Δk for the criticality was considered, the uncertainties did not satisfy it. To achieve this accuracy, the uncertainties should be improved by experiments under adequate conditions.

  16. Uncertainty analysis of energy consumption in dwellings

    Energy Technology Data Exchange (ETDEWEB)

    Pettersen, Trine Dyrstad

    1997-12-31

    This thesis presents a comprehensive study of an energy estimation model that can be used to examine the uncertainty of predicted energy consumption in a dwelling. The variation and uncertainty of input parameters due to the outdoor climate, the building construction and the inhabitants are studied as a basis for further energy evaluations. The variations of energy consumption that occur in nominally similar dwellings are also investigated in order to verify the simulated energy consumption. The main topics are (1) a study of expected variations and uncertainties both in the input parameters used in energy consumption calculations and in the energy consumption of the dwelling, (2) the development and evaluation of a simplified energy calculation model that considers uncertainties due to the input parameters, (3) an evaluation of the influence of the uncertain parameters on the total variation so that the most important parameters can be identified, and (4) the recommendation of a simplified procedure for treating uncertainties or possible deviations from average conditions. 90 refs., 182 figs., 73 tabs.

  17. Uncertainty propagation through dynamic models of assemblies of mechanical structures

    International Nuclear Information System (INIS)

    Daouk, Sami

    2016-01-01

    When studying the behaviour of mechanical systems, mathematical models and structural parameters are usually considered deterministic. Experience shows, however, that these elements are uncertain in most cases, due to natural variability or lack of knowledge. Therefore, quantifying the quality and reliability of the numerical model of an industrial assembly remains a major question in low-frequency dynamics. The purpose of this thesis is to improve the vibratory design of bolted assemblies by setting up a dynamic connector model that takes account of different types and sources of uncertainty on stiffness parameters, in a way that is simple, efficient and exploitable in an industrial context. This work has been carried out in the framework of the SICODYN project, led by EDF R and D, which aims to characterise and quantify, numerically and experimentally, the uncertainties in the dynamic behaviour of bolted industrial assemblies. Comparative studies of several numerical methods of uncertainty propagation demonstrate the advantage of using the Lack-Of-Knowledge theory. An experimental characterisation of uncertainties in bolted structures is performed on a dynamic test rig and on an industrial assembly. The propagation of many small and large uncertainties through different dynamic models of mechanical assemblies leads to an assessment of the efficiency of the Lack-Of-Knowledge theory and of its applicability in an industrial environment. (author)

  18. Conclusions on measurement uncertainty in microbiology.

    Science.gov (United States)

    Forster, Lynne I

    2009-01-01

    Since its first issue in 1999, testing laboratories wishing to comply with all the requirements of ISO/IEC 17025 have been collecting data for estimating uncertainty of measurement for quantitative determinations. In the microbiological field of testing, some debate has arisen as to whether uncertainty needs to be estimated for each method performed in the laboratory for each type of sample matrix tested. Queries also arise concerning the estimation of uncertainty when plate/membrane filter colony counts are below recommended method counting range limits. A selection of water samples (with low to high contamination) was tested in replicate with the associated uncertainty of measurement being estimated from the analytical results obtained. The analyses performed on the water samples included total coliforms, fecal coliforms, fecal streptococci by membrane filtration, and heterotrophic plate counts by the pour plate technique. For those samples where plate/membrane filter colony counts were > or =20, uncertainty estimates at a 95% confidence level were very similar for the methods, being estimated as 0.13, 0.14, 0.14, and 0.12, respectively. For those samples where plate/membrane filter colony counts were <20, estimated uncertainty values for each sample showed close agreement with published confidence limits established using a Poisson distribution approach.
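
    For counts below the recommended counting range, the dominant contribution is simple Poisson counting statistics, for which the relative standard uncertainty of a count N scales as 1/sqrt(N); the worked values below are illustrative.

    ```latex
    \frac{u(N)}{N} \approx \frac{1}{\sqrt{N}}, \qquad
    N = 100 \Rightarrow 10\%, \qquad
    N = 20 \Rightarrow 22\%, \qquad
    N = 5 \Rightarrow 45\%.
    ```

    This is why estimates for plate or membrane-filter counts below about 20 are better handled with Poisson confidence limits than with the pooled estimates quoted above for counts of 20 or more.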

  19. Quantification of uncertainties of modeling and simulation

    International Nuclear Information System (INIS)

    Ma Zhibo; Yin Jianwei

    2012-01-01

    The principles of Modeling and Simulation (M and S) are interpreted through a functional relation, from which the total uncertainties of M and S are identified and sorted into three parts considered to vary along with the conceptual models' parameters. Following the idea of verification and validation, the space of the parameters is partitioned into verified and applied domains; uncertainties in the verified domain are quantified by comparison between numerical and standard results, and those in the applied domain are quantified by a newly developed extrapolation method. Examples are presented to demonstrate and qualify the ideas, aimed at building a framework to quantify the uncertainties of M and S. (authors)

  20. Estimating and managing uncertainties in order to detect terrestrial greenhouse gas removals

    Energy Technology Data Exchange (ETDEWEB)

    Rypdal, Kristin; Baritz, Rainer

    2002-07-01

    Inventories of emissions and removals of greenhouse gases will be used under the United Nations Framework Convention on Climate Change and the Kyoto Protocol to demonstrate compliance with obligations. During the negotiation process of the Kyoto Protocol it has been a concern that uptake of carbon in forest sinks can be difficult to verify. The reasons for large uncertainties are high temporal and spatial variability and a lack of representative estimation parameters. Additional uncertainties will be a consequence of definitions made in the Kyoto Protocol reporting. In the Nordic countries the national forest inventories will be very useful to estimate changes in carbon stocks. The main uncertainty lies in the conversion from changes in tradable timber to changes in total carbon biomass. The uncertainties in the emissions of non-CO2 carbon from forest soils are particularly high. On the other hand, the removals reported under the Kyoto Protocol will only be a fraction of the total uptake and are not expected to constitute a high share of the total inventory. It is also expected that the Nordic countries will be able to implement a high tier methodology. As a consequence, total uncertainties may not be extremely high. (Author)

  1. Strain gauge measurement uncertainties on hydraulic turbine runner blade

    International Nuclear Information System (INIS)

    Arpin-Pont, J; Gagnon, M; Tahan, S A; Coutu, A; Thibault, D

    2012-01-01

    Strains experimentally measured with strain gauges can differ from those evaluated using the Finite Element (FE) method. This difference is due mainly to the assumptions and uncertainties inherent to each method. To circumvent this difficulty, we developed a numerical method based on Monte Carlo simulations to evaluate measurement uncertainties produced by the behaviour of a unidirectional welded gauge, its position uncertainty and its integration effect. This numerical method uses the displacement fields of the studied part evaluated by an FE analysis. The paper presents a study case using in situ data measured on a hydraulic turbine runner. The FE analysis of the turbine runner blade was computed, and our numerical method used to evaluate uncertainties on strains measured at five locations with welded strain gauges. Then, measured strains and their uncertainty ranges are compared to the estimated strains. The uncertainty ranges obtained extended from 74 με to 165 με. Furthermore, the biases observed between the median of the uncertainty ranges and the FE strains varied from −36 to 36 με. Note that strain gauge measurement uncertainties depend mainly on displacement fields and gauge geometry.
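
    The sketch below illustrates, under stated assumptions, the Monte Carlo idea of the paper: the gauge centre is perturbed within its placement uncertainty and the strain is averaged over the gauge length to capture the integration effect. The analytic strain field stands in for the FE displacement field, and all numbers are invented.

    ```python
    # Sketch: perturb the gauge position and average the strain over the gauge
    # length to capture position uncertainty and the integration (averaging)
    # effect.  The analytic strain field below is a hypothetical stand-in for the
    # FE displacement/strain field of the runner blade.
    import numpy as np

    rng = np.random.default_rng(7)

    def strain_field(x):
        """Hypothetical 1-D strain distribution along the blade, in microstrain."""
        return 800.0 * np.exp(-((x - 50.0) / 30.0) ** 2)     # x in mm

    gauge_length = 6.0        # mm (assumed)
    x_nominal = 48.0          # nominal gauge centre, mm (assumed)
    sigma_pos = 2.0           # 1-sigma placement uncertainty, mm (assumed)

    point_strain = strain_field(np.array([x_nominal]))[0]     # FE value at the point

    readings = []
    for _ in range(5000):
        centre = x_nominal + rng.normal(0.0, sigma_pos)
        grid = np.linspace(centre - gauge_length / 2, centre + gauge_length / 2, 21)
        readings.append(strain_field(grid).mean())            # gauge averages over its length

    readings = np.array(readings)
    lo, hi = np.percentile(readings, [2.5, 97.5])
    print(f"point strain (FE)       : {point_strain:6.1f} ue")
    print(f"simulated gauge reading : {readings.mean():6.1f} ue, 95% range [{lo:.1f}, {hi:.1f}]")
    ```

    The offset between the simulated gauge readings and the point FE strain in such a simulation is analogous to the bias between the median of the uncertainty range and the FE strain reported in the abstract.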

  2. Supporting Qualified Database for Uncertainty Evaluation

    International Nuclear Information System (INIS)

    Petruzzi, A.; Fiori, F.; Kovtonyuk, A.; Lisovyy, O.; D'Auria, F.

    2013-01-01

    Uncertainty evaluation constitutes a key feature of the BEPU (Best Estimate Plus Uncertainty) process. The uncertainty can be the result of a Monte Carlo type analysis involving input uncertainty parameters or the outcome of a process involving the use of experimental data and connected code calculations. Those uncertainty methods are discussed in several papers and guidelines (IAEA-SRS-52, OECD/NEA BEMUSE reports). The present paper aims at discussing the role and the depth of the analysis required for merging, on one side, suitable experimental data and, on the other side, qualified code calculation results. This aspect is mostly connected with the second approach for uncertainty mentioned above, but it can also be used in the framework of the first approach. Namely, the paper discusses the features and structure of the database that includes the following kinds of documents: 1. The 'RDS-facility' (Reference Data Set for the selected facility): this includes the description of the facility, the geometrical characterization of any component of the facility, the instrumentation, the data acquisition system, the evaluation of pressure losses, the physical properties of the material and the characterization of pumps, valves and heat losses; 2. The 'RDS-test' (Reference Data Set for the selected test of the facility): this includes the description of the main phenomena investigated during the test, the configuration of the facility for the selected test (possible new evaluation of pressure and heat losses if needed) and the specific boundary and initial conditions; 3. The 'QR' (Qualification Report) of the code calculation results: this includes the description of the nodalization developed following a set of homogeneous techniques, the achievement of the steady state conditions and the qualitative and quantitative analysis of the transient with the characterization of the Relevant Thermal-Hydraulics Aspects (RTA); 4. The EH (Engineering Handbook) of the input nodalization

  3. Supporting qualified database for uncertainty evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Petruzzi, A.; Fiori, F.; Kovtonyuk, A.; D'Auria, F. [Nuclear Research Group of San Piero A Grado, Univ. of Pisa, Via Livornese 1291, 56122 Pisa (Italy)

    2012-07-01

    Uncertainty evaluation constitutes a key feature of the BEPU (Best Estimate Plus Uncertainty) process. The uncertainty can be the result of a Monte Carlo type analysis involving input uncertainty parameters or the outcome of a process involving the use of experimental data and connected code calculations. Those uncertainty methods are discussed in several papers and guidelines (IAEA-SRS-52, OECD/NEA BEMUSE reports). The present paper aims at discussing the role and the depth of the analysis required for merging, on one side, suitable experimental data and, on the other side, qualified code calculation results. This aspect is mostly connected with the second approach for uncertainty mentioned above, but it can also be used in the framework of the first approach. Namely, the paper discusses the features and structure of the database that includes the following kinds of documents: 1. The 'RDS-facility' (Reference Data Set for the selected facility): this includes the description of the facility, the geometrical characterization of any component of the facility, the instrumentation, the data acquisition system, the evaluation of pressure losses, the physical properties of the material and the characterization of pumps, valves and heat losses; 2. The 'RDS-test' (Reference Data Set for the selected test of the facility): this includes the description of the main phenomena investigated during the test, the configuration of the facility for the selected test (possible new evaluation of pressure and heat losses if needed) and the specific boundary and initial conditions; 3. The 'QR' (Qualification Report) of the code calculation results: this includes the description of the nodalization developed following a set of homogeneous techniques, the achievement of the steady state conditions and the qualitative and quantitative analysis of the transient with the characterization of the Relevant Thermal-Hydraulics Aspects (RTA); 4. The EH (Engineering Handbook) of the input nodalization

  4. A Bayesian approach for quantification of model uncertainty

    International Nuclear Information System (INIS)

    Park, Inseok; Amarchinta, Hemanth K.; Grandhi, Ramana V.

    2010-01-01

    In most engineering problems, more than one model can be created to represent an engineering system's behavior. Uncertainty is inevitably involved in selecting the best model from among the models that are possible. Uncertainty in model selection cannot be ignored, especially when the differences between the predictions of competing models are significant. In this research, a methodology is proposed to quantify model uncertainty using measured differences between experimental data and model outcomes under a Bayesian statistical framework. The adjustment factor approach is used to propagate model uncertainty into prediction of a system response. A nonlinear vibration system is used to demonstrate the processes for implementing the adjustment factor approach. Finally, the methodology is applied on the engineering benefits of a laser peening process, and a confidence band for residual stresses is established to indicate the reliability of model prediction.

  5. Experimental device for obtaining calibration factor for the total count technique

    International Nuclear Information System (INIS)

    Gonçalves, Eduardo R.; Braz, Delson; Brandão, Luís Eduardo B.

    2017-01-01

    Nuclear technologies have been widely used in industrial plants to help solve process or design problems, or simply to obtain information about them. The Total Count technique for flow measurement has as its main advantages: being an absolute technique, since it is independent of readings from any device other than the one used directly to record the radioactive cloud, requiring only a single detector to provide the final result; independence from the internal volume of the transport duct, so it can be applied in the presence or absence of obstructions; no restriction as to the nature of the product or material being conveyed; and being a noninvasive technique that allows real-time diagnostics. To use the Total Count technique, knowledge of a geometric calibration factor is required. Called Factor F, it is obtained in the laboratory using an experimental apparatus that faithfully reproduces the geometry of the detection system and of the pipeline being analyzed, using the same radiotracer; therefore, its value is constant for each specific measuring system under survey. The experimental apparatus for obtaining the factor F consists of a 2″ PVC pipe, which simulates a transmission line, in which 500 ml of oil were deposited; using a pipette suitable for viscous fluids, aliquots of (50.00 ± 0.01) μl of radiotracer (198Au, photopeak energy of 411.8 keV) were added sequentially, and the data were analyzed with three distinct detection systems composed of 1″ x 1″ NaI scintillation detectors and a data acquisition system. (author)

  6. Experimental device for obtaining calibration factor for the total count technique

    Energy Technology Data Exchange (ETDEWEB)

    Gonçalves, Eduardo R.; Braz, Delson [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Brandão, Luís Eduardo B. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Divisao de Reatores

    2017-07-01

    Nuclear technologies have been widely used in industrial plants to help solve process or design problems, or simply to obtain information about them. The Total Count technique for flow measurement has as its main advantages: being an absolute technique, since it is independent of readings from any device other than the one used directly to record the radioactive cloud, requiring only a single detector to provide the final result; independence from the internal volume of the transport duct, so it can be applied in the presence or absence of obstructions; no restriction as to the nature of the product or material being conveyed; and being a noninvasive technique that allows real-time diagnostics. To use the Total Count technique, knowledge of a geometric calibration factor is required. Called Factor F, it is obtained in the laboratory using an experimental apparatus that faithfully reproduces the geometry of the detection system and of the pipeline being analyzed, using the same radiotracer; therefore, its value is constant for each specific measuring system under survey. The experimental apparatus for obtaining the factor F consists of a 2″ PVC pipe, which simulates a transmission line, in which 500 ml of oil were deposited; using a pipette suitable for viscous fluids, aliquots of (50.00 ± 0.01) μl of radiotracer (198Au, photopeak energy of 411.8 keV) were added sequentially, and the data were analyzed with three distinct detection systems composed of 1″ x 1″ NaI scintillation detectors and a data acquisition system. (author)
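
    For context, the Total Count method is commonly written as follows; the notation is generic and given as an illustration rather than as the authors' exact expressions.

    ```latex
    Q = \frac{A\,F}{N},
    \qquad
    F = \frac{R\,V}{a},
    ```

    where Q is the volumetric flow rate, A the injected tracer activity, N the total number of counts recorded as the tracer cloud passes the detector, and F the calibration factor, obtained in the laboratory as the count rate R produced by a known activity a homogeneously mixed in a volume V that reproduces the field geometry, which is exactly what the apparatus described above provides.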

  7. Decision making with epistemic uncertainty under safety constraints: An application to seismic design

    Science.gov (United States)

    Veneziano, D.; Agarwal, A.; Karaca, E.

    2009-01-01

    The problem of accounting for epistemic uncertainty in risk management decisions is conceptually straightforward, but is riddled with practical difficulties. Simple approximations are often used whereby future variations in epistemic uncertainty are ignored or worst-case scenarios are postulated. These strategies tend to produce sub-optimal decisions. We develop a general framework based on Bayesian decision theory and exemplify it for the case of seismic design of buildings. When temporal fluctuations of the epistemic uncertainties and regulatory safety constraints are included, the optimal level of seismic protection exceeds the normative level at the time of construction. Optimal Bayesian decisions do not depend on the aleatory or epistemic nature of the uncertainties, but only on the total (epistemic plus aleatory) uncertainty and how that total uncertainty varies randomly during the lifetime of the project. © 2009 Elsevier Ltd. All rights reserved.

  8. Uncertainty information in climate data records from Earth observation

    Science.gov (United States)

    Merchant, Christopher J.; Paul, Frank; Popp, Thomas; Ablain, Michael; Bontemps, Sophie; Defourny, Pierre; Hollmann, Rainer; Lavergne, Thomas; Laeng, Alexandra; de Leeuw, Gerrit; Mittaz, Jonathan; Poulsen, Caroline; Povey, Adam C.; Reuter, Max; Sathyendranath, Shubha; Sandven, Stein; Sofieva, Viktoria F.; Wagner, Wolfgang

    2017-07-01

    The question of how to derive and present uncertainty information in climate data records (CDRs) has received sustained attention within the European Space Agency Climate Change Initiative (CCI), a programme to generate CDRs addressing a range of essential climate variables (ECVs) from satellite data. Here, we review the nature, mathematics, practicalities, and communication of uncertainty information in CDRs from Earth observations. This review paper argues that CDRs derived from satellite-based Earth observation (EO) should include rigorous uncertainty information to support the application of the data in contexts such as policy, climate modelling, and numerical weather prediction reanalysis. Uncertainty, error, and quality are distinct concepts, and the case is made that CDR products should follow international metrological norms for presenting quantified uncertainty. As a baseline for good practice, total standard uncertainty should be quantified per datum in a CDR, meaning that uncertainty estimates should clearly discriminate more and less certain data. In this case, flags for data quality should not duplicate uncertainty information, but instead describe complementary information (such as the confidence in the uncertainty estimate provided or indicators of conditions violating the retrieval assumptions). The paper discusses the many sources of error in CDRs, noting that different errors may be correlated across a wide range of timescales and space scales. Error effects that contribute negligibly to the total uncertainty in a single-satellite measurement can be the dominant sources of uncertainty in a CDR on the large space scales and long timescales that are highly relevant for some climate applications. For this reason, identifying and characterizing the relevant sources of uncertainty for CDRs is particularly challenging. The characterization of uncertainty caused by a given error effect involves assessing the magnitude of the effect, the shape of the

  9. Covariance methodology applied to uncertainties in I-126 disintegration rate measurements

    International Nuclear Information System (INIS)

    Fonseca, K.A.; Koskinas, M.F.; Dias, M.S.

    1996-01-01

    The covariance methodology applied to uncertainties in 126I disintegration rate measurements is described. Two different coincidence systems were used due to the complex decay scheme of this radionuclide. The parameters involved in the determination of the disintegration rate in each experimental system present correlated components. In this case, the conventional statistical methods to determine the uncertainties (law of propagation) result in wrong values for the final uncertainty. Therefore, use of the methodology of the covariance matrix is necessary. The data from both systems were combined taking into account all possible correlations between the partial uncertainties. (orig.)
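
    A generic form of a covariance-aware combination of the two system results, given here as an illustration rather than as the authors' specific equations, is the covariance-weighted mean:

    ```latex
    \bar{y} = \frac{\mathbf{1}^{\mathsf{T}} V^{-1}\, \mathbf{y}}
                   {\mathbf{1}^{\mathsf{T}} V^{-1}\, \mathbf{1}},
    \qquad
    u^{2}(\bar{y}) = \frac{1}{\mathbf{1}^{\mathsf{T}} V^{-1}\, \mathbf{1}},
    ```

    where y collects the disintegration rates from the two coincidence systems and V is their full covariance matrix; dropping the off-diagonal element of V, as the conventional law of propagation effectively does here, is what produces the wrong final uncertainty noted in the abstract.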

  10. Predictive uncertainty reduction in coupled neutron-kinetics/thermal hydraulics modeling of the BWR-TT2 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Badea, Aurelian F., E-mail: aurelian.badea@kit.edu [Karlsruhe Institute of Technology, Vincenz-Prießnitz-Str. 3, 76131 Karlsruhe (Germany); Cacuci, Dan G. [Center for Nuclear Science and Energy/Dept. of ME, University of South Carolina, 300 Main Street, Columbia, SC 29208 (United States)

    2017-03-15

    Highlights: • BWR Turbine Trip 2 (BWR-TT2) benchmark. • Substantial (up to 50%) reduction of uncertainties in the predicted transient power. • 6660 uncertain model parameters were calibrated. - Abstract: By applying a comprehensive predictive modeling methodology, this work demonstrates a substantial (up to 50%) reduction of uncertainties in the predicted total transient power in the BWR Turbine Trip 2 (BWR-TT2) benchmark while calibrating the numerical simulation of this benchmark, comprising 6090 macroscopic cross sections, and 570 thermal-hydraulics parameters involved in modeling the phase-slip correlation, transient outlet pressure, and total mass flow. The BWR-TT2 benchmark is based on an experiment that was carried out in 1977 in the NPP Peach Bottom 2, involving the closure of the turbine stop valve which caused a pressure wave that propagated with attenuation into the reactor core. The condensation of the steam in the reactor core caused by the pressure increase led to a positive reactivity insertion. The subsequent rise of power was limited by the feedback and the insertion of the control rods. The BWR-TT2 benchmark was modeled with the three-dimensional reactor physics code system DYN3D, by coupling neutron kinetics with two-phase thermal-hydraulics. All 6660 DYN3D model parameters were calibrated by applying a predictive modeling methodology that combines experimental and computational information to produce optimally predicted best-estimate results with reduced predicted uncertainties. Simultaneously, the predictive modeling methodology yields optimally predicted values for the BWR total transient power while reducing significantly the accompanying predicted standard deviations.

  11. Predictive uncertainty reduction in coupled neutron-kinetics/thermal hydraulics modeling of the BWR-TT2 benchmark

    International Nuclear Information System (INIS)

    Badea, Aurelian F.; Cacuci, Dan G.

    2017-01-01

    Highlights: • BWR Turbine Trip 2 (BWR-TT2) benchmark. • Substantial (up to 50%) reduction of uncertainties in the predicted transient power. • 6660 uncertain model parameters were calibrated. - Abstract: By applying a comprehensive predictive modeling methodology, this work demonstrates a substantial (up to 50%) reduction of uncertainties in the predicted total transient power in the BWR Turbine Trip 2 (BWR-TT2) benchmark while calibrating the numerical simulation of this benchmark, comprising 6090 macroscopic cross sections, and 570 thermal-hydraulics parameters involved in modeling the phase-slip correlation, transient outlet pressure, and total mass flow. The BWR-TT2 benchmark is based on an experiment that was carried out in 1977 in the NPP Peach Bottom 2, involving the closure of the turbine stop valve which caused a pressure wave that propagated with attenuation into the reactor core. The condensation of the steam in the reactor core caused by the pressure increase led to a positive reactivity insertion. The subsequent rise of power was limited by the feedback and the insertion of the control rods. The BWR-TT2 benchmark was modeled with the three-dimensional reactor physics code system DYN3D, by coupling neutron kinetics with two-phase thermal-hydraulics. All 6660 DYN3D model parameters were calibrated by applying a predictive modeling methodology that combines experimental and computational information to produce optimally predicted best-estimate results with reduced predicted uncertainties. Simultaneously, the predictive modeling methodology yields optimally predicted values for the BWR total transient power while reducing significantly the accompanying predicted standard deviations.

  12. Evaluation of uncertainties in the calibration of radiation survey meter

    International Nuclear Information System (INIS)

    Potiens, M.P.A.; Santos, G.P.

    2006-01-01

In order to meet the requirements of ISO 17025, the quantification of the expanded uncertainties of experimental data in the calibration of survey meters must be carried out using well-defined concepts, like those expressed in the 'ISO Guide to the Expression of Uncertainty in Measurement'. The calibration procedure for gamma-ray survey meters involves two quantities whose uncertainties must be clearly known: the measurements of the instrument under calibration and the conventional true values of the quantity. Considering the continuous improvement of calibration methods and set-ups, it is necessary to evaluate the uncertainties involved in the procedures periodically. In this work it is shown how the measurement uncertainties of an individual calibration can be estimated and how they can be generalized to other radiation survey meters. (authors)
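
    As an illustration of the GUM concepts referred to above, the following sketch combines a few uncertainty components of a calibration factor in quadrature and expands the result with a coverage factor. The component names and values are hypothetical, not the authors' actual uncertainty budget.

```python
import math

# Hypothetical relative standard uncertainty components (in %) for a calibration
# factor N = (conventional true value) / (instrument reading).
components = {
    "reference air-kerma rate (certificate)": 1.5,  # Type B
    "source-detector distance positioning":   0.5,  # Type B
    "repeatability of instrument readings":   0.8,  # Type A
    "display resolution":                     0.3,  # Type B
}

# GUM: for a pure product/quotient model the relative standard uncertainties
# combine in quadrature (all relative sensitivity coefficients equal 1).
u_c = math.sqrt(sum(u ** 2 for u in components.values()))

k = 2.0  # coverage factor for ~95% coverage, assuming large effective degrees of freedom
U = k * u_c

print(f"combined standard uncertainty: {u_c:.2f} %")
print(f"expanded uncertainty (k = {k:g}): {U:.2f} %")
```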

  13. Local conditions and uncertainty bands for Semiscale Test S-02-9

    International Nuclear Information System (INIS)

    Varacalle, D.J. Jr.

    1979-01-01

Analysis was performed to derive local-conditions heat transfer parameters and their uncertainties using computer codes and experimentally derived boundary conditions for the Semiscale core for LOCA Test S-02-9. The calculations performed consisted of nominal code cases using best-estimate input parameters and cases where the specified input parameters were perturbed in accordance with the response surface method of uncertainty analysis. The output parameters of interest were those used in film boiling heat transfer correlations, including enthalpy, pressure, quality, and coolant flow rate. Large uncertainty deviations occurred during low core mass flow periods, where the relative flow uncertainties were large. Utilizing the derived local conditions and their associated uncertainties, a study was then made which showed that the uncertainty in the film boiling heat transfer coefficient varied between 5 and 250%.

  14. Uncertainty analysis of a one-dimensional constitutive model for shape memory alloy thermomechanical description

    DEFF Research Database (Denmark)

    Oliveira, Sergio A.; Savi, Marcelo A.; Santos, Ilmar F.

    2014-01-01

    The use of shape memory alloys (SMAs) in engineering applications has increased the interest of the accuracy analysis of their thermomechanical description. This work presents an uncertainty analysis related to experimental tensile tests conducted with shape memory alloy wires. Experimental data...... are compared with numerical simulations obtained from a constitutive model with internal constraints employed to describe the thermomechanical behavior of SMAs. The idea is to evaluate if the numerical simulations are within the uncertainty range of the experimental data. Parametric analysis is also developed...

  15. Application of extended statistical combination of uncertainties methodology for digital nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

In, Wang Ki; Uh, Keun Sun; Chul, Kim Heui [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)]

    1995-02-01

A technically more direct statistical combination of uncertainties methodology, the extended SCU (XSCU), was applied to statistically combine the uncertainties associated with the DNBR alarm setpoint and the DNBR trip setpoint of digital nuclear power plants. The modified SCU (MSCU) methodology is currently used as the USNRC-approved design methodology to perform the same function. In this report, the MSCU and XSCU methodologies were compared in terms of the total uncertainties and the net margins to the DNBR alarm and trip setpoints. The MSCU methodology resulted in small total penalties owing to significantly negative biases, which are quite large. The XSCU methodology, however, gave virtually unbiased total uncertainties. The net margins to the DNBR alarm and trip setpoints by the MSCU methodology agree with those by the XSCU methodology within statistical variations. (Author) 12 refs., 17 figs., 5 tabs.

  16. Sensitivity functions for uncertainty analysis: Sensitivity and uncertainty analysis of reactor performance parameters

    International Nuclear Information System (INIS)

    Greenspan, E.

    1982-01-01

This chapter presents the mathematical basis for sensitivity functions, discusses their physical meaning and the information they contain, and clarifies a number of issues concerning their application, including the definition of group sensitivities, the selection of sensitivity functions to be included in the analysis, and limitations of sensitivity theory. Examines the theoretical foundation; criticality reset sensitivities; group sensitivities and uncertainties; selection of sensitivities included in the analysis; and other uses and limitations of sensitivity functions. Gives the theoretical formulation of sensitivity functions pertaining to 'as-built' designs for performance parameters of the form of ratios of linear flux functionals (such as reaction-rate ratios), linear adjoint functionals, bilinear functions (such as reactivity worth ratios), and for reactor reactivity. Offers a consistent procedure for reducing energy-dependent or fine-group sensitivities and uncertainties to broad-group sensitivities and uncertainties. Provides illustrations of sensitivity functions as well as references to available compilations of such functions and of total sensitivities. Indicates limitations of sensitivity theory originating from the fact that this theory is based on first-order perturbation theory.

  17. Post-test calculation and uncertainty analysis of the experiment QUENCH-07 with the system code ATHLET-CD

    International Nuclear Information System (INIS)

    Austregesilo, Henrique; Bals, Christine; Trambauer, Klaus

    2007-01-01

In the frame of developmental assessment and code validation, a post-test calculation of the test QUENCH-07 was performed with ATHLET-CD. The system code ATHLET-CD is being developed for best-estimate simulation of accidents with core degradation and for evaluation of accident management procedures. It applies the detailed models of the thermal-hydraulic code ATHLET in an efficient coupling with dedicated models for core degradation and fission products behaviour. The first step of the work was the simulation of the test QUENCH-07 applying the modelling options recommended in the code User's Manual (reference calculation). The global results of this calculation showed a good agreement with the measured data. This calculation was complemented by a sensitivity analysis in order to investigate the influence of a combined variation of code input parameters on the simulation of the main phenomena observed experimentally. Results of this sensitivity analysis indicate that the main experimental measurements lay within the uncertainty range of the corresponding calculated values. Among the main contributors to the uncertainty of code results are the heat transfer coefficient due to forced convection to superheated steam-argon mixture, the thermal conductivity of the shroud isolation and the external heater rod resistance. Uncertainties on modelling of B4C oxidation do not affect significantly the total calculated hydrogen release rates.

  18. On economic resolution and uncertainty in hydrocarbon exploration assessment

    International Nuclear Information System (INIS)

    Lerche, I.

    1998-01-01

    When assessment of parameters of a decision tree for a hydrocarbon exploration project can lie within estimated ranges, it is shown that the ensemble average expected value has two sorts of uncertainties: one is due to the expected value of each realization of the decision tree being different than the average; the second is due to intrinsic variance of each decision tree. The total standard error of the average expected value combines both sorts. The use of additional statistical measures, such as standard error, volatility, and cumulative probability of making a profit, provide insight into the selection process leading to a more appropriate decision. In addition, the use of relative contributions and relative importance for the uncertainty measures guides one to a better determination of those parameters that dominantly influence the total ensemble uncertainty. In this way one can concentrate resources on efforts to minimize the uncertainty ranges of such dominant parameters. A numerical illustration is provided to indicate how such calculations can be performed simply with a hand calculator. (author)

  19. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that help reduce this uncertainty best. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: Uncertainty in HETT is relatively small for early times and increases with transit times. Uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution. Introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias. Hydraulic head observations alone cannot constrain the uncertainty of HETT; however, an estimate of hyporheic exchange flux proves to be more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model

  20. Limited entropic uncertainty as new principle of quantum physics

    International Nuclear Information System (INIS)

    Ion, D.B.; Ion, M.L.

    2001-01-01

The Uncertainty Principle (UP) of quantum mechanics discovered by Heisenberg, which constitutes the cornerstone of quantum physics, asserts that: there is an irreducible lower bound on the uncertainty in the result of a simultaneous measurement of non-commuting observables. In order to avoid this state-dependence many authors proposed to use the information entropy as a measure of the uncertainty instead of the above standard quantitative formulation of the Heisenberg uncertainty principle. In this paper the Principle of Limited Entropic Uncertainty (LEU-Principle), as a new principle in quantum physics, is proved. Then, consistent experimental tests of the LEU-principle, obtained by using the available 49 sets of pion-nucleus phase shifts, are presented for both the extensive (q=1) and nonextensive (q=0.5 and q=2.0) cases. Some results obtained by the application of the LEU-Principle to diffraction phenomena are also discussed. The main results and conclusions of our paper can be summarized as follows: (i) We introduced a new principle in quantum physics, namely the Principle of Limited Entropic Uncertainty (LEU-Principle). This new principle includes in a more general and exact form not only the old Heisenberg uncertainty principle but also introduces an upper limit on the magnitude of the uncertainty in quantum physics. The LEU-Principle asserts that: 'there is an irreducible lower bound as well as an upper bound on the uncertainty in the result of a simultaneous measurement of non-commuting observables for any extensive and nonextensive (q ≥ 0) quantum systems'; (ii) Two important concrete realizations of the LEU-Principle are explicitly obtained in this paper, namely: (a) the LEU-inequalities for the quantum scattering of spinless particles and (b) the LEU-inequalities for the diffraction on a single slit of width 2a. In particular, from our general results, in the limit y → +1 we recover in an exact form all the results previously reported. In our paper an

  1. Factorial design analysis on the solubility of total mercury in reduction process = Análise do processo experimental na solubilidade do mercúrio total em processo redutivo

    Directory of Open Access Journals (Sweden)

    Raquel Dalla Costa

    2007-07-01

Full Text Available Dental wastewater can contribute to the total daily mercury load on the environment. Factorial design of experiments is useful to analyze the factors that influence this solubility. The aim of the present study was to design experiments to examine the effects of the operational variables humic acid, temperature, pH and contact time that may affect the solubility of total mercury as dental amalgam residue in a reduction process. Based on the factorial design of experiments, the humic acid concentration was the most significant factor in this process, followed by the other factors. The parameters affecting the solubility of total mercury showed that when [HA], T and CT increase and pH decreases there is an important increase of the total mercury concentration in the process. For the tested conditions, the highest total mercury concentration was obtained using a humic acid concentration of 1.0 g L-1, temperature of 35 °C, pH of 4.0 and contact time of 10 days.
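
    A minimal sketch of how main effects are estimated from a two-level full factorial design such as the one described above. The coded design and the response values are invented for illustration; they are not the study's measurements.

```python
import itertools
import random

factors = ["HA", "T", "pH", "CT"]  # humic acid, temperature, pH, contact time

# Full 2^4 design in coded units (-1 = low level, +1 = high level).
design = list(itertools.product([-1, 1], repeat=len(factors)))

# Invented responses (total Hg concentration); in the study these would be the
# measured values for each experimental run.
random.seed(0)
response = [10 + 4 * ha + 1.5 * t - 2 * ph + 1.0 * ct + random.gauss(0, 0.5)
            for ha, t, ph, ct in design]

# Main effect of a factor = mean response at its high level minus mean at its low level.
for j, name in enumerate(factors):
    hi = [y for run, y in zip(design, response) if run[j] == +1]
    lo = [y for run, y in zip(design, response) if run[j] == -1]
    print(f"main effect of {name}: {sum(hi) / len(hi) - sum(lo) / len(lo):+.2f}")
```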

  2. Price Uncertainty in Linear Production Situations

    NARCIS (Netherlands)

    Suijs, J.P.M.

    1999-01-01

This paper analyzes linear production situations with price uncertainty, and shows that the corresponding stochastic linear production games are totally balanced. It also shows that investment funds, where investors pool their individual capital for joint investments in financial assets, fit into

  3. Uncertainty Quantification in High Throughput Screening ...

    Science.gov (United States)

    Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
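
    A small sketch of the bootstrap idea described above, applied to a two-parameter Hill-type concentration-response fit on synthetic data. The model form, the parameter names (top, AC50) and the data are assumptions for illustration; this is not the ToxCast fitting pipeline itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50):
    """Two-parameter Hill model with unit slope (illustrative form only)."""
    return top * conc / (ac50 + conc)

rng = np.random.default_rng(1)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
obs = hill(conc, top=80.0, ac50=5.0) + rng.normal(0.0, 5.0, size=conc.size)  # synthetic data

popt, _ = curve_fit(hill, conc, obs, p0=[100.0, 10.0])
resid = obs - hill(conc, *popt)

# Residual bootstrap: resample residuals, add them to the fitted curve, refit.
boot = []
for _ in range(1000):
    y_star = hill(conc, *popt) + rng.choice(resid, size=resid.size, replace=True)
    try:
        p_star, _ = curve_fit(hill, conc, y_star, p0=popt, maxfev=2000)
        boot.append(p_star)
    except RuntimeError:
        continue  # skip resamples where the fit does not converge

boot = np.array(boot)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"top : {popt[0]:6.1f}   95% CI [{lo[0]:6.1f}, {hi[0]:6.1f}]")
print(f"AC50: {popt[1]:6.1f}   95% CI [{lo[1]:6.1f}, {hi[1]:6.1f}]")
```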

  4. Subspace-based Inverse Uncertainty Quantification for Nuclear Data Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Khuwaileh, B.A., E-mail: bakhuwai@ncsu.edu; Abdel-Khalik, H.S.

    2015-01-15

Safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. An inverse problem can be defined and solved to assess the sources of uncertainty, and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. In this work a subspace-based algorithm for inverse sensitivity/uncertainty quantification (IS/UQ) has been developed to enable analysts to account for all sources of nuclear data uncertainties in support of target accuracy assessment-type analysis. An approximate analytical solution of the optimization problem is used to guide the search for the dominant uncertainty subspace. By limiting the search to a subspace, the degrees of freedom available for the optimization search are significantly reduced. A quarter PWR fuel assembly is modeled and the accuracy of the multiplication factor and the fission reaction rate are used as reactor attributes whose uncertainties are to be reduced. Numerical experiments are used to demonstrate the computational efficiency of the proposed algorithm. Our ongoing work is focusing on extending the proposed algorithm to account for various forms of feedback, e.g., thermal-hydraulics and depletion effects.

  5. Sonic Boom Pressure Signature Uncertainty Calculation and Propagation to Ground Noise

    Science.gov (United States)

    West, Thomas K., IV; Bretl, Katherine N.; Walker, Eric L.; Pinier, Jeremy T.

    2015-01-01

The objective of this study was to outline an approach for the quantification of uncertainty in sonic boom measurements and to investigate the effect of various near-field uncertainty representation approaches on ground noise predictions. These approaches included a symmetric versus asymmetric uncertainty band representation and a dispersion technique based on a partial sum Fourier series that allows for the inclusion of random error sources in the uncertainty. The near-field uncertainty was propagated to the ground level, along with additional uncertainty in the propagation modeling. Estimates of perceived loudness were obtained for the various types of uncertainty representation in the near-field. Analyses were performed on three configurations of interest to the sonic boom community: the SEEB-ALR, the 69° Delta Wing, and the LM 1021-01. Results showed that representation of the near-field uncertainty plays a key role in ground noise predictions. Using a Fourier series based dispersion approach can double the amount of uncertainty in the ground noise compared to a pure bias representation. Compared to previous computational fluid dynamics results, uncertainty in ground noise predictions was greater when considering the near-field experimental uncertainty.

  6. Uncertainties in scaling factors for ab initio vibrational zero-point energies

    Science.gov (United States)

    Irikura, Karl K.; Johnson, Russell D.; Kacker, Raghu N.; Kessel, Rüdiger

    2009-03-01

    Vibrational zero-point energies (ZPEs) determined from ab initio calculations are often scaled by empirical factors. An empirical scaling factor partially compensates for the effects arising from vibrational anharmonicity and incomplete treatment of electron correlation. These effects are not random but are systematic. We report scaling factors for 32 combinations of theory and basis set, intended for predicting ZPEs from computed harmonic frequencies. An empirical scaling factor carries uncertainty. We quantify and report, for the first time, the uncertainties associated with scaling factors for ZPE. The uncertainties are larger than generally acknowledged; the scaling factors have only two significant digits. For example, the scaling factor for B3LYP/6-31G(d) is 0.9757±0.0224 (standard uncertainty). The uncertainties in the scaling factors lead to corresponding uncertainties in predicted ZPEs. The proposed method for quantifying the uncertainties associated with scaling factors is based upon the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. We also present a new reference set of 60 diatomic and 15 polyatomic "experimental" ZPEs that includes estimated uncertainties.
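
    A short sketch of how the reported scaling-factor uncertainty can be propagated to a predicted ZPE, using the B3LYP/6-31G(d) value quoted above. The harmonic ZPE and its numerical uncertainty are placeholder values, not taken from the reference set.

```python
import math

# Scaling factor and its standard uncertainty for B3LYP/6-31G(d), as quoted above.
c, u_c = 0.9757, 0.0224

# Placeholder harmonic ZPE (kcal/mol) for some molecule and its (small) numerical
# uncertainty; both values are assumptions for illustration.
zpe_harm, u_zpe = 63.2, 0.05

zpe_scaled = c * zpe_harm
# First-order (GUM) propagation for a product of two uncorrelated quantities.
u_scaled = math.hypot(zpe_harm * u_c, c * u_zpe)

print(f"scaled ZPE = {zpe_scaled:.1f} +/- {u_scaled:.1f} kcal/mol (standard uncertainty)")
```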

  7. Analysis of uncertainties of thermal hydraulic calculations

    International Nuclear Information System (INIS)

    Macek, J.; Vavrin, J.

    2002-12-01

    In 1993-1997 it was proposed, within OECD projects, that a common program should be set up for uncertainty analysis by a probabilistic method based on a non-parametric statistical approach for system computer codes such as RELAP, ATHLET and CATHARE and that a method should be developed for statistical analysis of experimental databases for the preparation of the input deck and statistical analysis of the output calculation results. Software for such statistical analyses would then have to be processed as individual tools independent of the computer codes used for the thermal hydraulic analysis and programs for uncertainty analysis. In this context, a method for estimation of a thermal hydraulic calculation is outlined and selected methods of statistical analysis of uncertainties are described, including methods for prediction accuracy assessment based on the discrete Fourier transformation principle. (author)

  8. Uncertainty quantification an accelerated course with advanced applications in computational engineering

    CERN Document Server

    Soize, Christian

    2017-01-01

This book presents the fundamental notions and advanced mathematical tools in the stochastic modeling of uncertainties and their quantification for large-scale computational models in sciences and engineering. In particular, it focuses on parametric uncertainties, and non-parametric uncertainties with applications from the structural dynamics and vibroacoustics of complex mechanical systems, from micromechanics and multiscale mechanics of heterogeneous materials. Resulting from a course developed by the author, the book begins with a description of the fundamental mathematical tools of probability and statistics that are directly useful for uncertainty quantification. It proceeds with a well carried out description of some basic and advanced methods for constructing stochastic models of uncertainties, paying particular attention to the problem of calibrating and identifying a stochastic model of uncertainty when experimental data is available. This book is intended to be a graduate-level textbook for stu...

  9. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    International Nuclear Information System (INIS)

    Bess, John D.; Montierth, Leland; Köberl, Oliver

    2014-01-01

Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the 235U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments

  10. Facing uncertainty in ecosystem services-based resource management.

    Science.gov (United States)

    Grêt-Regamey, Adrienne; Brunner, Sibyl H; Altwegg, Jürg; Bebi, Peter

    2013-09-01

    The concept of ecosystem services is increasingly used as a support for natural resource management decisions. While the science for assessing ecosystem services is improving, appropriate methods to address uncertainties in a quantitative manner are missing. Ignoring parameter uncertainties, modeling uncertainties and uncertainties related to human-environment interactions can modify decisions and lead to overlooking important management possibilities. In this contribution, we present a new approach for mapping the uncertainties in the assessment of multiple ecosystem services. The spatially explicit risk approach links Bayesian networks to a Geographic Information System for forecasting the value of a bundle of ecosystem services and quantifies the uncertainties related to the outcomes in a spatially explicit manner. We demonstrate that mapping uncertainties in ecosystem services assessments provides key information for decision-makers seeking critical areas in the delivery of ecosystem services in a case study in the Swiss Alps. The results suggest that not only the total value of the bundle of ecosystem services is highly dependent on uncertainties, but the spatial pattern of the ecosystem services values changes substantially when considering uncertainties. This is particularly important for the long-term management of mountain forest ecosystems, which have long rotation stands and are highly sensitive to pressing climate and socio-economic changes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Evaluation of the theoretical uncertainties in the W → lν cross sections at the LHC

    International Nuclear Information System (INIS)

    Adam, Nadia E.; Halyo, Valerie; Zhu Wenhan; Yost, Scott A.

    2008-01-01

    We study the sources of systematic errors in the measurement of the W → lν cross-sections at the LHC. We consider the systematic errors in both the total cross-section and acceptance for anticipated experimental cuts. We include the best available analysis of QCD effects at NNLO in assessing the effect of higher order corrections and PDF and scale uncertainties on the theoretical acceptance. In addition, we evaluate the error due to missing NLO electroweak corrections and propose which MC generators and computational schemes should be implemented to best simulate the events.

  12. Sensitivity and uncertainty analysis of NET/ITER shielding blankets

    International Nuclear Information System (INIS)

    Hogenbirk, A.; Gruppelaar, H.; Verschuur, K.A.

    1990-09-01

    Results are presented of sensitivity and uncertainty calculations based upon the European fusion file (EFF-1). The effect of uncertainties in Fe, Cr and Ni cross sections on the nuclear heating in the coils of a NET/ITER shielding blanket has been studied. The analysis has been performed for the total cross section as well as partial cross sections. The correct expression for the sensitivity profile was used, including the gain term. The resulting uncertainty in the nuclear heating lies between 10 and 20 per cent. (author). 18 refs.; 2 figs.; 2 tabs
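
    The propagation of cross-section covariances through sensitivity profiles is conventionally done with the "sandwich rule". A minimal sketch with an invented three-group sensitivity vector and relative covariance matrix (not the EFF-1 data) is shown below.

```python
import numpy as np

# Invented relative sensitivity profile of the response (nuclear heating) to a
# three-group cross section: S_g = (dR/R) / (dsigma_g/sigma_g).
S = np.array([0.10, 0.35, 0.25])

# Invented relative covariance matrix of the multigroup cross section.
C = np.array([
    [0.0025, 0.0010, 0.0005],
    [0.0010, 0.0040, 0.0015],
    [0.0005, 0.0015, 0.0060],
])

rel_var = S @ C @ S          # "sandwich rule": relative variance of the response
rel_unc = np.sqrt(rel_var)   # relative standard uncertainty

print(f"relative uncertainty of the response: {100 * rel_unc:.1f} %")
```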

  13. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

Booker, Jane M [Los Alamos National Laboratory]; Langenbrunner, James R [Los Alamos National Laboratory]; Hemez, Francois M [Los Alamos National Laboratory]; Ross, Timothy J [UNM]

    2010-09-30

We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Imbedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10¹³-10¹⁴ neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  14. Measurements of inelastic, elastic and total pp cross-sections at the LHC with ATLAS

    CERN Document Server

    Trottier-McDonald, Michel; The ATLAS collaboration

    2015-01-01

First, a recent measurement of the inelastic cross section using the ATLAS detector with 63 μb⁻¹ of proton-proton collisions at √s = 13 TeV is presented. The measurement is performed using scintillators mounted in front of the forward calorimeters. A cross section of 65.2 ± 0.8 (exp.) ± 5.9 (lum.) mb is measured in the fiducial region M_X > 13 GeV, where M_X is the larger of the dissociation masses of the two proton systems in diffractive events. The experimental uncertainty is indicated by (exp.) and the luminosity uncertainty by (lum.). The full inelastic cross section is determined to be 73.1 ± 0.9 (exp.) ± 6.6 (lum.) ± 3.8 (extr.) mb, where (extr.) indicates model-dependent uncertainties on the extrapolation from the fiducial region. The measured value is about one standard deviation below most current theoretical predictions. Second, a measurement of the total pp cross section at the LHC at √s = 7 TeV is presented. In a special run with high-β* beam optics, a...

  15. Impatience and uncertainty : Experimental decisions predict adolescents' field behavior

    NARCIS (Netherlands)

    Sutter, M.; Kocher, M.G.; Rützler, D.; Trautmann, S.T.

    2013-01-01

    We study risk attitudes, ambiguity attitudes, and time preferences of 661 children and adolescents, aged ten to eighteen years, in an incentivized experiment and relate experimental choices to field behavior. Experimental measures of impatience are found to be significant predictors of

  16. New photon-nucleon dispersion relation for evaluating the Thomson limit using rising total cross sections

    International Nuclear Information System (INIS)

    Dean, N.W.

    1978-01-01

New data showing that the photon-nucleon total cross section increases with energy for ν ≥ 50 GeV invalidate earlier comparisons with dispersion relations. Parametrizations of the data are presented and used in a new formulation of the dispersion relations, in which an assumed asymptotic behavior avoids the need for subtraction. With this form the fitted amplitude can be compared directly with the Thomson limit. The experimental uncertainties are shown to have a significant effect upon such a comparison.

  17. Uncertainty of a hydrological climate change impact assessment - Is it really all about climate uncertainty?

    Science.gov (United States)

    Honti, Mark; Reichert, Peter; Scheidegger, Andreas; Stamm, Christian

    2013-04-01

Climate change impact assessments have become more and more popular in hydrology since the mid-1980s, with another boost after the publication of the IPCC AR4 report. During hundreds of impact studies a quasi-standard methodology emerged, which is mainly shaped by the growing public demand for predicting how water resources management or flood protection should change in the close future. The 'standard' workflow considers future climate under a specific IPCC emission scenario simulated by global circulation models (GCMs), possibly downscaled by a regional climate model (RCM) and/or a stochastic weather generator. The output from the climate models is typically corrected for bias before feeding it into a calibrated hydrological model, which is run on the past and future meteorological data to analyse the impacts of climate change on the hydrological indicators of interest. The impact predictions are as uncertain as any forecast that tries to describe the behaviour of an extremely complex system decades into the future. Future climate predictions are uncertain due to the scenario uncertainty and the GCM model uncertainty that is obvious on finer resolution than continental scale. Like in any hierarchical model system, uncertainty propagates through the descendant components. Downscaling increases uncertainty with the deficiencies of RCMs and/or weather generators. Bias correction adds a strong deterministic shift to the input data. Finally the predictive uncertainty of the hydrological model ends the cascade that leads to the total uncertainty of the hydrological impact assessment. There is an emerging consensus between many studies on the relative importance of the different uncertainty sources. The prevailing perception is that GCM uncertainty dominates hydrological impact studies. There are only a few studies which found that the predictive uncertainty of hydrological models can be in the same range or even larger than the climatic uncertainty. We carried out a

  18. Propagation of nuclear data uncertainties for fusion power measurements

    Directory of Open Access Journals (Sweden)

    Sjöstrand Henrik

    2017-01-01

    Full Text Available Neutron measurements using neutron activation systems are an essential part of the diagnostic system at large fusion machines such as JET and ITER. Nuclear data is used to infer the neutron yield. Consequently, high-quality nuclear data is essential for the proper determination of the neutron yield and fusion power. However, uncertainties due to nuclear data are not fully taken into account in uncertainty analysis for neutron yield calibrations using activation foils. This paper investigates the neutron yield uncertainty due to nuclear data using the so-called Total Monte Carlo Method. The work is performed using a detailed MCNP model of the JET fusion machine; the uncertainties due to the cross-sections and angular distributions in JET structural materials, as well as the activation cross-sections in the activation foils, are analysed. It is found that a significant contribution to the neutron yield uncertainty can come from uncertainties in the nuclear data.
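
    A schematic sketch of the Total Monte Carlo idea referred to above: repeat the calculation many times, each time with a different randomly sampled nuclear-data file, and take the spread of the results as the nuclear-data uncertainty. The transport run is replaced here by a placeholder function; the real workflow uses MCNP and random ENDF-format libraries.

```python
import random
import statistics

def run_transport(seed: int) -> float:
    """Placeholder for one transport run with a randomly sampled nuclear-data file.

    The 'calculated neutron yield' is faked as a draw around a nominal value;
    in the real TMC workflow each seed would select a different random
    ENDF-format library and a full MCNP calculation would be executed.
    """
    random.seed(seed)
    return random.gauss(1.00, 0.02)

results = [run_transport(seed) for seed in range(500)]

mean = statistics.fmean(results)
spread = statistics.stdev(results)
print(f"mean calculated yield: {mean:.4f}")
print(f"nuclear-data uncertainty (1 sigma TMC spread): {100 * spread / mean:.2f} %")
```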

  19. Uncertainty and conservatism in safety evaluations based on a BEPU approach

    International Nuclear Information System (INIS)

    Yamaguchi, A.; Mizokami, S.; Kudo, Y.; Hotta, A.

    2009-01-01

The Atomic Energy Society of Japan has published the 'Standard Method for Safety Evaluation using Best Estimate Code Based on Uncertainty and Scaling Analyses with Statistical Approach' to be applied to accidents and AOOs in the safety evaluation of LWRs. In this method, hereafter referred to as the AESJ-SSE (Statistical Safety Evaluation) method, uncertainties are identified and quantified, and the best estimate code is then combined with an evaluation of uncertainty propagation. Uncertainties are categorized into bias and variability. In general, bias is related to our state of knowledge on uncertainty objects (modeling, scaling, input data, etc.) while variability reflects stochastic features involved in these objects. Considering that many kinds of uncertainties in thermal-hydraulics models and experimental databases show variabilities that are strongly influenced by our state of knowledge, it seems reasonable that these variabilities are also related to the state of knowledge. The design basis events (DBEs) that are employed for licensing analyses form a main part of the given, or prior, conservatism. The regulatory acceptance criterion is also regarded as prior conservatism. In addition to these prior conservatisms, a certain amount of posterior conservatism is added while maintaining intimate relationships with the state of knowledge. In the AESJ-SSE method, this posterior conservatism can be incorporated into the safety evaluation in a combination of the following three ways: (1) broadening the ranges of variability relevant to uncertainty objects, (2) employing more disadvantageous biases relevant to uncertainty objects and (3) adding an extra bias to the safety evaluation results. Knowing the implemented quantitative bases of uncertainties and conservatism, the AESJ-SSE method provides a useful ground for rational decision-making. In order to seek 'the best estimation' as well as reasonably setting the analytical margin, a degree

  20. An Integrated Approach for Characterization of Uncertainty in Complex Best Estimate Safety Assessment

    International Nuclear Information System (INIS)

    Pourgol-Mohamad, Mohammad; Modarres, Mohammad; Mosleh, Ali

    2013-01-01

    This paper discusses an approach called Integrated Methodology for Thermal-Hydraulics Uncertainty Analysis (IMTHUA) to characterize and integrate a wide range of uncertainties associated with the best estimate models and complex system codes used for nuclear power plant safety analyses. Examples of applications include complex thermal hydraulic and fire analysis codes. In identifying and assessing uncertainties, the proposed methodology treats the complex code as a 'white box', thus explicitly treating internal sub-model uncertainties in addition to the uncertainties related to the inputs to the code. The methodology accounts for uncertainties related to experimental data used to develop such sub-models, and efficiently propagates all uncertainties during best estimate calculations. Uncertainties are formally analyzed and probabilistically treated using a Bayesian inference framework. This comprehensive approach presents the results in a form usable in most other safety analyses such as the probabilistic safety assessment. The code output results are further updated through additional Bayesian inference using any available experimental data, for example from thermal hydraulic integral test facilities. The approach includes provisions to account for uncertainties associated with user-specified options, for example for choices among alternative sub-models, or among several different correlations. Complex time-dependent best-estimate calculations are computationally intense. The paper presents approaches to minimize computational intensity during the uncertainty propagation. Finally, the paper will report effectiveness and practicality of the methodology with two applications to a complex thermal-hydraulics system code as well as a complex fire simulation code. In case of multiple alternative models, several techniques, including dynamic model switching, user-controlled model selection, and model mixing, are discussed. (authors)

  1. The use of error and uncertainty methods in the medical laboratory.

    Science.gov (United States)

    Oosterhuis, Wytze P; Bayat, Hassan; Armbruster, David; Coskun, Abdurrahman; Freeman, Kathleen P; Kallner, Anders; Koch, David; Mackenzie, Finlay; Migliarino, Gabriel; Orth, Matthias; Sandberg, Sverre; Sylte, Marit S; Westgard, Sten; Theodorsson, Elvar

    2018-01-26

    Error methods - compared with uncertainty methods - offer simpler, more intuitive and practical procedures for calculating measurement uncertainty and conducting quality assurance in laboratory medicine. However, uncertainty methods are preferred in other fields of science as reflected by the guide to the expression of uncertainty in measurement. When laboratory results are used for supporting medical diagnoses, the total uncertainty consists only partially of analytical variation. Biological variation, pre- and postanalytical variation all need to be included. Furthermore, all components of the measuring procedure need to be taken into account. Performance specifications for diagnostic tests should include the diagnostic uncertainty of the entire testing process. Uncertainty methods may be particularly useful for this purpose but have yet to show their strength in laboratory medicine. The purpose of this paper is to elucidate the pros and cons of error and uncertainty methods as groundwork for future consensus on their use in practical performance specifications. Error and uncertainty methods are complementary when evaluating measurement data.

  2. Uncertainty Quantification and Comparison of Weld Residual Stress Measurements and Predictions.

    Energy Technology Data Exchange (ETDEWEB)

Lewis, John R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2016-10-01

    In pressurized water reactors, the prevention, detection, and repair of cracks within dissimilar metal welds is essential to ensure proper plant functionality and safety. Weld residual stresses, which are difficult to model and cannot be directly measured, contribute to the formation and growth of cracks due to primary water stress corrosion cracking. Additionally, the uncertainty in weld residual stress measurements and modeling predictions is not well understood, further complicating the prediction of crack evolution. The purpose of this document is to develop methodology to quantify the uncertainty associated with weld residual stress that can be applied to modeling predictions and experimental measurements. Ultimately, the results can be used to assess the current state of uncertainty and to build confidence in both modeling and experimental procedures. The methodology consists of statistically modeling the variation in the weld residual stress profiles using functional data analysis techniques. Uncertainty is quantified using statistical bounds (e.g. confidence and tolerance bounds) constructed with a semi-parametric bootstrap procedure. Such bounds describe the range in which quantities of interest, such as means, are expected to lie as evidenced by the data. The methodology is extended to provide direct comparisons between experimental measurements and modeling predictions by constructing statistical confidence bounds for the average difference between the two quantities. The statistical bounds on the average difference can be used to assess the level of agreement between measurements and predictions. The methodology is applied to experimental measurements of residual stress obtained using two strain relief measurement methods and predictions from seven finite element models developed by different organizations during a round robin study.

  3. Tolerance for uncertainty in elderly people

    Directory of Open Access Journals (Sweden)

    KHRYSTYNA KACHMARYK

    2014-09-01

Full Text Available The aim of the study. The aim of the paper is to compare tolerance to uncertainty in two groups of elderly people: students of the University of the Third Age (UTA) and older people who are not enrolled but help to educate grandchildren. Attitude to uncertainty has been shown to influence the decision-making strategies of the elderly, which indicates the importance of such research. Methods. To achieve the objectives of the paper the following methods were used: 1) Personal change readiness survey (PCRS) adapted by Nickolay Bazhanov and Galina Bardiyer; 2) Tolerance Ambiguity Scale (TAS) adapted by Galina Soldatova; 3) Freiburg personality inventory (FPI) and 4) The questionnaire of self-relation by Vladimir Stolin and Sergej Panteleev. 40 socially involved elderly people were investigated according to the above methods, 20 from the UTA and 20 who do not study and served as a control group. Results. It was shown that the relation of tolerance to uncertainty in the group of students of the University of the Third Age differs substantially from that in the group of older people who do not study. The majority of students of the University of the Third Age show an inherently low tolerance for uncertainty, which is associated with an increased expression of personality traits and characteristics of self-relation. The group of the elderly who are not enrolled shows greater tolerance of uncertainty than the group of Third Age university students, focusing on social and trusting relationships to meet communication needs and on the ability to manage their own emotions and desires. Conclusions. The peculiarities of tolerance to uncertainty among Third Age university students were outlined. It was found that decision making in ambiguous situations concerning social interaction is well developed in the elderly who do not study. The students of the University of the Third Age have greater needs in

  4. Managing Measurement Uncertainty in Building Acoustics

    Directory of Open Access Journals (Sweden)

    Chiara Scrosati

    2015-12-01

Full Text Available In general, uncertainties should preferably be determined following the principles laid down in ISO/IEC Guide 98-3, the Guide to the expression of uncertainty in measurement (GUM:1995). According to current knowledge, it seems impossible to formulate these models for the different quantities in building acoustics. Therefore, the concepts of repeatability and reproducibility are necessary to determine the uncertainty of building acoustics measurements. This study shows the uncertainty of field measurements of a lightweight wall, a heavyweight floor, a façade with a single-glazing window and a façade with a double-glazing window that were analyzed by a Round Robin Test (RRT), conducted in a full-scale experimental building at ITC-CNR (Construction Technologies Institute of the National Research Council of Italy). The single number quantities and their uncertainties were evaluated in both the narrow and the enlarged frequency range, and it was shown that including or excluding the low frequencies leads to very significant differences, except in the case of the sound insulation of façades with a single-glazing window. The results obtained in these RRTs were compared with other results from the literature, which confirm the increase of the uncertainty of single number quantities due to the extension to low frequencies. Having stated the measurement uncertainty for a single measurement, in building acoustics it is also very important to deal with sampling for the purposes of classification of buildings or building units. Therefore, this study also shows an application of the sampling included in the Italian Standard on the acoustic classification of building units on a serial-type building consisting of 47 building units. It was found that the greatest variability is observed in the façade, and it depends on both the great variability of window typologies and on workmanship. Finally, it is suggested how to manage the uncertainty in building acoustics, both for one single

  5. PREMIUM - Benchmark on the quantification of the uncertainty of the physical models in the system thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Skorek, Tomasz; Crecy, Agnes de

    2013-01-01

PREMIUM (Post BEMUSE Reflood Models Input Uncertainty Methods) is an activity launched with the aim to push forward the methods of quantification of physical model uncertainties in thermal-hydraulic codes. It is endorsed by OECD/NEA/CSNI/WGAMA. The benchmark PREMIUM is addressed to all who apply uncertainty evaluation methods based on input uncertainty quantification and propagation. The benchmark is based on a selected case of uncertainty analysis application to the simulation of quench front propagation in an experimental test facility. Application to an experiment enables evaluation and confirmation of the quantified probability distribution functions on the basis of experimental data. The scope of the benchmark comprises a review of the existing methods, selection of potentially important uncertain input parameters, preliminary quantification of the ranges and distributions of the identified parameters, evaluation of the probability density function using experimental results of tests performed on the FEBA test facility and confirmation/validation of the performed quantification on the basis of a blind calculation of the Reflood 2-D PERICLES experiment. (authors)

  6. Uncertainty in predictions of forest carbon dynamics: separating driver error from model error.

    Science.gov (United States)

    Spadavecchia, L; Williams, M; Law, B E

    2011-07-01

We present an analysis of the relative magnitude and contribution of parameter and driver uncertainty to the confidence intervals on estimates of net carbon fluxes. Model parameters may be difficult or impractical to measure, while driver fields are rarely complete, with data gaps due to sensor failure and sparse observational networks. Parameters are generally derived through some optimization method, while driver fields may be interpolated from available data sources. For this study, we used data from a young ponderosa pine stand at Metolius, Central Oregon, and a simple daily model of coupled carbon and water fluxes (DALEC). An ensemble of acceptable parameterizations was generated using an ensemble Kalman filter and eddy covariance measurements of net C exchange. Geostatistical simulations generated an ensemble of meteorological driving variables for the site, consistent with the spatiotemporal autocorrelations inherent in the observational data from 13 local weather stations. Simulated meteorological data were propagated through the model to derive the uncertainty on the CO2 flux resulting from driver uncertainty typical of spatially extensive modeling studies. Furthermore, the model uncertainty was partitioned between temperature and precipitation. With at least one meteorological station within 25 km of the study site, driver uncertainty was relatively small (~10% of the total net flux), while parameterization uncertainty was larger, ~50% of the total net flux. The largest source of driver uncertainty was due to temperature (8% of the total flux). The combined effect of parameter and driver uncertainty was 57% of the total net flux. However, when the nearest meteorological station was > 100 km from the study site, uncertainty in net ecosystem exchange (NEE) predictions introduced by meteorological drivers increased by 88%. Precipitation estimates were a larger source of bias in NEE estimates than were temperature estimates, although the biases partly
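
    A toy sketch of the variance-partitioning idea used above: propagate the parameter ensemble and the driver ensemble separately through a flux model, holding the other at its nominal value, and compare the resulting spreads. The two-parameter model and both ensembles are invented for illustration and are not DALEC or the Metolius data.

```python
import numpy as np

rng = np.random.default_rng(42)

def flux_model(params, drivers):
    """Toy stand-in for a daily carbon-flux model (not DALEC itself)."""
    gpp_scale, q10 = params
    temp, precip = drivers
    return gpp_scale * precip - 0.5 * q10 ** (temp / 10.0)

# Invented ensembles: parameters as if from an ensemble Kalman filter, drivers
# as if from geostatistical simulation conditioned on nearby weather stations.
param_ens = np.column_stack([rng.normal(3.0, 0.4, 2000), rng.normal(2.0, 0.2, 2000)])
driver_ens = np.column_stack([rng.normal(15.0, 1.5, 2000), rng.normal(2.0, 0.3, 2000)])

param_nom, driver_nom = param_ens.mean(axis=0), driver_ens.mean(axis=0)

flux_param = np.array([flux_model(p, driver_nom) for p in param_ens])    # parameters only
flux_driver = np.array([flux_model(param_nom, d) for d in driver_ens])   # drivers only
flux_both = np.array([flux_model(p, d) for p, d in zip(param_ens, driver_ens)])

for label, f in (("parameters only", flux_param),
                 ("drivers only   ", flux_driver),
                 ("combined       ", flux_both)):
    print(f"{label}: std of predicted flux = {f.std():.3f}")
```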

  7. Optimization of FRAP uncertainty analysis option

    International Nuclear Information System (INIS)

    Peck, S.O.

    1979-10-01

    The automated uncertainty analysis option that has been incorporated in the FRAP codes (FRAP-T5 and FRAPCON-2) provides the user with a means of obtaining uncertainty bands on code predicted variables at user-selected times during a fuel pin analysis. These uncertainty bands are obtained by multiple single fuel pin analyses to generate data which can then be analyzed by second order statistical error propagation techniques. In this process, a considerable amount of data is generated and stored on tape. The user has certain choices to make regarding which independent variables are to be used in the analysis and what order of error propagation equation should be used in modeling the output response. To aid the user in these decisions, a computer program, ANALYZ, has been written and added to the uncertainty analysis option package. A variety of considerations involved in fitting response surface equations and certain pit-falls of which the user should be aware are discussed. An equation is derived expressing a residual as a function of a fitted model and an assumed true model. A variety of experimental design choices are discussed, including the advantages and disadvantages of each approach. Finally, a description of the subcodes which constitute program ANALYZ is provided
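
    A minimal sketch of the response-surface approach described above: fit a second-order polynomial to outputs from a small set of perturbed runs, then propagate assumed input uncertainties through the fitted surface instead of rerunning the code. The run outputs and input standard deviations are invented; this is not the ANALYZ program.

```python
import numpy as np

rng = np.random.default_rng(7)

# Outputs of hypothetical "code runs" at perturbed values of two input
# parameters x1, x2 (standardized units), mimicking a small designed set of runs.
X = np.array([[a, b] for a in (-1, 0, 1) for b in (-1, 0, 1)], dtype=float)
y = 5.0 + 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 0] ** 2 + 0.2 * X[:, 0] * X[:, 1]

# Second-order response surface:
# y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
design = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                          X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
b, *_ = np.linalg.lstsq(design, y, rcond=None)

# Propagate assumed input uncertainties (1 sigma, standardized units) through the
# fitted surface by sampling -- a cheap surrogate for rerunning the code.
x1 = rng.normal(0.0, 1.0, 100_000)
x2 = rng.normal(0.0, 0.5, 100_000)
y_hat = (b[0] + b[1] * x1 + b[2] * x2 +
         b[3] * x1 ** 2 + b[4] * x2 ** 2 + b[5] * x1 * x2)

print(f"predicted output: {y_hat.mean():.2f} +/- {y_hat.std():.2f} (1 sigma)")
```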

  8. Uncertainties in model-independent extractions of amplitudes from complete experiments

    International Nuclear Information System (INIS)

    Hoblit, S.; Sandorfi, A.M.; Kamano, H.; Lee, T.-S.H.

    2012-01-01

A new generation of over-complete experiments is underway, with the goal of performing a high precision extraction of pseudoscalar meson photo-production amplitudes. Such experimentally determined amplitudes can be used both as a test to validate models and as a starting point for an analytic continuation in the complex plane to search for poles. Of crucial importance for both is the level of uncertainty in the extracted multipoles. We have probed these uncertainties by analyses of pseudo-data for KΛ photoproduction, first for the set of 8 observables that have been published for the K+Λ channel and then for pseudo-data on a complete set of 16 observables with the uncertainties expected from analyses of ongoing CLAS experiments. In fitting multipoles, we have used a combined Monte Carlo sampling of the amplitude space, with gradient minimization, and have found a shallow χ² valley pitted with a large number of local minima. This results in bands of solutions that are experimentally indistinguishable. All ongoing experiments will measure observables with limited statistics. We have found a dependence on the particular random choice of values of Gaussian distributed pseudo-data, due to the presence of multiple local minima. This results in actual uncertainties for reconstructed multipoles that are often considerably larger than those returned by gradient minimization routines such as Minuit, which find a single local minimum. As intuitively expected, this additional level of uncertainty decreases as larger numbers of observables are included.

  9. Estimation of uncertainty in pKa values determined by potentiometric titration.

    Science.gov (United States)

    Koort, Eve; Herodes, Koit; Pihl, Viljar; Leito, Ivo

    2004-06-01

    A procedure is presented for estimation of uncertainty in measurement of the pK(a) of a weak acid by potentiometric titration. The procedure is based on the ISO GUM. The core of the procedure is a mathematical model that involves 40 input parameters. A novel approach is used for taking into account the purity of the acid, the impurities are not treated as inert compounds only, their possible acidic dissociation is also taken into account. Application to an example of practical pK(a) determination is presented. Altogether 67 different sources of uncertainty are identified and quantified within the example. The relative importance of different uncertainty sources is discussed. The most important source of uncertainty (with the experimental set-up of the example) is the uncertainty of pH measurement followed by the accuracy of the burette and the uncertainty of weighing. The procedure gives uncertainty separately for each point of the titration curve. The uncertainty depends on the amount of titrant added, being lowest in the central part of the titration curve. The possibilities of reducing the uncertainty and interpreting the drift of the pK(a) values obtained from the same curve are discussed.

  10. Uncertainty analysis of the SWEPP PAN assay system for glass waste (content codes 440, 441 and 442)

    International Nuclear Information System (INIS)

    Blackwood, L.G.; Harker, Y.D.; Meachum, T.R.; Yoon, W.Y.

    1996-10-01

    INEL is being used as a temporary storage facility for transuranic waste generated by the Nuclear Weapons program at the Rocky Flats Plant. Currently, there is a large effort in progress to prepare to ship this waste to WIPP. In order to meet the TRU Waste Characterization Quality Assurance Program Plan nondestructive assay compliance requirements and quality assurance objectives, it is necessary to determine the total uncertainty of the radioassay results produced by the Stored Waste Examination Pilot Plant (SWEPP) Passive-Active Neutron (PAN) radioassay system. This paper discusses a modified statistical sampling and verification approach used to determine the total uncertainty of SWEPP PAN measurements for glass waste (content codes 440, 441, and 442) contained in 208 liter drums. In the modified statistical sampling and verification approach, the total performance of the SWEPP PAN nondestructive assay system for specifically selected waste conditions is simulated using computer models. A set of 100 cases covering the known conditions exhibited in glass waste was compiled using a combined statistical sampling and factorial experimental design approach. Parameter values assigned in each simulation were derived from reviews of approximately 100 real-time radiography video tapes of RFP glass waste drums, results from previous SWEPP PAN measurements on glass waste drums, and shipping data from RFP where the glass waste was generated. The data in the 100 selected cases form the multi-parameter input to the simulation model. The reported plutonium masses from the simulation model are compared with corresponding input masses. From these comparisons, the bias and total uncertainty associated with SWEPP PAN measurements on glass waste drums are estimated. The validity of the simulation approach is verified by comparing simulated output against results from calibration measurements using known plutonium sources and two glass waste calibration drums.
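
    The core comparison step of such a simulation-based approach can be sketched very simply: given known input masses and the masses the simulated assay reports, the mean ratio gives the bias and its scatter gives the total uncertainty. The numbers and distributions below are invented and are not the SWEPP PAN model.

```python
# Hedged sketch (not the SWEPP PAN model): estimate measurement bias and
# total uncertainty from simulated reported masses versus known input masses.
import numpy as np

rng = np.random.default_rng(2)
input_mass = rng.uniform(1.0, 20.0, 100)                   # g Pu, assumed cases
reported_mass = input_mass * rng.normal(1.05, 0.12, 100)   # assumed 5 % bias, 12 % scatter

ratio = reported_mass / input_mass
bias = ratio.mean() - 1.0
total_rel_uncertainty = ratio.std(ddof=1)
print(f"estimated bias = {bias:+.1%}, total relative uncertainty = {total_rel_uncertainty:.1%}")
```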

  11. Uncertainty analysis of the SWEPP PAN assay system for glass waste (content codes 440, 441 and 442)

    Energy Technology Data Exchange (ETDEWEB)

    Blackwood, L.G.; Harker, Y.D.; Meachum, T.R.; Yoon, W.Y.

    1996-10-01

    INEL is being used as a temporary storage facility for transuranic waste generated by the Nuclear Weapons program at the Rocky Flats Plant. Currently, there is a large effort in progress to prepare to ship this waste to WIPP. In order to meet the TRU Waste Characterization Quality Assurance Program Plan nondestructive assay compliance requirements and quality assurance objectives, it is necessary to determine the total uncertainty of the radioassay results produced by the Stored Waste Examination Pilot Plant (SWEPP) Passive-Active Neutron (PAN) radioassay system. This paper discusses a modified statistical sampling and verification approach used to determine the total uncertainty of SWEPP PAN measurements for glass waste (content codes 440, 441, and 442) contained in 208 liter drums. In the modified statistical sampling and verification approach, the total performance of the SWEPP PAN nondestructive assay system for specifically selected waste conditions is simulated using computer models. A set of 100 cases covering the known conditions exhibited in glass waste was compiled using a combined statistical sampling and factorial experimental design approach. Parameter values assigned in each simulation were derived from reviews of approximately 100 real-time radiography video tapes of RFP glass waste drums, results from previous SWEPP PAN measurements on glass waste drums, and shipping data from RFP where the glass waste was generated. The data in the 100 selected cases form the multi-parameter input to the simulation model. The reported plutonium masses from the simulation model are compared with corresponding input masses. From these comparisons, the bias and total uncertainty associated with SWEPP PAN measurements on glass waste drums are estimated. The validity of the simulation approach is verified by comparing simulated output against results from calibration measurements using known plutonium sources and two glass waste calibration drums.

  12. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    International Nuclear Information System (INIS)

    Zhu, T.

    2015-01-01

    nuclear data uncertainty format. The first stage of NUSS development focuses on applying simple random sampling (SRS) algorithm for uncertainty quantification. The effect of combining multigroup and ACE format on the propagated nuclear data uncertainties is assessed. It is found that the number of energy groups has minor impact on the precision of κ_eff uncertainty as long as the group structure reflects the neutron flux spectrum. Successful verification of the NUSS tool for propagating nuclear data uncertainties through MCNPX and quantifying MCNPX output parameter uncertainties is obtained. The second stage of NUSS development is motivated by the need for an efficient sensitivity analysis methodology based on global sampling and coupled with MCNPX. For complex systems, the computing time for obtaining a breakdown of total uncertainty contributions by individual inputs becomes prohibitive when many MCNPX runs are required. The capability of determining simultaneously the total uncertainty and individual nuclear data uncertainty contributions is thus researched and implemented into the NUSS-RF tool. It is based on the Random Balance Design algorithm and is validated by three mathematical test cases for both linear and nonlinear models and correlated inputs. NUSS-RF is then applied to demonstrate the efficient decomposition of total uncertainty by individual nuclear data. However an attempt to decompose total uncertainty into individual contributions using the conventional S/U method shows different decomposition results when the inputs are correlated. The investigation and findings of this PhD work are valuable because of the introduction of global sensitivity analysis into the existing repertoire of nuclear data uncertainty quantification methods. The NUSS tool is expected to be useful for expanding the types of MCNPX-related applications, such as an upgrade to the current PSI criticality safety assessment methodology for Swiss application, for which nuclear data
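
    The simple-random-sampling idea behind such tools can be illustrated in a few lines: draw perturbation factors for a few cross sections from an assumed covariance matrix, run the transport calculation (here a cheap surrogate), and read the propagated uncertainty from the spread of the outputs. Everything below (covariance, sensitivities, group structure) is invented for the sketch and is not the NUSS tool.

```python
# Minimal sketch of simple-random-sampling nuclear data UQ (not the NUSS tool).
import numpy as np

rng = np.random.default_rng(3)

# Assumed 3-group relative covariance matrix for one reaction's cross section.
cov = np.array([[0.0004, 0.0002, 0.0000],
                [0.0002, 0.0009, 0.0003],
                [0.0000, 0.0003, 0.0016]])

def surrogate_keff(pert):
    # Stand-in for a full transport (e.g. MCNPX) run; sensitivities are invented.
    sens = np.array([0.10, 0.25, 0.05])
    return 1.0000 + sens @ pert

samples = rng.multivariate_normal(np.zeros(3), cov, size=500)
keff = np.array([surrogate_keff(p) for p in samples])
print(f"k_eff = {keff.mean():.5f} +/- {keff.std(ddof=1):.5f} (nuclear data only)")
```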

  13. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, T.

    2015-07-01

    nuclear data uncertainty format. The first stage of NUSS development focuses on applying simple random sampling (SRS) algorithm for uncertainty quantification. The effect of combining multigroup and ACE format on the propagated nuclear data uncertainties is assessed. It is found that the number of energy groups has minor impact on the precision of κ_eff uncertainty as long as the group structure reflects the neutron flux spectrum. Successful verification of the NUSS tool for propagating nuclear data uncertainties through MCNPX and quantifying MCNPX output parameter uncertainties is obtained. The second stage of NUSS development is motivated by the need for an efficient sensitivity analysis methodology based on global sampling and coupled with MCNPX. For complex systems, the computing time for obtaining a breakdown of total uncertainty contributions by individual inputs becomes prohibitive when many MCNPX runs are required. The capability of determining simultaneously the total uncertainty and individual nuclear data uncertainty contributions is thus researched and implemented into the NUSS-RF tool. It is based on the Random Balance Design algorithm and is validated by three mathematical test cases for both linear and nonlinear models and correlated inputs. NUSS-RF is then applied to demonstrate the efficient decomposition of total uncertainty by individual nuclear data. However an attempt to decompose total uncertainty into individual contributions using the conventional S/U method shows different decomposition results when the inputs are correlated. The investigation and findings of this PhD work are valuable because of the introduction of global sensitivity analysis into the existing repertoire of nuclear data uncertainty quantification methods. The NUSS tool is expected to be useful for expanding the types of MCNPX-related applications, such as an upgrade to the current PSI criticality safety assessment methodology for Swiss application, for which nuclear data

  14. Elevated lip liner positions improving stability in total hip arthroplasty. An experimental study.

    Directory of Open Access Journals (Sweden)

    Suleman Qurashi

    2018-01-01

    Background: The use of elevated lip polyethylene liners with the acetabular component is relatively common in Total Hip Arthroplasty (THA). Elevated lip liners increase stability of the THA by increasing the jump distance in one direction. However, the elevated lip, conversely, also reduces the primary arc in the opposite direction and leads to early impingement of the neck on the elevated lip, potentially causing instability. The aim of the present study is to determine the total range of motion of the femoral head component within the acetabular component with the elevated lip liner in different orientations within the acetabular cup. Methods: We introduce a novel experimental (ex-vivo) framework for studying the effects of lip liner orientation on the range of motion of the femoral component. For constant acetabular cup orientation, the elevated lip liner was positioned superiorly and inferiorly. The femoral component range of motion in the coronal, sagittal and axial planes was measured. To avoid any confounding influences of out-of-plane motion, the femoral component was constrained to move in the tested plane. Results: This experimental set-up introduces a rigorous framework in which to test the effects of elevated lip liner orientations on the range of motion of the femoral head component in abduction, adduction, flexion, extension and rotation. The movements of this experimental set-up are directly informative of a patient's maximum potential post-operative range of motion. Initial results show that an inferior placement of the elevated lip increases the effective superior lateral range of motion (abduction) for the femoral component, whilst the anatomy of the patient (i.e. their other leg) prevents the point of femoral component-acetabular lip impingement being reached (in adduction).

  15. Resolving Key Uncertainties in Subsurface Energy Recovery: One Role of In Situ Experimentation and URLs (Invited)

    Science.gov (United States)

    Elsworth, D.

    2013-12-01

    Significant uncertainties remain and influence the recovery of energy from the subsurface. These uncertainties include the fate and transport of long-lived radioactive wastes that result from the generation of nuclear power and have been the focus of an active network of international underground research laboratories dating back at least 35 years. However, other nascent carbon-free energy technologies including conventional and EGS geothermal methods, carbon-neutral methods such as carbon capture and sequestration and the utilization of reduced-carbon resources such as unconventional gas reservoirs offer significant challenges in their effective deployment. We illustrate the important role that in situ experiments may play in resolving behaviors at extended length- and time-scales for issues related to chemical-mechanical interactions. Significantly, these include the evolution of transport and mechanical characteristics of stress-sensitive fractured media and their influence on the long-term behavior of the system. Importantly, these interests typically relate to either creating reservoirs (hydroshearing in EGS reservoirs, artificial fractures in shales and coals) or maintaining seals at depth where the permeating fluids may include mixed brines, CO2, methane and other hydrocarbons. Critical questions relate to the interaction of these various fluid mixtures and compositions with the fractured substrate. Important needs are in understanding the roles of key processes (transmission, dissolution, precipitation, sorption and dynamic stressing) on the modification of effective stresses and their influence on the evolution of permeability, strength and induced seismicity, and on the resulting development of either wanted or unwanted fluid pathways. In situ experimentation has already contributed to addressing some crucial issues of these complex interactions at field scale. Important contributions are noted in understanding the fate and transport of long-lived wastes

  16. Development and optimization of neutron measurement methods by fission chamber on experimental reactors - management, treatment and reduction of uncertainties

    International Nuclear Information System (INIS)

    Blanc-De-Lanaute, N.

    2012-01-01

    The main objectives of this research thesis are the management and reduction of uncertainties associated with measurements performed by means of a fission-chamber type sensor. The author first recalls the role of experimental reactors in nuclear research, presents the various sensors used in nuclear detection (photographic film, scintillation sensor, gas ionization sensor, semiconducting sensor, other types of radiation sensors), and more particularly addresses neutron detection (activation sensor, gas-filled sensor). In a second part, the author gives an overview of the state of the art of neutron measurement by fission chamber in a mock-up reactor (signal formation, processing and post-processing, associated measurements and uncertainties, experience feedback from measurements by fission chamber on the Masurca and Minerve research reactors). In a third part, he reports the optimization of two intrinsic parameters of this sensor: the thickness of the fissile material deposit, and the pressure and nature of the filler gas. The fourth part addresses the improvement of the measurement electronics and of the post-processing methods used for result analysis. The fifth part deals with the optimization of spectrum index measurements by means of a fission chamber. The impact of each parameter is quantified. The results explain some inconsistencies noticed in measurements performed on the Minerve reactor in 2004, and allow the biases with respect to computed values to be reduced.

  17. Examples of measurement uncertainty evaluations in accordance with the revised GUM

    Science.gov (United States)

    Runje, B.; Horvatic, A.; Alar, V.; Medic, S.; Bosnjakovic, A.

    2016-11-01

    The paper presents examples of the evaluation of uncertainty components in accordance with the current and revised Guide to the expression of uncertainty in measurement (GUM). In accordance with the proposed revision of the GUM, a Bayesian approach was applied for both Type A and Type B evaluations. The law of propagation of uncertainty (LPU) and the law of propagation of distributions, applied through the Monte Carlo method (MCM), were used to evaluate associated standard uncertainties, expanded uncertainties and coverage intervals. Furthermore, the influence of a non-Gaussian dominant input quantity and an asymmetric distribution of the output quantity y on the evaluation of measurement uncertainty was analyzed. In the case when the coverage interval is not probabilistically symmetric, the coverage interval for the probability P is estimated from the experimental probability density function using the Monte Carlo method. Key highlights of the proposed revision of the GUM were analyzed through a set of examples.
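
    For a simple measurement model the two propagation routes mentioned above can be compared directly: the LPU combines first-order sensitivity coefficients with the input standard uncertainties, while the MCM propagates full distributions and reads the (possibly asymmetric) coverage interval from the output sample. The model y = a/b and all input values below are invented for illustration.

```python
# Hedged example comparing the law of propagation of uncertainty (LPU) with
# the Monte Carlo method (MCM) of GUM Supplement 1 for y = a / b.
import numpy as np

a, u_a = 10.0, 0.2   # assumed input estimate and standard uncertainty
b, u_b = 2.0, 0.1

# LPU: first-order sensitivity coefficients dy/da = 1/b, dy/db = -a/b^2.
y = a / b
u_lpu = np.sqrt((1.0 / b) ** 2 * u_a**2 + (a / b**2) ** 2 * u_b**2)

# MCM: propagate full distributions; the coverage interval comes straight
# from the percentiles of the output sample.
rng = np.random.default_rng(4)
ys = rng.normal(a, u_a, 1_000_000) / rng.normal(b, u_b, 1_000_000)
lo, hi = np.percentile(ys, [2.5, 97.5])

print(f"LPU: y = {y:.3f}, u = {u_lpu:.3f}")
print(f"MCM: y = {ys.mean():.3f}, u = {ys.std(ddof=1):.3f}, 95 % interval = [{lo:.3f}, {hi:.3f}]")
```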

  18. Status of uncertainty assessment in k0-NAA measurement. Anything still missing?

    International Nuclear Information System (INIS)

    Borut Smodis; Tinkara Bucar

    2014-01-01

    Several approaches to quantifying measurement uncertainty in k0-based neutron activation analysis (k0-NAA) are reviewed, comprising the original approach, the spreadsheet approach, the dedicated computer program involving analytical calculations, and the two k0-NAA programs available on the market. Two imperfections in the dedicated programs are identified, their impact is assessed, and possible improvements are presented for a concrete experimental situation. The status of uncertainty assessment in k0-NAA is discussed and steps for improvement are recommended. It is concluded that the present magnitude of measurement uncertainty should be further improved by making additional efforts to reduce the uncertainties of the relevant nuclear constants used. (author)

  19. The analysis and evaluation by the method of reduction of total photoneutron reaction cross sections in the range of giant dipole resonance

    International Nuclear Information System (INIS)

    Varlamov, V.V.; Efimkin, N.G.; Ishkhanov, B.S.; Sapunenko, V.V.; Stepanov, M.E.

    1993-01-01

    A method based on the reduction method is proposed for the evaluation of photonuclear reaction cross sections that have been obtained with significant systematic uncertainties (different apparatus functions, calibration and normalization uncertainties). The evaluation method consists of using the real apparatus function (photon spectrum) of each individual experiment to reduce the data to a representation generated by an apparatus function of better quality. The task is to find the most reasonably achievable monoenergetic representation (MRAMR) of the information about the cross section contained in the different experiment observables and to take into account the experimental uncertainties of the calibration and normalization procedures. The method was used to obtain evaluated total photoneutron (γ, xn) reaction cross sections, which are presented for 16O, 28Si, natCu, 141Pr, and 208Pb. 79 refs., 19 figs., 6 tabs

  20. Understanding uncertainty

    CERN Document Server

    Lindley, Dennis V

    2013-01-01

    Praise for the First Edition: "...a reference for everyone who is interested in knowing and handling uncertainty." -Journal of Applied Statistics. The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.

  1. Communicating uncertainties in earth sciences in view of user needs

    Science.gov (United States)

    de Vries, Wim; Kros, Hans; Heuvelink, Gerard

    2014-05-01

    Uncertainties are inevitable in all results obtained in the earth sciences, regardless of whether these are based on field observations, experimental research or predictive modelling. When informing decision and policy makers or stakeholders, it is important that these uncertainties are also communicated. In communicating results, it is important to apply a "Progressive Disclosure of Information (PDI)" from non-technical information through more specialised information, according to the user needs. Generalized information is generally directed towards non-scientific audiences and intended for policy advice. Decision makers have to be aware of the implications of the uncertainty associated with results, so that they can account for it in their decisions. Detailed information on the uncertainties is generally intended for scientific audiences to give insight into underlying approaches and results. When communicating uncertainties, it is important to distinguish between scientific results that allow presentation in terms of probabilistic measures of uncertainty and more intrinsic uncertainties and errors that cannot be expressed in mathematical terms. Examples of earth science research that allow probabilistic measures of uncertainty, involving sophisticated statistical methods, are uncertainties in spatial and/or temporal variations in results of:
    • Observations, such as soil properties measured at sampling locations. In this case, the interpolation uncertainty, caused by a lack of data collected in space, can be quantified by e.g. kriging standard deviation maps or animations of conditional simulations.
    • Experimental measurements, comparing impacts of treatments at different sites and/or under different conditions. In this case, an indication of the average and range in measured responses to treatments can be obtained from a meta-analysis, summarizing experimental findings between replicates and across studies, sites, ecosystems, etc.
    • Model predictions due to

  2. Bayesian Chance-Constrained Hydraulic Barrier Design under Geological Structure Uncertainty.

    Science.gov (United States)

    Chitsazan, Nima; Pham, Hai V; Tsai, Frank T-C

    2015-01-01

    The groundwater community has widely recognized geological structure uncertainty as a major source of model structure uncertainty. Previous studies in aquifer remediation design, however, rarely discuss the impact of geological structure uncertainty. This study combines chance-constrained (CC) programming with Bayesian model averaging (BMA) as a BMA-CC framework to assess the impact of geological structure uncertainty in remediation design. To pursue this goal, the BMA-CC method is compared with traditional CC programming that only considers model parameter uncertainty. The BMA-CC method is employed to design a hydraulic barrier to protect public supply wells of the Government St. pump station from salt water intrusion in the "1500-foot" sand and the "1700-foot" sand of the Baton Rouge area, southeastern Louisiana. To address geological structure uncertainty, three groundwater models based on three different hydrostratigraphic architectures are developed. The results show that using traditional CC programming overestimates design reliability. The results also show that at least five additional connector wells are needed to achieve more than 90% design reliability level. The total amount of injected water from the connector wells is higher than the total pumpage of the protected public supply wells. While reducing the injection rate can be achieved by reducing the reliability level, the study finds that the hydraulic barrier design to protect the Government St. pump station may not be economically attractive. © 2014, National Ground Water Association.

  3. The role of uncertainty analysis in dose reconstruction and risk assessment

    International Nuclear Information System (INIS)

    Hoffman, F.O.; Simon, S.L.; Thiessen, K.M.

    1996-01-01

    Dose reconstruction and risk assessment rely heavily on the use of mathematical models to extrapolate information beyond the realm of direct observation. Because models are merely approximations of real systems, their predictions are inherently uncertain. As a result, full disclosure of uncertainty in dose and risk estimates is essential to achieve scientific credibility and to build public trust. The need for formal analysis of uncertainty in model predictions was presented during the nineteenth annual meeting of the NCRP. At that time, quantitative uncertainty analysis was considered a relatively new and difficult subject practiced by only a few investigators. Today, uncertainty analysis has become synonymous with the assessment process itself. When an uncertainty analysis is used iteratively within the assessment process, it can guide experimental research to refine dose and risk estimates, deferring potentially high cost or high consequence decisions until uncertainty is either acceptable or irreducible. Uncertainty analysis is now mandated for all ongoing dose reconstruction projects within the United States, a fact that distinguishes dose reconstruction from other types of exposure and risk assessments. 64 refs., 6 figs., 1 tab

  4. Nuclear data uncertainties for local power densities in the Martin-Hoogenboom benchmark

    International Nuclear Information System (INIS)

    Van der Marck, S.C.; Rochman, D.A.

    2013-01-01

    The recently developed method of fast Total Monte Carlo to propagate nuclear data uncertainties was applied to the Martin-Hoogenboom benchmark. This Martin-Hoogenboom benchmark prescribes that one calculates local pin powers (of a light water cooled reactor) with a statistical uncertainty lower than 1% everywhere. Here we report, for the first time, an estimate of the nuclear data uncertainties for these local pin powers. For each of the more than 6 million local power tallies, the uncertainty due to nuclear data uncertainties was calculated, based on random variation of data for 235U, 238U, 239Pu and H in H2O thermal scattering. In the center of the core region, the nuclear data uncertainty is 0.9%. Towards the edges of the core, this uncertainty increases to roughly 3%. The nuclear data uncertainties have been shown to be larger than the statistical uncertainties that the benchmark prescribes.

  5. Centralizing Data Management with Considerations of Uncertainty and Information-Based Flexibility

    OpenAIRE

    Velu, Chander K.; Madnick, Stuart E.; Van Alstyne, Marshall W.

    2013-01-01

    This paper applies the theory of real options to analyze how the value of information-based flexibility should affect the decision to centralize or decentralize data management under low and high uncertainty. This study makes two main contributions. First, we show that in the presence of low uncertainty, centralization of data management decisions creates more total surplus for the firm as the similarity of business units increases. In contrast, in the presence of high uncertainty, centraliza...

  6. Account of the uncertainty factor in forecasting nuclear power development

    International Nuclear Information System (INIS)

    Chernavskij, S.Ya.

    1979-01-01

    Minimization of total discounted costs under linear constraints is commonly used in forecasting nuclear energy growth. This approach is considered inadequate due to the uncertainty of the exogenous variables of the model. A method of forecasting that takes into account the presence of uncertainty is elaborated. An example that demonstrates the expediency of the method and its advantage over the conventional approximation method used for taking uncertainty into account is given. In the framework of the example, the optimal strategy for nuclear energy growth over a period of 500 years is determined.

  7. Measurement uncertainty analysis techniques applied to PV performance measurements

    International Nuclear Information System (INIS)

    Wells, C.

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates that quality assurance and quality control measures have been accomplished; and defines valid data as data having known and documented paths of: origin, including theory; measurements; traceability to measurement standards; computations; and uncertainty analysis of results.

  8. Measurement uncertainty analysis techniques applied to PV performance measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wells, C.

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates that quality assurance and quality control measures have been accomplished; and defines valid data as data having known and documented paths of: origin, including theory; measurements; traceability to measurement standards; computations; and uncertainty analysis of results.

  9. Measurement uncertainty analysis techniques applied to PV performance measurements

    Energy Technology Data Exchange (ETDEWEB)

    Wells, C

    1992-10-01

    The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis toward its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; demonstrates that quality assurance and quality control measures have been accomplished; and defines valid data as data having known and documented paths of: origin, including theory; measurements; traceability to measurement standards; computations; and uncertainty analysis of results.

  10. An uncertainty inventory demonstration - a primary step in uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2009-01-01

    Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with words, 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.

  11. Maximizing probable oil field profit: uncertainties on well spacing

    International Nuclear Information System (INIS)

    MacKay, J.A.; Lerche, I.

    1997-01-01

    The influence of uncertainties in field development costs, well costs, lifting costs, selling price, discount factor, and oil field reserves is evaluated for their impact on assessing probable ranges of uncertainty on present day worth (PDW), oil field lifetime τ2/3, optimum number of wells (OWI), and the minimum (n-) and maximum (n+) number of wells to produce a PDW ≥ 0. The relative importance of different factors in contributing to the uncertainties in PDW, τ2/3, OWI, n- and n+ is also analyzed. Numerical illustrations indicate how the maximum PDW depends on the ranges of parameter values, drawn from probability distributions using Monte Carlo simulations. In addition, the procedure illustrates the relative importance of contributions of individual factors to the total uncertainty, so that one can assess where to place effort to improve ranges of uncertainty; while the volatility of each estimate allows one to determine when such effort is needful. (author)
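
    The Monte Carlo element of such an assessment can be sketched as follows: sample the economic inputs from assumed distributions, compute a simplified present-day-worth figure for each draw, and use correlations to see which input dominates the spread. All distributions and the PDW formula below are invented for illustration and are not the authors' model.

```python
# Illustrative Monte Carlo sketch (all distributions invented).
import numpy as np

rng = np.random.default_rng(5)
n = 20_000
reserves = rng.lognormal(mean=np.log(5e6), sigma=0.3, size=n)   # barrels
price = rng.normal(70.0, 10.0, n)                               # $/barrel
lifting = rng.normal(15.0, 3.0, n)                              # $/barrel
capex = rng.normal(1.2e8, 2.0e7, n)                             # $

pdw = reserves * (price - lifting) - capex                      # toy PDW measure

print(f"mean PDW = {pdw.mean():.3e}, std = {pdw.std(ddof=1):.3e}")
print(f"P(PDW >= 0) = {(pdw >= 0).mean():.2%}")
for name, v in [("reserves", reserves), ("price", price),
                ("lifting", lifting), ("capex", capex)]:
    print(f"  correlation with {name}: {np.corrcoef(v, pdw)[0, 1]:+.2f}")
```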

  12. Recognizing and responding to uncertainty: a grounded theory of nurses' uncertainty.

    Science.gov (United States)

    Cranley, Lisa A; Doran, Diane M; Tourangeau, Ann E; Kushniruk, Andre; Nagle, Lynn

    2012-08-01

    There has been little research to date exploring nurses' uncertainty in their practice. Understanding nurses' uncertainty is important because it has potential implications for how care is delivered. The purpose of this study is to develop a substantive theory to explain how staff nurses experience and respond to uncertainty in their practice. Between 2006 and 2008, a grounded theory study was conducted that included in-depth semi-structured interviews. Fourteen staff nurses working in adult medical-surgical intensive care units at two teaching hospitals in Ontario, Canada, participated in the study. The theory "recognizing and responding to uncertainty" characterizes the processes through which nurses' uncertainty manifested and how it was managed. Recognizing uncertainty involved the processes of assessing, reflecting, questioning, and/or being unable to predict aspects of the patient situation. Nurses' responses to uncertainty highlighted the cognitive-affective strategies used to manage uncertainty. Study findings highlight the importance of acknowledging uncertainty and having collegial support to manage uncertainty. The theory adds to our understanding of the processes involved in recognizing uncertainty, the strategies and outcomes of managing uncertainty, and the influencing factors. Tailored nursing education programs should be developed to assist nurses in developing skills in articulating and managing their uncertainty. Further research is needed to extend, test and refine the theory of recognizing and responding to uncertainty to develop strategies for managing uncertainty. This theory advances the nursing perspective of uncertainty in clinical practice. The theory is relevant to nurses who are faced with uncertainty and complex clinical decisions, to managers who support nurses in their clinical decision-making, and to researchers who investigate ways to improve decision-making and care delivery. ©2012 Sigma Theta Tau International.

  13. ESFR core optimization and uncertainty studies

    International Nuclear Information System (INIS)

    Rineiski, A.; Vezzoni, B.; Zhang, D.; Marchetti, M.; Gabrielli, F.; Maschek, W.; Chen, X.-N.; Buiron, L.; Krepel, J.; Sun, K.; Mikityuk, K.; Polidoro, F.; Rochman, D.; Koning, A.J.; DaCruz, D.F.; Tsige-Tamirat, H.; Sunderland, R.

    2015-01-01

    In the European Sodium Fast Reactor (ESFR) project supported by EURATOM in 2008-2012, a concept for a large 3600 MWth sodium-cooled fast reactor design was investigated. In particular, reference core designs with oxide and carbide fuel were optimized to improve their safety parameters. Uncertainties in these parameters were evaluated for the oxide option. Core modifications were performed first to reduce the sodium void reactivity effect. Introduction of a large sodium plenum with an absorber layer above the core and a lower axial fertile blanket improves the total sodium void effect appreciably, bringing it close to zero for a core with fresh fuel, in line with results obtained worldwide, while not substantially influencing other core physics parameters. Therefore an optimized configuration, CONF2, with a sodium plenum and a lower blanket was established first and used as a basis for further studies in view of deterioration of safety parameters during reactor operation. Further options to study were an inner fertile blanket, introduction of moderator pins, a smaller core height, and special designs for pins, such as 'empty' pins, and subassemblies. These special designs were proposed to facilitate melted fuel relocation in order to avoid core re-criticality under severe accident conditions. In the paper further CONF2 modifications are compared in terms of safety and fuel balance. They may bring further improvements in safety, but their accurate assessment requires additional studies, including transient analyses. Uncertainty studies were performed by employing a so-called Total Monte-Carlo method, for which a large number of nuclear data files is produced for single isotopes and then used in Monte-Carlo calculations. The uncertainties in the criticality, sodium void and Doppler effects, and the effective delayed neutron fraction due to uncertainties in basic nuclear data were assessed for an ESFR core. They prove the applicability of the available nuclear data for ESFR

  14. Responses to clinical uncertainty in Australian general practice trainees: a cross-sectional analysis.

    Science.gov (United States)

    Cooke, Georga; Tapley, Amanda; Holliday, Elizabeth; Morgan, Simon; Henderson, Kim; Ball, Jean; van Driel, Mieke; Spike, Neil; Kerr, Rohan; Magin, Parker

    2017-12-01

    Tolerance for ambiguity is essential for optimal learning and professional competence. General practice trainees must be, or must learn to be, adept at managing clinical uncertainty. However, few studies have examined associations of intolerance of uncertainty in this group. The aim of this study was to establish levels of tolerance of uncertainty in Australian general practice trainees and associations of uncertainty with demographic, educational and training practice factors. A cross-sectional analysis was performed on the Registrar Clinical Encounters in Training (ReCEnT) project, an ongoing multi-site cohort study. Scores on three of the four independent subscales of the Physicians' Reaction to Uncertainty (PRU) instrument were analysed as outcome variables in linear regression models with trainee and practice factors as independent variables. A total of 594 trainees contributed data on a total of 1209 occasions. Trainees in earlier training terms had higher scores for 'Anxiety due to uncertainty', 'Concern about bad outcomes' and 'Reluctance to disclose diagnosis/treatment uncertainty to patients'. Beyond this, findings suggest two distinct sets of associations regarding reaction to uncertainty. Firstly, affective aspects of uncertainty (the 'Anxiety' and 'Concern' subscales) were associated with female gender, less experience in hospital prior to commencing general practice training, and graduation overseas. Secondly, a maladaptive response to uncertainty (the 'Reluctance to disclose' subscale) was associated with urban practice, health qualifications prior to studying medicine, practice in an area of higher socio-economic status, and being Australian-trained. This study has established levels of three measures of trainees' responses to uncertainty and associations with these responses. The current findings suggest differing 'phenotypes' of trainees with high 'affective' responses to uncertainty and those reluctant to disclose uncertainty to patients. More

  15. Uncertainty Analysis of the NASA Glenn 8x6 Supersonic Wind Tunnel

    Science.gov (United States)

    Stephens, Julia; Hubbard, Erin; Walter, Joel; McElroy, Tyler

    2016-01-01

    This paper presents methods and results of a detailed measurement uncertainty analysis that was performed for the 8- by 6-foot Supersonic Wind Tunnel located at the NASA Glenn Research Center. The statistical methods and engineering judgments used to estimate elemental uncertainties are described. The Monte Carlo method of propagating uncertainty was selected to determine the uncertainty of calculated variables of interest. A detailed description of the Monte Carlo method as applied for this analysis is provided. Detailed uncertainty results for the uncertainty in average free stream Mach number as well as other variables of interest are provided. All results are presented as random (variation in observed values about a true value), systematic (potential offset between observed and true value), and total (random and systematic combined) uncertainty. The largest sources contributing to uncertainty are determined and potential improvement opportunities for the facility are investigated.
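
    The propagation step described above can be illustrated with a toy version of the derived-quantity calculation: Mach number from the isentropic total/static pressure ratio, with random and systematic components propagated separately by Monte Carlo and combined in quadrature. The nominal pressures and uncertainty magnitudes below are invented and are not facility data.

```python
# Hedged sketch of Monte Carlo propagation of elemental uncertainties into a
# derived quantity (Mach number from the total/static pressure ratio).
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
gamma = 1.4

def mach_uncertainty(u_pt, u_ps):
    # Draw total and static pressure about assumed nominal values (Pa).
    pt = 101_325.0 + rng.normal(0.0, u_pt, n)
    ps = 19_400.0 + rng.normal(0.0, u_ps, n)
    m = np.sqrt(2.0 / (gamma - 1.0) * ((pt / ps) ** ((gamma - 1.0) / gamma) - 1.0))
    return m.mean(), m.std(ddof=1)

m_nom, u_random = mach_uncertainty(150.0, 80.0)   # assumed random components (Pa)
_, u_system = mach_uncertainty(300.0, 150.0)      # assumed systematic components (Pa)
u_total = np.hypot(u_random, u_system)
print(f"Mach = {m_nom:.3f}: random {u_random:.4f}, systematic {u_system:.4f}, total {u_total:.4f}")
```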

  16. Sensitivity/uncertainty analysis for free-in-air tissue kerma due to initial radiation at Hiroshima and Nagasaki

    International Nuclear Information System (INIS)

    Lillie, R.A.; Broadhead, B.L.; Pace, J.V. III

    1988-01-01

    Uncertainty estimates and cross correlations by range/survivor have been calculated for the Hiroshima and Nagasaki free-in-air (FIA) tissue kerma obtained from two-dimensional air/ground transport calculations. The uncertainties due to modeling parameter and basic nuclear transport data uncertainties were calculated for 700-, 1000-, and 1500-m ground ranges. Only the FIA tissue kerma due to initial radiation was treated in the analysis; the uncertainties associated with terrain and building shielding and phantom attenuation were not considered in this study. Uncertainties of ∼20% were obtained for the prompt neutron and secondary gamma kerma and 30% for the prompt gamma kerma at both cities. The uncertainties on the total prompt kerma at Hiroshima and Nagasaki are ∼18 and 15%, respectively. The estimated uncertainties vary only slightly by ground range and are fairly highly correlated. The total prompt kerma uncertainties are dominated by the secondary gamma uncertainties, which in turn are dominated by the modeling parameter uncertainties, particularly those associated with the weapon yield and radiation sources.

  17. Photoneutron cross sections for 59Co. Systematic uncertainties of data from various experiments

    Energy Technology Data Exchange (ETDEWEB)

    Varlamov, V.V. [Lomonosov Moscow State University, Skobeltsyn Institute of Nuclear Physics, Moscow (Russian Federation); Davydov, A.I. [Lomonosov Moscow State University, Physics Faculty, Moscow (Russian Federation); Ishkhanov, B.S. [Lomonosov Moscow State University, Skobeltsyn Institute of Nuclear Physics, Moscow (Russian Federation); Lomonosov Moscow State University, Physics Faculty, Moscow (Russian Federation)

    2017-09-15

    Data on partial photoneutron reaction cross sections (γ, 1n), (γ, 2n), and (γ, 3n) for 59Co obtained in two experiments carried out at Livermore (USA) were analyzed. The sources of radiation in both experiments were the monoenergetic photon beams from the annihilation in flight of relativistic positrons. The total yield was sorted by the neutron multiplicity, taking into account the difference in the neutron energy spectra for different multiplicities. The two quoted studies differ in the method of determining the neutron multiplicity. Significant systematic disagreements between the results of the two experiments exist. They are considered to be caused by large systematic uncertainties in the partial cross sections, since they do not satisfy physical criteria for reliability of the data. To obtain reliable cross sections of partial and total photoneutron reactions, a new method combining experimental data and theoretical evaluation was used. It is based on the experimental neutron yield cross section, which is rather independent of neutron multiplicity, and the transitional neutron multiplicity functions of the combined photonucleon reaction model (CPNRM). The model transitional multiplicity functions were used for the decomposition of the neutron yield cross section into the contributions of partial reactions. The results of the new evaluation, which noticeably differ from the partial cross sections obtained in the two experimental studies, are discussed. (orig.)

  18. Improving the driver-automation interaction: an approach using automation uncertainty.

    Science.gov (United States)

    Beller, Johannes; Heesen, Matthias; Vollrath, Mark

    2013-12-01

    The aim of this study was to evaluate whether communicating automation uncertainty improves the driver-automation interaction. A false system understanding of infallibility may provoke automation misuse and can lead to severe consequences in case of automation failure. The presentation of automation uncertainty may prevent this false system understanding and, as was shown by previous studies, may have numerous benefits. Few studies, however, have clearly shown the potential of communicating uncertainty information in driving. The current study fills this gap. We conducted a driving simulator experiment, varying the presented uncertainty information between participants (no uncertainty information vs. uncertainty information) and the automation reliability (high vs. low) within participants. Participants interacted with a highly automated driving system while engaging in secondary tasks and were required to cooperate with the automation to drive safely. Quantile regressions and multilevel modeling showed that the presentation of uncertainty information increases the time to collision in the case of automation failure. Furthermore, the data indicated improved situation awareness and better knowledge of fallibility for the experimental group. Consequently, the automation with the uncertainty symbol received higher trust ratings and increased acceptance. The presentation of automation uncertainty through a symbol improves overall driver-automation cooperation. Most automated systems in driving could benefit from displaying reliability information. This display might improve the acceptance of fallible systems and further enhance driver-automation cooperation.

  19. A method for uncertainty quantification in the life prediction of gas turbine components

    Energy Technology Data Exchange (ETDEWEB)

    Lodeby, K.; Isaksson, O.; Jaervstraat, N. [Volvo Aero Corporation, Trolhaettan (Sweden)

    1998-12-31

    A failure in an aircraft jet engine can have severe consequences which cannot be accepted, and high requirements are therefore placed on engine reliability. Consequently, assessment of the reliability of life predictions used in design and maintenance is important. To assess the validity of the predicted life, a method to quantify the contribution to the total uncertainty in the life prediction from different uncertainty sources is developed. The method is a structured approach for uncertainty quantification that uses a generic description of the life prediction process. It is based on an approximate error propagation theory combined with a unified treatment of random and systematic errors. The result is an approximate statistical distribution for the predicted life. The method is applied to life predictions for three different jet engine components. The total uncertainty was of a reasonable order of magnitude, and a good qualitative picture of the distribution of the uncertainty contribution from the different sources was obtained. The relative importance of the uncertainty sources differs between the three components. It is also highly dependent on the methods and assumptions used in the life prediction. Advantages and disadvantages of this method are discussed. (orig.) 11 refs.
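
    The propagation idea can be sketched generically: each uncertainty source contributes a variance term (sensitivity × relative uncertainty, squared), and the terms are summed to give the total uncertainty together with a breakdown by source. The source names, sensitivities and uncertainty values below are assumptions for illustration, not the authors' engine-specific data.

```python
# Minimal sketch of approximate error propagation with a per-source breakdown.
import numpy as np

# Assumed relative standard uncertainties of the inputs.
sources = {"stress analysis": 0.05, "metal temperature": 0.08, "material data": 0.12}

# Assumed logarithmic sensitivities of predicted life to each input.
sensitivity = {"stress analysis": -4.0, "metal temperature": -3.0, "material data": 1.0}

contrib = {k: (sensitivity[k] * u) ** 2 for k, u in sources.items()}
total = np.sqrt(sum(contrib.values()))

for k, c in contrib.items():
    print(f"{k:18s} contributes {c / total**2:5.1%} of the variance")
print(f"total relative uncertainty on predicted life ~ {total:.1%}")
```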

  20. A study on the uncertainty based on Meteorological fields on Source-receptor Relationships for Total Nitrate in the Northeast Asia

    Science.gov (United States)

    Sunwoo, Y.; Park, J.; Kim, S.; Ma, Y.; Chang, I.

    2010-12-01

    Northeast Asia hosts more than one third of the world population, and pollutant emissions tend to increase rapidly because of economic growth and increasing energy-intensive consumption. For air pollutants in particular, the characteristics of emission and transport have become a national issue, not only in terms of environmental aspects but also of long-range transboundary transport. Meteorologically, the prevailing westerlies mean that air pollutants emitted from China can be transported to South Korea. Therefore, considering meteorological factors is important for understanding air pollution phenomena. In this study, we used MM5 (Fifth-Generation Mesoscale Model) and WRF (Weather Research and Forecasting Model) to produce the meteorological fields. We analyzed the features of the physics options in each model and the differences due to the characteristics of WRF and MM5. We aim to analyze the uncertainty of source-receptor relationships for total nitrate according to the meteorological fields in Northeast Asia. We produced each meteorological field using the same domain, the same initial and boundary conditions, and the most similar physics options. S-R relationships in terms of amount and fractional number for total nitrate (sum of N from HNO3, nitrate and PAN) were calculated by EMEP method 3.

  1. Two-dimensional cross-section and SED uncertainty analysis for the Fusion Engineering Device (FED)

    International Nuclear Information System (INIS)

    Embrechts, M.J.; Urban, W.T.; Dudziak, D.J.

    1982-01-01

    The theory of two-dimensional cross-section and secondary-energy-distribution (SED) sensitivity was implemented by developing a two-dimensional sensitivity and uncertainty analysis code, SENSIT-2D. Analyses of the Fusion Engineering Device (FED) conceptual inboard shield indicate that, although the calculated uncertainties in the 2-D model are of the same order of magnitude as those resulting from the 1-D model, there might be severe differences. The more complex the geometry, the more compulsory a 2-D analysis becomes. Specific results show that the uncertainty for the integral heating of the toroidal field (TF) coil for the FED is 114.6%. The main contributors to the cross-section uncertainty are chromium and iron. Contributions to the total uncertainty were smaller for nickel, copper, hydrogen and carbon. All analyses were performed with the Los Alamos 42-group cross-section library generated from ENDF/B-V data, and the COVFILS covariance matrix library. The large uncertainties due to chromium result mainly from large covariances for the chromium total and elastic scattering cross sections.
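
    The underlying arithmetic of such sensitivity/uncertainty analyses is the so-called sandwich rule: the relative variance of a response equals S C Sᵀ, where S is the relative sensitivity profile and C the relative covariance matrix of the cross section. The three-group numbers below are invented for illustration and are not the SENSIT-2D data.

```python
# Hedged sketch of the "sandwich rule" used in sensitivity/uncertainty codes.
import numpy as np

# Assumed relative sensitivities of a response (e.g. TF-coil heating) to one
# nuclide's cross section, per energy group.
S = np.array([0.30, 0.55, 0.15])

# Assumed relative covariance matrix for that cross section (3 groups).
C = np.array([[0.010, 0.004, 0.000],
              [0.004, 0.020, 0.006],
              [0.000, 0.006, 0.040]])

rel_var = S @ C @ S
print(f"relative uncertainty of the response: {np.sqrt(rel_var):.1%}")
```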

  2. Estimation of the measurement uncertainty in magnetic resonance velocimetry based on statistical models

    Energy Technology Data Exchange (ETDEWEB)

    Bruschewski, Martin; Schiffer, Heinz-Peter [Technische Universitaet Darmstadt, Institute of Gas Turbines and Aerospace Propulsion, Darmstadt (Germany); Freudenhammer, Daniel [Technische Universitaet Darmstadt, Institute of Fluid Mechanics and Aerodynamics, Center of Smart Interfaces, Darmstadt (Germany); Buchenberg, Waltraud B. [University Medical Center Freiburg, Medical Physics, Department of Radiology, Freiburg (Germany); Grundmann, Sven [University of Rostock, Institute of Fluid Mechanics, Rostock (Germany)

    2016-05-15

    Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75% is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented. (orig.)

  3. Estimation of the measurement uncertainty in magnetic resonance velocimetry based on statistical models

    Science.gov (United States)

    Bruschewski, Martin; Freudenhammer, Daniel; Buchenberg, Waltraud B.; Schiffer, Heinz-Peter; Grundmann, Sven

    2016-05-01

    Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75 % is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented.

  4. Experimental programme and analysis, ZENITH II, Core 4

    Energy Technology Data Exchange (ETDEWEB)

    Ingram, G.; Sanders, J. E.; Sherwin, J.

    1974-10-15

    The Phase 3 program of reactor physics experiments on the HTR (or Mk 3 GCR) lattices continued during the first half of 1974 with a study of a series of critical builds in Zenith II aimed at testing predictions of shut-down margins in the local criticality situations arising during power reactor refueling. The paper describes the experimental program and the subsequent theoretical analysis using methods developed in the United Kingdom for calculating low-enriched uranium HTR fuel systems. The importance of improving the accuracy of predictions of shut-down margins arises from the basic requirement that the core in its most reactive condition and with a specified number of absorbers removed from the array must remain sub-critical with a margin adequate to cover the total uncertainty of ±1 Nile (that is, 1% Δk). The major uncertainty is that in modelling the complex fuel/absorber configuration, and this is the aspect essentially covered in the Zenith II Core 4 studies.

  5. Total Measurement Uncertainty (TMU) for Nondestructive Assay of Transuranic (TRU) Waste at the WRAP Facility

    International Nuclear Information System (INIS)

    WILLS, C.E.

    1999-01-01

    This report examines the contributing factors to NDA measurement uncertainty at WRAP. The significance of each factor on the TMU is analyzed, and a final method is given for determining the TMU for NDA measurements at WRAP. As more data become available and WRAP gains operational experience, this report will be reviewed semi-annually and updated as necessary.

  6. Handling uncertainty and networked structure in robot control

    CERN Document Server

    Tamás, Levente

    2015-01-01

    This book focuses on two challenges posed in robot control by the increasing adoption of robots in the everyday human environment: uncertainty and networked communication. Part I of the book describes learning control to address environmental uncertainty. Part II discusses state estimation, active sensing, and complex scenario perception to tackle sensing uncertainty. Part III completes the book with control of networked robots and multi-robot teams. Each chapter features in-depth technical coverage and case studies highlighting the applicability of the techniques, with real robots or in simulation. Platforms include mobile ground, aerial, and underwater robots, as well as humanoid robots and robot arms. Source code and experimental data are available at http://extras.springer.com. The text gathers contributions from academic and industry experts, and offers a valuable resource for researchers or graduate students in robot control and perception. It also benefits researchers in related areas, such as computer...

  7. Calculation of the uncertainty of HP (10) evaluation for a thermoluminescent dosimetry system

    International Nuclear Information System (INIS)

    Ferreira, M.S.; Silva, E.R.; Mauricio, C.L.P.

    2016-01-01

    Full interpretation of a dose assessment can only be performed when the uncertainty of the measurement is known. The aim of this study is to calculate the uncertainty of the TL dosimetry system of the LDF/IRD for the evaluation of HP(10) for photons. This was done by experimental measurements, extraction of information from documents and calculation of uncertainties based on the ISO GUM. Energy and angular dependence is the most important contribution to the combined uncertainty u_c(y) and the expanded uncertainty U. For 10 mSv, u_c(y) = 1.99 mSv and U = 3.98 mSv were obtained for a 95% coverage interval. (author)
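
    For readers unfamiliar with the ISO GUM terminology, the expanded uncertainty reported above follows from the combined standard uncertainty through a coverage factor; with the customary k = 2 for an approximately 95% coverage interval, the quoted numbers are consistent:

        \[ U = k \, u_c(y) = 2 \times 1.99~\text{mSv} \approx 3.98~\text{mSv} \]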

  8. Analysis of uncertainties in a individualized method of estimation activity of 131I for hyperthyroid patient

    International Nuclear Information System (INIS)

    Orellana Salas, A.; Melgar Perez, J.; Arrocha Acevedo, J. F.

    2013-01-01

    Determining the activity to prescribe for hyperthyroid patients involves uncertainties that are difficult to assess. The uncertainties associated with the experimental design can exceed 20%, so they should be evaluated in order to customize the 131I therapy activity. (Author)

  9. Statistical analysis of correlated experimental data and neutron cross section evaluation

    International Nuclear Information System (INIS)

    Badikov, S.A.

    1998-01-01

    The technique for the evaluation of neutron cross sections on the basis of statistical analysis of correlated experimental data is presented. The most important stages of the evaluation, from the compilation of the correlation matrix of measurement uncertainties to the representation of the analysis results in the ENDF-6 format, are described in detail. Special attention is paid to the restrictions (positive definiteness), derived from physical reasoning, on the covariance matrix of the fitted parameter uncertainties generated by the least-squares method. The requirements on the source experimental data that ensure these restrictions are satisfied are formulated; in particular, the correlation matrices of the measurement uncertainties should also be positive definite. Variants of modelling positive-definite correlation matrices of measurement uncertainties in situations where their direct calculation from the experimental information is impossible are discussed. The technique described is used to create a new generation of evaluated dosimetric reaction cross sections for the first version of the Russian dosimetric file (including nontrivial covariance information)
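
    As an illustration of the least-squares step described above, the following sketch performs a generalized least-squares fit with a full, positive-definite measurement covariance matrix and propagates it to the parameter covariance. The linear model and the numbers are placeholders, not the evaluation discussed in the record.

        import numpy as np

        def gls_fit(design, y, cov_y):
            """Generalized least squares: theta = (A^T V^-1 A)^-1 A^T V^-1 y.

            design : (n, p) design matrix A of the linear model y = A theta
            y      : (n,)  vector of correlated measurements
            cov_y  : (n, n) positive-definite covariance matrix of the measurements
            Returns the fitted parameters and their covariance matrix.
            """
            v_inv = np.linalg.inv(cov_y)
            cov_theta = np.linalg.inv(design.T @ v_inv @ design)  # parameter covariance
            theta = cov_theta @ design.T @ v_inv @ y              # parameter estimates
            return theta, cov_theta

        # Illustrative use: straight-line model with correlated uncertainties.
        x = np.array([1.0, 2.0, 3.0, 4.0])
        A = np.column_stack([np.ones_like(x), x])
        y = np.array([2.1, 3.9, 6.2, 7.8])
        V = 0.04 * np.eye(4) + 0.01   # independent part plus a fully correlated part
        theta, cov_theta = gls_fit(A, y, V)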

  10. Quantification of margins and uncertainties: Alternative representations of epistemic uncertainty

    International Nuclear Information System (INIS)

    Helton, Jon C.; Johnson, Jay D.

    2011-01-01

    In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. A previous presentation, 'Quantification of Margins and Uncertainties: Conceptual and Computational Basis,' describes the basic ideas that underlie QMU and illustrates these ideas with two notional examples that employ probability for the representation of aleatory and epistemic uncertainty. The current presentation introduces and illustrates the use of interval analysis, possibility theory and evidence theory as alternatives to the use of probability theory for the representation of epistemic uncertainty in QMU-type analyses. The following topics are considered: the mathematical structure of alternative representations of uncertainty, alternative representations of epistemic uncertainty in QMU analyses involving only epistemic uncertainty, and alternative representations of epistemic uncertainty in QMU analyses involving a separation of aleatory and epistemic uncertainty. Analyses involving interval analysis, possibility theory and evidence theory are illustrated with the same two notional examples used in the presentation indicated above to illustrate the use of probability to represent aleatory and epistemic uncertainty in QMU analyses.

  11. Uncertainty analysis guide

    International Nuclear Information System (INIS)

    Andres, T.H.

    2002-05-01

    This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
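
    A minimal sketch of the Monte Carlo option mentioned above for propagating random input uncertainties through a model; the model function and the input distributions are placeholders chosen purely for illustration.

        import numpy as np

        rng = np.random.default_rng(seed=1)

        def model(k, area, delta_t):
            # Placeholder model output, e.g. a heat flow q = k * area * delta_t.
            return k * area * delta_t

        n = 100_000
        # Sample each uncertain input from its assumed distribution.
        k = rng.normal(loc=0.60, scale=0.03, size=n)        # conductance
        area = rng.normal(loc=2.0, scale=0.05, size=n)      # area
        delta_t = rng.uniform(low=9.0, high=11.0, size=n)   # temperature difference

        q = model(k, area, delta_t)
        print(f"mean = {q.mean():.3f}, standard uncertainty = {q.std(ddof=1):.3f}")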

  12. Uncertainty analysis guide

    Energy Technology Data Exchange (ETDEWEB)

    Andres, T.H

    2002-05-01

    This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)

  13. Universal trend for heavy-ion total reaction cross sections at energies above the Coulomb barrier

    International Nuclear Information System (INIS)

    Tavares, O.A.P.; Medeiros, E.L.; Morcelle, V.

    2010-06-01

    Heavy-ion total reaction cross section measurements for more than one thousand one hundred reaction cases, covering 61 target nuclei in the range 6Li-238U and 158 projectile nuclei from 2H up to 84Kr (mostly exotic ones), have been analysed in a systematic way by using an empirical, three-parameter formula which is applicable to cases where the projectile kinetic energy is above the Coulomb barrier. The analysis has shown that the average total nuclear binding energy per nucleon of the interacting nuclei and their radii are the chief quantities which describe the cross section patterns. A great number of the cross section data (87%) have been quite satisfactorily reproduced by the proposed formula; therefore, total reaction cross section predictions for new, not yet experimentally investigated reaction cases can be obtained within an uncertainty of 25 percent or much less (author)

  14. Coupled code analysis of uncertainty and sensitivity of Kalinin-3 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Pasichnyk, Ihor; Zwermann, Winfried; Velkov, Kiril [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Garching (Germany); Nikonov, Sergey [VNIIAES, Moscow (Russian Federation)

    2016-09-15

    An uncertainty and sensitivity analysis is performed for the OECD/NEA coolant transient benchmark (K-3) on measured data at Kalinin-3 Nuclear Power Plant (NPP). A switch-off of one main coolant pump (MCP) at nominal reactor power is calculated using the coupled thermohydraulic and neutron-kinetic code ATHLET-PARCS. The objectives are to study the uncertainty of the total reactor power and to identify the main sources of reactor power uncertainty. The GRS uncertainty and sensitivity software package XSUSA is applied to propagate uncertainties in nuclear data libraries to the full core coupled transient calculations. A set of the most important thermal-hydraulic parameters of the primary circuit is identified, and a total of 23 thermohydraulic parameters are statistically varied using the GRS code SUSA. The ATHLET model also contains a balance-of-plant (BOP) model which is simulated using the ATHLET GCSM module. In particular, the operation of the main steam generator regulators is modelled in detail. A set of 200 varied coupled ATHLET-PARCS calculations is analyzed. The results obtained show a clustering effect in the behavior of global reactor parameters. It is found that the GCSM system together with the varied input parameters strongly influences the overall nuclear power plant behavior and can even lead to a new scenario. Possible reasons for the clustering effect are discussed in the paper. This work is a step forward in establishing a 'best-estimate calculations in combination with performing uncertainty analysis' methodology for coupled full core calculations.

  15. Citizen Candidates Under Uncertainty

    OpenAIRE

    Eguia, Jon X.

    2005-01-01

    In this paper we make two contributions to the growing literature on "citizen-candidate" models of representative democracy. First, we add uncertainty about the total vote count. We show that in a society with a large electorate, where the outcome of the election is uncertain and where winning candidates receive a large reward from holding office, there will be a two-candidate equilibrium and no equilibria with a single candidate. Second, we introduce a new concept of equilibrium, which we te...

  16. The Method of Manufactured Universes for validating uncertainty quantification methods

    KAUST Repository

    Stripling, H.F.

    2011-09-01

    The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which experimental data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented in this paper manufactures a particle-transport universe, models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new experiments within the manufactured reality. The results of this preliminary study indicate that, even in a simple problem, the improper application of a specific UQ method or unrealized effects of a modeling assumption may produce inaccurate predictions. We conclude that the validation framework presented in this paper is a powerful and flexible tool for the investigation and understanding of UQ methodologies. © 2011 Elsevier Ltd. All rights reserved.

  17. On the EU approach for DEMO architecture exploration and dealing with uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, M., E-mail: matti.coleman@euro-fusion.org [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); CCFE Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Maviglia, F.; Bachmann, C. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); Anthony, J. [CCFE Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Federici, G. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); Shannon, M. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); CCFE Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Wenninger, R. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); Max-Planck-Institut für Plasmaphysik, 85748 Garching (Germany)

    2016-11-01

    Highlights: • The issue of epistemic uncertainties in the DEMO design basis is described. • An approach to tackle uncertainty by investigating plant architectures is proposed. • The first wall heat load uncertainty is addressed following the proposed approach. - Abstract: One of the difficulties inherent in designing a future fusion reactor is dealing with uncertainty. As the major step between ITER and the commercial exploitation of nuclear fusion energy, DEMO will have to address many challenges – the natures of which are still not fully known. Unlike fission reactors, fusion reactors suffer from the intrinsic complexity of the tokamak (numerous interdependent system parameters) and from the dependence of plasma physics on scale – prohibiting design exploration founded on incremental progression and small-scale experimentation. For DEMO, this means that significant technical uncertainties will exist for some time to come, and a systems engineering design exploration approach must be developed to explore the reactor architecture when faced with these uncertainties. Important uncertainties in the context of fusion reactor design are discussed and a strategy for dealing with these is presented, treating the uncertainty in the first wall loads as an example.

  18. A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Campos, E [Argonne National Lab. (ANL), Argonne, IL (United States); Sisterson, Douglas [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-12-01

    The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. The ARM Facility currently provides data and supporting metadata (information about the data or data quality) to its users through a number of sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, the Facility relies on instrument mentors and the ARM Data Quality Office (DQO) to ensure, assess, and report measurement quality. Therefore, an easily accessible, well-articulated estimate of ARM measurement uncertainty is needed. Note that some of the instrument observations require mathematical algorithms (retrievals) to convert a measured engineering variable into a useful geophysical measurement. While those types of retrieval measurements are identified, this study does not address particular methods for retrieval uncertainty. In addition, the ARM Facility provides engineered data products, or value-added products (VAPs), based on multiple instrument measurements. This study does not include uncertainty estimates for those data products. We propose here that a total measurement uncertainty should be calculated as a function of the instrument uncertainty (calibration factors), the field uncertainty (environmental factors), and the retrieval uncertainty (algorithm factors). The study will not expand on methods for computing these uncertainties. Instead, it will focus on the practical identification, characterization, and inventory of the measurement uncertainties already available in the ARM community through the ARM instrument mentors and their ARM instrument handbooks. As a result, this study will address the first steps towards reporting ARM measurement uncertainty.
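
    One common way to form such a total from independent components, consistent with the GUM-style treatment used elsewhere in these records, is the root-sum-of-squares combination sketched below. The report itself does not prescribe the combination method, so this relation is an assumption added for illustration only:

        \[ u_{\mathrm{total}} = \sqrt{u_{\mathrm{instrument}}^{2} + u_{\mathrm{field}}^{2} + u_{\mathrm{retrieval}}^{2}} \]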

  19. Relationships for Cost and Uncertainty of Decision Trees

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2013-01-01

    This chapter is devoted to the design of new tools for the study of decision trees. These tools are based on the dynamic programming approach and need the consideration of subtables of the initial decision table. So this approach is applicable only to relatively small decision tables. The considered tools allow us to compute: 1. The minimum cost of an approximate decision tree for a given uncertainty value and a cost function. 2. The minimum number of nodes in an exact decision tree whose depth is at most a given value. For the first tool we considered various cost functions such as: depth and average depth of a decision tree and number of nodes (and number of terminal and nonterminal nodes) of a decision tree. The uncertainty of a decision table is equal to the number of unordered pairs of rows with different decisions. The uncertainty of an approximate decision tree is equal to the maximum uncertainty of a subtable corresponding to a terminal node of the tree. In addition to the algorithms for such tools we also present experimental results applied to various datasets acquired from UCI ML Repository [4]. © Springer-Verlag Berlin Heidelberg 2013.
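
    A small sketch of the uncertainty measure described above, counting unordered pairs of rows with different decisions; representing the (sub)table simply as a sequence of decision labels is an assumption made for illustration.

        from collections import Counter

        def table_uncertainty(decisions):
            """Number of unordered pairs of rows with different decisions.

            decisions: sequence with one decision label per row of the (sub)table.
            """
            n = len(decisions)
            total_pairs = n * (n - 1) // 2
            same_pairs = sum(c * (c - 1) // 2 for c in Counter(decisions).values())
            return total_pairs - same_pairs

        # Example: four rows with decisions [1, 1, 2, 3] give 5 differing pairs.
        print(table_uncertainty([1, 1, 2, 3]))  # 5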

  20. Uncertainties in the estimation of specific absorption rate during radiofrequency alternating magnetic field induced non-adiabatic heating of ferrofluids

    Science.gov (United States)

    Lahiri, B. B.; Ranoo, Surojit; Philip, John

    2017-11-01

    Magnetic fluid hyperthermia (MFH) is becoming a viable cancer treatment methodology where the alternating magnetic field induced heating of magnetic fluid is utilized for ablating the cancerous cells or making them more susceptible to the conventional treatments. The heating efficiency in MFH is quantified in terms of specific absorption rate (SAR), which is defined as the heating power generated per unit mass. In the majority of experimental studies, SAR is evaluated from the temperature rise curves, obtained under non-adiabatic experimental conditions, which is prone to various thermodynamic uncertainties. A proper understanding of the experimental uncertainties and their remedies is a prerequisite for obtaining accurate and reproducible SAR. Here, we study the thermodynamic uncertainties associated with peripheral heating, delayed heating, heat loss from the sample and spatial variation in the temperature profile within the sample. Using first order approximations, an adiabatic reconstruction protocol for the measured temperature rise curves is developed for SAR estimation, which is found to be in good agreement with those obtained from the computationally intense slope corrected method. Our experimental findings clearly show that the peripheral and delayed heating are due to radiation heat transfer from the heating coils and slower response time of the sensor, respectively. Our results suggest that the peripheral heating is linearly proportional to the sample area to volume ratio and coil temperature. It is also observed that peripheral heating decreases in the presence of non-magnetic insulating shielding. The delayed heating is found to contribute up to ~25% uncertainty in SAR values. As the SAR values are very sensitive to the initial slope determination method, explicit mention of the range of linear regression analysis is appropriate to reproduce the results. The effect of sample volume to area ratio on linear heat loss rate is systematically studied and the
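
    For orientation, the initial-slope estimate of SAR referred to above is commonly written in the generic form below, with c_p the specific heat of the sample, m_s the total sample mass, m_m the mass of magnetic material, and the derivative taken as the initial slope of the temperature rise curve; the symbols are generic, not those used in the paper.

        \[ \mathrm{SAR} = \frac{c_p \, m_s}{m_m} \left. \frac{dT}{dt} \right|_{t \to 0} \]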

  1. Uncertainties in the estimation of specific absorption rate during radiofrequency alternating magnetic field induced non-adiabatic heating of ferrofluids

    International Nuclear Information System (INIS)

    Lahiri, B B; Ranoo, Surojit; Philip, John

    2017-01-01

    Magnetic fluid hyperthermia (MFH) is becoming a viable cancer treatment methodology where the alternating magnetic field induced heating of magnetic fluid is utilized for ablating the cancerous cells or making them more susceptible to the conventional treatments. The heating efficiency in MFH is quantified in terms of specific absorption rate (SAR), which is defined as the heating power generated per unit mass. In the majority of experimental studies, SAR is evaluated from the temperature rise curves, obtained under non-adiabatic experimental conditions, which is prone to various thermodynamic uncertainties. A proper understanding of the experimental uncertainties and their remedies is a prerequisite for obtaining accurate and reproducible SAR. Here, we study the thermodynamic uncertainties associated with peripheral heating, delayed heating, heat loss from the sample and spatial variation in the temperature profile within the sample. Using first order approximations, an adiabatic reconstruction protocol for the measured temperature rise curves is developed for SAR estimation, which is found to be in good agreement with those obtained from the computationally intense slope corrected method. Our experimental findings clearly show that the peripheral and delayed heating are due to radiation heat transfer from the heating coils and slower response time of the sensor, respectively. Our results suggest that the peripheral heating is linearly proportional to the sample area to volume ratio and coil temperature. It is also observed that peripheral heating decreases in the presence of non-magnetic insulating shielding. The delayed heating is found to contribute up to ∼25% uncertainty in SAR values. As the SAR values are very sensitive to the initial slope determination method, explicit mention of the range of linear regression analysis is appropriate to reproduce the results. The effect of sample volume to area ratio on linear heat loss rate is systematically studied and

  2. Uncertainty analysis of the magnetic field measurement by the translating coil method in axisymmetric magnets

    International Nuclear Information System (INIS)

    Arpaia, Pasquale; De Vito, Luca; Kazazi, Mario

    2016-01-01

    In the uncertainty assessment of magnetic flux measurements in axially symmetric magnets by the translating coil method, the Guide to the Expression of Uncertainty in Measurement and its supplement cannot be applied: the voltage variation at the coil terminals, which is the actual measured quantity, affects the flux estimate and its uncertainty. In this paper, a particle filter, implementing a sequential Monte-Carlo method based on Bayesian inference, is applied. To this aim, the main uncertainty sources are analyzed and a model of the measurement process is defined. The results of the experimental validation point out the transport system and the acquisition system as the main contributions to the uncertainty budget. (authors)
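
    As a generic illustration of the sequential Monte-Carlo idea mentioned above, and not the authors' measurement model, the following bootstrap particle filter tracks a scalar quantity from noisy voltage-like observations; the random-walk state transition, the Gaussian observation model, and all numerical values are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def particle_filter(observations, n_particles=1000,
                            process_std=0.01, obs_std=0.05):
            """Bootstrap particle filter for a slowly varying scalar state.

            observations: 1-D array of noisy measurements of the state.
            Returns the posterior mean and standard deviation at each step.
            """
            particles = rng.normal(observations[0], obs_std, size=n_particles)
            means, stds = [], []
            for z in observations:
                # Predict: propagate each particle with a random-walk model.
                particles = particles + rng.normal(0.0, process_std, size=n_particles)
                # Update: weight particles by the Gaussian likelihood of the observation.
                weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
                weights += 1e-300              # guard against numerical underflow
                weights /= weights.sum()
                # Resample according to the weights (multinomial resampling).
                particles = rng.choice(particles, size=n_particles, p=weights)
                means.append(particles.mean())
                stds.append(particles.std())   # Monte-Carlo estimate of the uncertainty
            return np.array(means), np.array(stds)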

  3. Uncertainties in the simulation of groundwater recharge at different scales

    Directory of Open Access Journals (Sweden)

    H. Bogena

    2005-01-01

    Digital spatial data always imply some kind of uncertainty. The source of this uncertainty can be found in their compilation as well as the conceptual design that causes a more or less exact abstraction of the real world, depending on the scale under consideration. Within the framework of hydrological modelling, in which numerous data sets from diverse sources of uneven quality are combined, the various uncertainties are accumulated. In this study, the GROWA model is taken as an example to examine the effects of different types of uncertainties on the calculated groundwater recharge. Distributed input errors are determined for the parameters slope and aspect using a Monte Carlo approach. Land cover classification uncertainties are analysed by using the conditional probabilities of a remote sensing classification procedure. The uncertainties of data ensembles at different scales and study areas are discussed. The present uncertainty analysis showed that the Gaussian error propagation method is a useful technique for analysing the influence of input data on the simulated groundwater recharge. The uncertainties involved in the land use classification procedure and the digital elevation model can be significant in some parts of the study area. However, for the specific model used in this study it was shown that the precipitation uncertainties have the greatest impact on the total groundwater recharge error.
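
    The Gaussian error propagation referred to above combines the input uncertainties through the local sensitivities of the model; for independent inputs x_i with standard uncertainties sigma_i it takes the familiar first-order form:

        \[ \sigma_y^{2} = \sum_i \left( \frac{\partial f}{\partial x_i} \right)^{2} \sigma_{x_i}^{2}, \qquad y = f(x_1, \dots, x_n) \]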

  4. A linear programming approach to characterizing norm bounded uncertainty from experimental data

    Science.gov (United States)

    Scheid, R. E.; Bayard, D. S.; Yam, Y.

    1991-01-01

    The linear programming spectral overbounding and factorization (LPSOF) algorithm is introduced, an algorithm for finding a minimum-phase transfer function of specified order whose magnitude tightly overbounds a specified nonparametric function of frequency. This method has direct application to transforming nonparametric uncertainty bounds (available from system identification experiments) into the parametric representations required by modern robust control design software (i.e., a minimum-phase transfer function multiplied by a norm-bounded perturbation).

  5. The Findings from the OECD/NEA/CSNI UMS (Uncertainty Method Study)

    International Nuclear Information System (INIS)

    D'Auria, F.; Glaeser, H.

    2013-01-01

    Within licensing procedures there is the incentive to replace the conservative requirements for code application by a 'best estimate' concept supplemented by an uncertainty analysis to account for predictive uncertainties of code results. Methods have been developed to quantify these uncertainties. The Uncertainty Methods Study (UMS) Group, following a mandate from the CSNI (Committee on the Safety of Nuclear Installations) of the OECD/NEA (Organization for Economic Cooperation and Development / Nuclear Energy Agency), has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes. Most of the methods identify and combine input uncertainties. The major differences between the predictions of the methods came from the choice of uncertain parameters and the quantification of the input uncertainties, i.e. the width of the uncertainty ranges. Therefore, suitable experimental and analytical information has to be selected to specify these uncertainty ranges or distributions. After the closure of the Uncertainty Methods Study (UMS) and after the report was issued, comparison calculations of the experiment LSTF-SB-CL-18 were performed by the University of Pisa using different versions of the RELAP5 code. It turned out that the version used by two of the participants calculated a 170 K higher peak clad temperature compared with other versions using the same input deck. This may contribute to the differences in the upper limit of the uncertainty ranges. A 'bifurcation' analysis was also performed by the same research group, providing another way of interpreting the high temperature peak calculated by two of the participants. (authors)

  6. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Elert, M. [Kemakta Konsult AB, Stockholm (Sweden)] [ed.

    1996-09-01

    In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g. in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come into focus in this study: There are large differences in the predicted soil hydrology and, as a consequence, also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root

  7. Estimation of Uncertainty in Aerosol Concentration Measured by Aerosol Sampling System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Chan; Song, Yong Jae; Jung, Woo Young; Lee, Hyun Chul; Kim, Gyu Tae; Lee, Doo Yong [FNC Technology Co., Yongin (Korea, Republic of)

    2016-10-15

    FNC Technology Co., Ltd has developed test facilities for aerosol generation, mixing, sampling and measurement under high pressure and high temperature conditions. The aerosol generation system is connected to the aerosol mixing system, which injects a SiO{sub 2}/ethanol mixture. In the sampling system, a glass fiber membrane filter has been used to measure the average mass concentration. Based on the experimental results using a steam-air mixture as the main carrier gas, the uncertainty estimation of the sampled aerosol concentration was performed by applying the Gaussian error propagation law. FNC Technology Co., Ltd. has developed the experimental facilities for aerosol measurement under high pressure and high temperature. The purpose of the tests is to develop a commercial test module for an aerosol generation, mixing and sampling system applicable to the environmental industry and to safety related systems in nuclear power plants. For the uncertainty calculation of the aerosol concentration, the value of the sampled aerosol concentration is not measured directly, but must be calculated from other quantities. The uncertainty of the sampled aerosol concentration is a function of the flow rates of air and steam, the sampled mass, the sampling time, the condensed steam mass and their absolute errors. The uncertainties in these variables propagate through the combination of variables in the function. Using the operating parameters and their individual errors from the aerosol test cases performed at FNC, the uncertainty of the aerosol concentration evaluated by the Gaussian error propagation law is less than 1%. The results of the uncertainty estimation in the aerosol sampling system will be utilized for the system performance data.
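
    A minimal numerical sketch of the Gaussian error propagation described above, assuming for illustration that the sampled concentration is computed as c = m / (Q t) from the collected mass m, total gas flow rate Q and sampling time t; the relation, symbols and values are assumptions, not the facility's actual data reduction.

        import numpy as np

        # Illustrative operating point (placeholder values).
        m, u_m = 2.0e-3, 1.0e-5   # collected mass on the filter [g] and its uncertainty
        q, u_q = 10.0, 0.05       # total gas flow rate [L/min] and its uncertainty
        t, u_t = 30.0, 0.1        # sampling time [min] and its uncertainty

        c = m / (q * t)           # aerosol mass concentration [g/L]

        # First-order Gaussian error propagation for c = m / (q * t):
        # relative variances of independent inputs add.
        u_c = c * np.sqrt((u_m / m) ** 2 + (u_q / q) ** 2 + (u_t / t) ** 2)
        print(f"c = {c:.3e} g/L, relative uncertainty = {100 * u_c / c:.2f} %")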

  8. Best Practices of Uncertainty Estimation for the National Solar Radiation Database (NSRDB 1998-2015): Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Habte, Aron M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sengupta, Manajit [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-12-19

    It is essential to apply a traceable and standard approach to determine the uncertainty of solar resource data. Solar resource data are used for all phases of solar energy conversion projects, from the conceptual phase to routine solar power plant operation, and to determine performance guarantees of solar energy conversion systems. These guarantees are based on the available solar resource derived from a measurement station or a modeled data set such as the National Solar Radiation Database (NSRDB). Therefore, quantifying the uncertainty of these data sets provides confidence to financiers, developers, and site operators of solar energy conversion systems and ultimately reduces deployment costs. In this study, we implemented the Guide to the Expression of Uncertainty in Measurement (GUM) to quantify the overall uncertainty of the NSRDB data. First, we start with quantifying measurement uncertainty, then we determine each uncertainty statistic of the NSRDB data, and we combine them using the root-sum-of-the-squares method. The statistics were derived by comparing the NSRDB data to the seven measurement stations from the National Oceanic and Atmospheric Administration's Surface Radiation Budget Network, the National Renewable Energy Laboratory's Solar Radiation Research Laboratory, and the Atmospheric Radiation Measurement program's Southern Great Plains Central Facility, in Billings, Oklahoma. The evaluation was conducted for hourly values, daily totals, monthly mean daily totals, and annual mean monthly mean daily totals. Varying time averages assist in capturing the temporal uncertainty of the specific modeled solar resource data required for each phase of a solar energy project; some phases require higher temporal resolution than others. Overall, by including the uncertainty of measurements of solar radiation made at ground stations, bias, and root mean square error, the NSRDB data demonstrated an expanded uncertainty of 17-29 percent on hourly

  9. The Role of Threat Level and Intolerance of Uncertainty (IU) in Anxiety: An Experimental Test of IU Theory.

    Science.gov (United States)

    Oglesby, Mary E; Schmidt, Norman B

    2017-07-01

    Intolerance of uncertainty (IU) has been proposed as an important transdiagnostic variable within mood- and anxiety-related disorders. The extant literature has suggested that individuals high in IU interpret uncertainty more negatively. Furthermore, theoretical models of IU posit that those elevated in IU may experience an uncertain threat as more anxiety provoking than a certain threat. However, no research to date has experimentally manipulated the certainty of an impending threat while utilizing an in vivo stressor. In the current study, undergraduate participants (N = 79) were randomized to one of two conditions: certain threat (participants were told that later on in the study they would give a 3-minute speech) or uncertain threat (participants were told that later on in the study they would flip a coin to determine whether or not they would give a 3-minute speech). Participants also completed self-report questionnaires measuring their baseline state anxiety, baseline trait IU, and prespeech state anxiety. Results indicated that trait IU was associated with greater state anticipatory anxiety when the prospect of giving a speech was made uncertain (i.e., uncertain condition). Further, findings indicated no significant difference in anticipatory state anxiety among individuals high in IU when comparing an uncertain versus certain threat (i.e., uncertain and certain threat conditions, respectively). Furthermore, results found no significant interaction between condition and trait IU when predicting state anticipatory anxiety. This investigation is the first to test a crucial component of IU theory while utilizing an ecologically valid paradigm. Results of the present study are discussed in terms of theoretical models of IU and directions for future work. Copyright © 2017. Published by Elsevier Ltd.

  10. Health information seeking and the World Wide Web: an uncertainty management perspective.

    Science.gov (United States)

    Rains, Stephen A

    2014-01-01

    Uncertainty management theory was applied in the present study to offer one theoretical explanation for how individuals use the World Wide Web to acquire health information and to help better understand the implications of the Web for information seeking. The diversity of information sources available on the Web and potential to exert some control over the depth and breadth of one's information-acquisition effort is argued to facilitate uncertainty management. A total of 538 respondents completed a questionnaire about their uncertainty related to cancer prevention and information-seeking behavior. Consistent with study predictions, use of the Web for information seeking interacted with respondents' desired level of uncertainty to predict their actual level of uncertainty about cancer prevention. The results offer evidence that respondents who used the Web to search for cancer information were better able than were respondents who did not seek information to achieve a level of uncertainty commensurate with the level of uncertainty they desired.

  11. Validation and assessment of uncertainty of chemical tests as a tool for the reliability analysis of wastewater IPEN

    International Nuclear Information System (INIS)

    Silva, Renan A.; Martins, Elaine A.J.; Furusawa, Helio A.

    2011-01-01

    The validation of analytical methods has become an indispensable tool for analysis in chemical laboratories, and is also required for accreditation. However, even when a laboratory uses validated methods of analysis, there is the possibility that these methods generate results that disagree with reality, which makes it necessary to add a quantitative attribute (a value) indicating the degree of confidence in the result of the analytical method used. This measure assigned to the result of a measurement is called measurement uncertainty, and it is estimated with a stated level of confidence; an analytical result has limited significance if a proper assessment of its uncertainty is not carried out. One of the activities of this work was to develop a program to assist in the validation and evaluation of uncertainty in chemical analysis. The program was written in the Visual Basic programming language, and the uncertainty evaluation method follows the concepts of the GUM (Guide to the Expression of Uncertainty in Measurement). This uncertainty evaluation program will be applied to chemical analyses in support of the characterization of the Nuclear Fuel Cycle developed by IPEN and to the study of organic substances in wastewater associated with the professional activities of the Institute: in the first case, primarily for the determination of total uranium, and in the second case, for substances generated by human activities and listed in resolution 357/2005. As a development strategy, the PDCA cycle was adopted to improve the efficiency of each step and to minimize errors in the experimental part. The program should be validated to meet the requirements of standards such as ISO/IEC 17025. The application is also projected for use in other analytical procedures of the Nuclear Fuel Cycle and in IPEN's chemical waste management and control program.

  12. Validation and assessment of uncertainty of chemical tests as a tool for the reliability analysis of wastewater IPEN

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Renan A.; Martins, Elaine A.J.; Furusawa, Helio A., E-mail: elaine@ipen.br, E-mail: helioaf@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The validation of analytical methods has become an indispensable tool for analysis in chemical laboratories, and is also required for accreditation. However, even when a laboratory uses validated methods of analysis, there is the possibility that these methods generate results that disagree with reality, which makes it necessary to add a quantitative attribute (a value) indicating the degree of confidence in the result of the analytical method used. This measure assigned to the result of a measurement is called measurement uncertainty, and it is estimated with a stated level of confidence; an analytical result has limited significance if a proper assessment of its uncertainty is not carried out. One of the activities of this work was to develop a program to assist in the validation and evaluation of uncertainty in chemical analysis. The program was written in the Visual Basic programming language, and the uncertainty evaluation method follows the concepts of the GUM (Guide to the Expression of Uncertainty in Measurement). This uncertainty evaluation program will be applied to chemical analyses in support of the characterization of the Nuclear Fuel Cycle developed by IPEN and to the study of organic substances in wastewater associated with the professional activities of the Institute: in the first case, primarily for the determination of total uranium, and in the second case, for substances generated by human activities and listed in resolution 357/2005. As a development strategy, the PDCA cycle was adopted to improve the efficiency of each step and to minimize errors in the experimental part. The program should be validated to meet the requirements of standards such as ISO/IEC 17025. The application is also projected for use in other analytical procedures of the Nuclear Fuel Cycle and in IPEN's chemical waste management and control program.

  13. The uncertainty of reference standards--a guide to understanding factors impacting uncertainty, uncertainty calculations, and vendor certifications.

    Science.gov (United States)

    Gates, Kevin; Chang, Ning; Dilek, Isil; Jian, Huahua; Pogue, Sherri; Sreenivasan, Uma

    2009-10-01

    Certified solution standards are widely used in forensic toxicological, clinical/diagnostic, and environmental testing. Typically, these standards are purchased as ampouled solutions with a certified concentration. Vendors present concentration and uncertainty differently on their Certificates of Analysis. Understanding the factors that impact uncertainty and which factors have been considered in the vendor's assignment of uncertainty are critical to understanding the accuracy of the standard and the impact on testing results. Understanding these variables is also important for laboratories seeking to comply with ISO/IEC 17025 requirements and for those preparing reference solutions from neat materials at the bench. The impact of uncertainty associated with the neat material purity (including residual water, residual solvent, and inorganic content), mass measurement (weighing techniques), and solvent addition (solution density) on the overall uncertainty of the certified concentration is described along with uncertainty calculations.
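
    Under the simplifying assumption that the solution concentration is prepared gravimetrically as c = m P / V from the weighed mass m, the purity P of the neat material and the solution volume V (obtained from the solvent mass and density), and that these inputs are independent, the relative standard uncertainties combine as sketched below; this is a generic GUM-style relation for illustration, not a particular vendor's certification procedure.

        \[ \frac{u(c)}{c} = \sqrt{ \left( \frac{u(m)}{m} \right)^{2} + \left( \frac{u(P)}{P} \right)^{2} + \left( \frac{u(V)}{V} \right)^{2} } \]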

  14. Heisenberg's principle of uncertainty and the uncertainty relations

    International Nuclear Information System (INIS)

    Redei, Miklos

    1987-01-01

    The usual verbal form of the Heisenberg uncertainty principle and the usual mathematical formulation (the so-called uncertainty theorem) are not equivalent. The meaning of the concept 'uncertainty' is not unambiguous and different interpretations are used in the literature. Recently, renewed interest has appeared in reinterpreting and reformulating the precise meaning of Heisenberg's principle and in finding an adequate mathematical form. The suggested new theorems are surveyed and critically analyzed. (D.Gy.) 20 refs
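
    For reference, the mathematical formulation usually contrasted with the verbal principle is the Kennard-Robertson inequality for the standard deviations of position and momentum, and its generalization to arbitrary observables A and B:

        \[ \Delta x \, \Delta p \geq \frac{\hbar}{2}, \qquad \Delta A \, \Delta B \geq \frac{1}{2} \left| \langle [A, B] \rangle \right| \]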

  15. Measurement Uncertainty

    Science.gov (United States)

    Koch, Michael

    Measurement uncertainty is one of the key issues in quality assurance. It became increasingly important for analytical chemistry laboratories with the accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for deciding whether a measurement result is fit for purpose. It also helps in deciding whether a specification limit is exceeded or not. Estimation of measurement uncertainty is often not trivial. Several strategies have been developed for this purpose and are briefly described in this chapter. In addition, the different possibilities to take the uncertainty into account in compliance assessment are explained.

  16. UNCERTAINTIES IN GALACTIC CHEMICAL EVOLUTION MODELS

    International Nuclear Information System (INIS)

    Côté, Benoit; Ritter, Christian; Herwig, Falk; O’Shea, Brian W.; Pignatari, Marco; Jones, Samuel; Fryer, Chris L.

    2016-01-01

    We use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions, along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model

  17. A Comprehensive Validation Methodology for Sparse Experimental Data

    Science.gov (United States)

    Norman, Ryan B.; Blattnig, Steve R.

    2010-01-01

    A comprehensive program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as models are developed over time. The models are placed under configuration control, and automated validation tests are used so that comparisons can readily be made as models are improved. Though direct comparisons between theoretical results and experimental data are desired for validation purposes, such comparisons are not always possible due to lack of data. In this work, two uncertainty metrics are introduced that are suitable for validating theoretical models against sparse experimental databases. The nuclear physics models, NUCFRG2 and QMSFRG, are compared to an experimental database consisting of over 3600 experimental cross sections to demonstrate the applicability of the metrics. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by analyzing subsets of the model parameter space.

  18. Statistical Uncertainty Quantification of Physical Models during Reflood of LBLOCA

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Seul, Kwang Won; Woo, Sweng Woong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-05-15

    The use of best-estimate (BE) computer codes in safety analysis for loss-of-coolant accidents (LOCA) is the major trend in many countries to reduce the significant conservatism. A key feature of this BE evaluation requires the licensee to quantify the uncertainty of the calculations. It is therefore very important how the uncertainty distributions are determined before conducting the uncertainty evaluation. Uncertainties include those of physical models and correlations, plant operational parameters, and so forth. The quantification process is often performed mainly by subjective expert judgment or obtained from reference documents of the computer code. In this respect, more mathematical methods are needed to determine the uncertainty ranges reasonably. The first uncertainty quantification is performed with various increments for two influential uncertainty parameters to obtain the calculated responses and their derivatives. Different data sets with two influential uncertainty parameters for the FEBA tests are chosen by applying stricter criteria for selecting responses and their derivatives, which may be considered as the user's effect in the CIRCÉ applications. Finally, three influential uncertainty parameters are considered to study the effect of the number of uncertainty parameters, given the limitation of the CIRCÉ method. With the determined uncertainty ranges, uncertainty evaluations for the FEBA tests are performed to check whether the experimental responses such as the cladding temperature or pressure drop lie inside the limits of the calculated uncertainty bounds. A confirmation step will be performed to evaluate the quality of the information in the case of the different reflooding PERICLES experiments. The uncertainty ranges of the physical models in the MARS-KS thermal-hydraulic code during reflood were quantified by the CIRCÉ method using the FEBA experimental tests, instead of expert judgment. Also, through the uncertainty evaluation for the FEBA and PERICLES tests, it was confirmed

  19. Plurality of Type A evaluations of uncertainty

    Science.gov (United States)

    Possolo, Antonio; Pintar, Adam L.

    2017-10-01

    The evaluations of measurement uncertainty involving the application of statistical methods to measurement data (Type A evaluations as specified in the Guide to the Expression of Uncertainty in Measurement, GUM) comprise the following three main steps: (i) developing a statistical model that captures the pattern of dispersion or variability in the experimental data, and that relates the data either to the measurand directly or to some intermediate quantity (input quantity) that the measurand depends on; (ii) selecting a procedure for data reduction that is consistent with this model and that is fit for the purpose that the results are intended to serve; (iii) producing estimates of the model parameters, or predictions based on the fitted model, and evaluations of uncertainty that qualify either those estimates or these predictions, and that are suitable for use in subsequent uncertainty propagation exercises. We illustrate these steps in uncertainty evaluations related to the measurement of the mass fraction of vanadium in a bituminous coal reference material, including the assessment of the homogeneity of the material, and to the calibration and measurement of the amount-of-substance fraction of a hydrochlorofluorocarbon in air, and of the age of a meteorite. Our goal is to expose the plurality of choices that can reasonably be made when taking each of the three steps outlined above, and to show that different choices typically lead to different estimates of the quantities of interest, and to different evaluations of the associated uncertainty. In all the examples, the several alternatives considered represent choices that comparably competent statisticians might make, but who differ in the assumptions that they are prepared to rely on, and in their selection of approach to statistical inference. They represent also alternative treatments that the same statistician might give to the same data when the results are intended for different purposes.

  20. Estimated Uncertainties in the Idaho National Laboratory Matched-Index-of-Refraction Lower Plenum Experiment

    International Nuclear Information System (INIS)

    Donald M. McEligot; Hugh M. McIlroy, Jr.; Ryan C. Johnson

    2007-01-01

    The purpose of the fluid dynamics experiments in the MIR (Matched-Index-of-Refraction) flow system at Idaho National Laboratory (INL) is to develop benchmark databases for the assessment of Computational Fluid Dynamics (CFD) solutions of the momentum equations, scalar mixing, and turbulence models for typical Very High Temperature Reactor (VHTR) plenum geometries in the limiting case of negligible buoyancy and constant fluid properties. The experiments use optical techniques, primarily particle image velocimetry (PIV) in the INL MIR flow system. The benefit of the MIR technique is that it permits optical measurements to determine flow characteristics in passages and around objects to be obtained without locating a disturbing transducer in the flow field and without distortion of the optical paths. The objective of the present report is to develop understanding of the magnitudes of experimental uncertainties in the results to be obtained in such experiments. Unheated MIR experiments are first steps when the geometry is complicated. One does not want to use a computational technique, which will not even handle constant properties properly. This report addresses the general background, requirements for benchmark databases, estimation of experimental uncertainties in mean velocities and turbulence quantities, the MIR experiment, PIV uncertainties, positioning uncertainties, and other contributing measurement uncertainties

  1. The Impact of Regression to the Mean on Economic Evaluation in Quasi-Experimental Pre-Post Studies: The Example of Total Knee Replacement Using Data from the Osteoarthritis Initiative.

    Science.gov (United States)

    Schilling, Chris; Petrie, Dennis; Dowsey, Michelle M; Choong, Peter F; Clarke, Philip

    2017-12-01

    Many treatments are evaluated using quasi-experimental pre-post studies susceptible to regression to the mean (RTM). Ignoring RTM could bias the economic evaluation. We investigated this issue using the contemporary example of total knee replacement (TKR), a common treatment for end-stage osteoarthritis of the knee. Data (n = 4796) were obtained from the Osteoarthritis Initiative database, a longitudinal observational study of osteoarthritis. TKR patients (n = 184) were matched to non-TKR patients, using propensity score matching on the predicted hazard of TKR and exact matching on osteoarthritis severity and health-related quality of life (HrQoL). The economic evaluation using the matched control group was compared to the standard method of using the pre-surgery score as the control. Matched controls were identified for 56% of the primary TKRs. The matched control HrQoL trajectory showed evidence of RTM accounting for a third of the estimated QALY gains from surgery using the pre-surgery HrQoL as the control. Incorporating RTM into the economic evaluation significantly reduced the estimated cost effectiveness of TKR and increased the uncertainty. A generalized ICER bias correction factor was derived to account for RTM in cost-effectiveness analysis. RTM should be considered in economic evaluations based on quasi-experimental pre-post studies. Copyright © 2017 John Wiley & Sons, Ltd.

  2. Analysis of uncertainties in the IAEA/WHO TLD postal dose audit system

    Energy Technology Data Exchange (ETDEWEB)

    Izewska, J. [Department of Nuclear Sciences and Applications, International Atomic Energy Agency, Wagramer Strasse 5, Vienna (Austria)], E-mail: j.izewska@iaea.org; Hultqvist, M. [Department of Medical Radiation Physics, Karolinska Institute, Stockholm University, Stockholm (Sweden); Bera, P. [Department of Nuclear Sciences and Applications, International Atomic Energy Agency, Wagramer Strasse 5, Vienna (Austria)

    2008-02-15

    The International Atomic Energy Agency (IAEA) and the World Health Organisation (WHO) operate the IAEA/WHO TLD postal dose audit programme. Thermoluminescence dosimeters (TLDs) are used as transfer devices in this programme. In the present work the uncertainties in the dose determination from TLD measurements have been evaluated. The analysis of uncertainties comprises uncertainties in the calibration coefficient of the TLD system and uncertainties in factors correcting for dose response non-linearity, fading of TL signal, energy response and influence of TLD holder. The individual uncertainties have been combined to estimate the total uncertainty in the dose evaluated from TLD measurements. The combined relative standard uncertainty in the dose determined from TLD measurements has been estimated to be 1.2% for irradiations with Co-60 γ-rays and 1.6% for irradiations with high-energy X-rays. Results from irradiations by the Bureau international des poids et mesures (BIPM), Primary Standard Dosimetry Laboratories (PSDLs) and Secondary Standards Dosimetry Laboratories (SSDLs) compare favourably with the estimated uncertainties, whereas TLD results of radiotherapy centres show higher standard deviations than those derived theoretically.
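
    As an illustration of how such independent components could be combined, the sketch below adds relative standard uncertainties in quadrature; the component names follow the abstract, but the numerical values are placeholders, not the actual IAEA/WHO budget.

```python
import math

# Hypothetical relative standard uncertainties (%) of a TLD dose evaluation;
# the values below are placeholders, not the programme's actual budget.
components = {
    "calibration coefficient": 0.9,
    "non-linearity correction": 0.3,
    "fading correction": 0.4,
    "energy response correction": 0.5,
    "holder correction": 0.2,
}

# Combine independent components in quadrature (root sum of squares).
u_combined = math.sqrt(sum(u**2 for u in components.values()))
print(f"combined relative standard uncertainty = {u_combined:.2f} %")
```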

  3. Evaluation of thermal-hydraulic parameter uncertainties in a TRIGA research reactor

    International Nuclear Information System (INIS)

    Mesquita, Amir Z.; Costa, Antonio C.L.; Ladeira, Luiz C.D.; Rezende, Hugo C.; Palma, Daniel A.P.

    2015-01-01

    Experimental studies have been performed in the TRIGA Research Nuclear Reactor of CDTN/CNEN to determine its thermal-hydraulic parameters. Fuel-to-coolant heat transfer patterns must be evaluated as a function of the reactor power in order to assess the thermal-hydraulic performance of the core. The heat generated by nuclear fission in the reactor core is transferred from fuel elements to the cooling system through the fuel-cladding (gap) and the cladding-to-coolant interfaces. As the reactor core power increases, the heat transfer regime from the fuel cladding to the coolant changes from single-phase natural convection to subcooled nucleate boiling. This paper presents the uncertainty analysis of the results of the thermal-hydraulic experiments performed. The methodology used to evaluate the propagation of uncertainty in the results is based on the pioneering article of Kline and McClintock, with the propagation of uncertainties based on the specification of uncertainties in the various primary measurements. The uncertainty in the thermal-hydraulic parameters of the CDTN TRIGA fuel element is determined essentially by the uncertainty in the reactor's thermal power. (author)
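
    The sketch below illustrates the Kline-McClintock style of propagation assumed here: the uncertainty of a derived result is the root sum of squares of sensitivity-weighted primary-measurement uncertainties. The thermal-power relation and all numerical values are hypothetical stand-ins, not the CDTN TRIGA values.

```python
import numpy as np

def kline_mcclintock(f, x, u, h=1e-6):
    """Propagate primary-measurement uncertainties u into the result f(x)
    using the root sum of squares of sensitivity-weighted contributions."""
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    grads = np.empty_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = h * max(abs(x[i]), 1.0)
        grads[i] = (f(x + dx) - f(x - dx)) / (2 * dx[i])   # central-difference sensitivity
    return np.sqrt(np.sum((grads * u) ** 2))

# Hypothetical example: thermal power Q = m_dot * cp * dT from primary measurements.
q = lambda p: p[0] * p[1] * p[2]          # mass flow, specific heat, temperature rise
x0 = [2.0, 4180.0, 5.0]                   # kg/s, J/(kg K), K
u0 = [0.02, 10.0, 0.1]                    # standard uncertainties of the primaries
print(f"u(Q) = {kline_mcclintock(q, x0, u0):.1f} W")
```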

  4. Robust Bayesian Experimental Design for Conceptual Model Discrimination

    Science.gov (United States)

    Pham, H. V.; Tsai, F. T. C.

    2015-12-01

    A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination, given the least number of pumping wells and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) before and after the experiment and on the Bayesian model averaging (BMA) framework. A max-min programming problem is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify the future observation uncertainty arising from conceptual and parametric uncertainties when calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed owing to uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of scedasticity in the future observation data, as well as of the uncertainty sources, on the potential pumping and observation locations.

  5. Universal trend for heavy-ion total reaction cross sections at energies above the Coulomb barrier

    Energy Technology Data Exchange (ETDEWEB)

    Tavares, O.A.P.; Medeiros, E.L., E-mail: emil@cbpf.b [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil); Morcelle, V. [Universidade de Sao Paulo (IF/USP), SP (Brazil). Inst. de Fisica

    2010-06-15

    Heavy-ion total reaction cross section measurements for more than one thousand one hundred reaction cases covering 61 target nuclei in the range ⁶Li-²³⁸U, and 158 projectile nuclei from ²H up to ⁸⁴Kr (mostly exotic ones), have been analysed in a systematic way by using an empirical, three-parameter formula which is applicable to cases of projectile kinetic energies above the Coulomb barrier. The analysis has shown that the average total nuclear binding energy per nucleon of the interacting nuclei and their radii are the chief quantities which describe the cross section patterns. A large fraction of the cross section data (87%) has been quite satisfactorily reproduced by the proposed formula; therefore, total reaction cross section predictions for new, not yet experimentally investigated reaction cases can be obtained to within 25 percent (or much less) uncertainty. (author)

  6. Risk, unexpected uncertainty, and estimation uncertainty: Bayesian learning in unstable settings.

    Directory of Open Access Journals (Sweden)

    Elise Payzan-LeNestour

    Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian perceives irreducible uncertainty or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how they affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of human capacity to implement Bayesian learning. We repeat the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating.

  7. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget of a determined concentration value is important under a quality assurance programme. Concentration calculations in NAA are carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for the analysis of small and large samples of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  8. Kalman filter approach for uncertainty quantification in time-resolved laser-induced incandescence.

    Science.gov (United States)

    Hadwin, Paul J; Sipkens, Timothy A; Thomson, Kevin A; Liu, Fengshan; Daun, Kyle J

    2018-03-01

    Time-resolved laser-induced incandescence (TiRe-LII) data can be used to infer spatially and temporally resolved volume fractions and primary particle size distributions of soot-laden aerosols, but these estimates are corrupted by measurement noise as well as uncertainties in the spectroscopic and heat transfer submodels used to interpret the data. Estimates of the temperature, concentration, and size distribution of soot primary particles within a sample aerosol are typically made by nonlinear regression of modeled spectral incandescence decay, or effective temperature decay, to experimental data. In this work, we employ nonstationary Bayesian estimation techniques to infer aerosol properties from simulated and experimental LII signals, specifically the extended Kalman filter and Schmidt-Kalman filter. These techniques exploit the time-varying nature of both the measurements and the models, and they reveal how uncertainty in the estimates computed from TiRe-LII data evolves over time. Both techniques perform better when compared with standard deterministic estimates; however, we demonstrate that the Schmidt-Kalman filter produces more realistic uncertainty estimates.
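
    For orientation, the sketch below runs a minimal linear Kalman filter on a scalar, synthetically generated "effective temperature" signal to show how the state estimate and its variance evolve as measurements arrive; it is not the extended or Schmidt-Kalman filter of the study, and the decay and noise parameters are arbitrary.

```python
import numpy as np

# Minimal scalar Kalman filter: track a slowly decaying "effective temperature"
# from noisy observations. Illustrative only; the paper uses extended and
# Schmidt-Kalman filters with physical spectroscopic/heat-transfer models.
a, Q, R = 0.98, 0.5**2, 5.0**2        # decay per step, process and measurement noise variances
x_est, P = 3000.0, 50.0**2            # initial state estimate and its variance

rng = np.random.default_rng(0)
truth = 3000.0
for k in range(20):
    truth = a * truth
    z = truth + rng.normal(0.0, np.sqrt(R))   # simulated noisy measurement
    # Predict
    x_pred, P_pred = a * x_est, a * a * P + Q
    # Update
    K = P_pred / (P_pred + R)                 # Kalman gain
    x_est = x_pred + K * (z - x_pred)
    P = (1.0 - K) * P_pred
    if k in (0, 4, 9, 19):
        print(f"step {k:2d}: estimate = {x_est:7.1f} K, std = {np.sqrt(P):5.2f} K")
```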

  9. Uncertainty analysis of thermal quantities measurement in a centrifugal compressor

    Science.gov (United States)

    Hurda, Lukáš; Matas, Richard

    2017-09-01

    The compressor performance characteristics evaluation process, based on the measurement of pressure, temperature and other quantities, is examined to find the uncertainties of directly measured and derived quantities. CFD is used as a tool to quantify the influences of different sources of measurement uncertainty for single- and multi-thermocouple total temperature probes. The heat conduction through the body of the thermocouple probe and the heat-up of the air in the intake piping are the main phenomena of interest.

  10. Enhancing uncertainty tolerance in the modelling creep of ligaments

    International Nuclear Information System (INIS)

    Taha, M M Reda; Lucero, J

    2006-01-01

    The difficulty in performing biomechanical tests and the scarcity of biomechanical experimental databases necessitate extending the current knowledge base to allow efficient modelling using limited data sets. This study suggests a framework to reduce uncertainties in biomechanical systems using limited data sets. The study also shows how sparse data and epistemic input can be exploited using fuzzy logic to represent biomechanical relations. An example application to model collagen fibre recruitment in the medial collateral ligaments during time-dependent deformation under cyclic loading (creep) is presented. The study suggests a quality metric that can be employed to observe and enhance uncertainty tolerance in the modelling process

  11. Inherent uncertainties in meteorological parameters for wind turbine design

    Science.gov (United States)

    Doran, J. C.

    1982-01-01

    Major difficulties associated with meteorological measurements, such as the inability to duplicate the experimental conditions from one day to the next, are discussed. This lack of consistency is compounded by the stochastic nature of many of the meteorological variables of interest. Moreover, simple relationships derived in one location may be significantly altered by topographical or synoptic differences encountered at another. The effect of such factors is a degree of inherent uncertainty if an attempt is made to describe the atmosphere in terms of universal laws. Some of these uncertainties and their causes are examined, examples are presented and some implications for wind turbine design are suggested.

  12. [Influence of Uncertainty and Uncertainty Appraisal on Self-management in Hemodialysis Patients].

    Science.gov (United States)

    Jang, Hyung Suk; Lee, Chang Suk; Yang, Young Hee

    2015-04-01

    This study was done to examine the relation of uncertainty, uncertainty appraisal, and self-management in patients undergoing hemodialysis, and to identify factors influencing self-management. A convenience sample of 92 patients receiving hemodialysis was selected. Data were collected using a structured questionnaire and medical records. The collected data were analyzed using descriptive statistics, t-test, ANOVA, Pearson correlations and multiple regression analysis with the SPSS/WIN 20.0 program. The participants showed a moderate level of uncertainty with the highest score being for ambiguity among the four uncertainty subdomains. Scores for uncertainty danger or opportunity appraisals were under the mid points. The participants were found to perform a high level of self-management such as diet control, management of arteriovenous fistula, exercise, medication, physical management, measurements of body weight and blood pressure, and social activity. The self-management of participants undergoing hemodialysis showed a significant relationship with uncertainty and uncertainty appraisal. The significant factors influencing self-management were uncertainty, uncertainty opportunity appraisal, hemodialysis duration, and having a spouse. These variables explained 32.8% of the variance in self-management. The results suggest that intervention programs to reduce the level of uncertainty and to increase the level of uncertainty opportunity appraisal among patients would improve the self-management of hemodialysis patients.

  13. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    International Nuclear Information System (INIS)

    Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
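
    A minimal sketch of the two routes to the likelihood discussed here is given below, assuming one fully correlated systematic error shared by all experimental points: the conventional multivariate-Gaussian evaluation using the full covariance matrix, and an estimate obtained by sampling the systematic error. All numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
y_exp = np.array([1.02, 0.98, 1.05])       # experimental points (illustrative)
y_mod = np.array([1.00, 1.00, 1.00])       # model prediction for one random ND file
u_rand, u_syst = 0.03, 0.05                # random and fully correlated systematic std. dev.

# Conventional likelihood: multivariate Gaussian with the full covariance matrix.
cov = np.diag(np.full(3, u_rand**2)) + u_syst**2 * np.ones((3, 3))
r = y_exp - y_mod
L_exact = np.exp(-0.5 * r @ np.linalg.solve(cov, r)) / np.sqrt(
    (2 * np.pi) ** 3 * np.linalg.det(cov))

# Estimate by sampling the shared systematic error instead of inverting the matrix.
M = 100_000
eta = rng.normal(0.0, u_syst, size=M)       # sampled systematic shifts
resid = r[None, :] - eta[:, None]           # residuals after removing each shift
L_samp = np.mean(np.exp(-0.5 * np.sum((resid / u_rand) ** 2, axis=1))
                 / (2 * np.pi * u_rand**2) ** 1.5)

print(L_exact, L_samp)   # the two estimates agree as the sample size M grows large
```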

  14. Characterization of the energy-dependent uncertainty and correlation in silicon neutron displacement damage metrics

    Directory of Open Access Journals (Sweden)

    Griffin Patrick

    2017-01-01

    A rigorous treatment of the uncertainty in the underlying nuclear data on silicon displacement damage metrics is presented. The uncertainty in the cross sections and recoil atom spectra is propagated into the energy-dependent uncertainty contribution in the silicon displacement kerma and damage energy using a Total Monte Carlo treatment. An energy-dependent covariance matrix is used to characterize the resulting uncertainty. A strong correlation between different reaction channels is observed in the high-energy neutron contributions to the displacement damage metrics, which supports the necessity of using a Monte Carlo based method to address the nonlinear nature of the uncertainty propagation.
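
    The sketch below illustrates the Total Monte Carlo idea in miniature: each "sample" stands in for a damage metric computed with one randomly drawn nuclear-data file, and the energy-dependent covariance and correlation matrices are formed directly from the sample set. The data are synthetic, not the silicon results of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_groups = 500, 10

# Synthetic stand-in for a Total Monte Carlo campaign: each row is the
# group-wise displacement damage metric computed with one random nuclear-data file.
base = np.linspace(1.0, 2.0, n_groups)
common = rng.normal(1.0, 0.05, size=(n_samples, 1))        # correlated component
noise = rng.normal(0.0, 0.02, size=(n_samples, n_groups))  # group-wise component
samples = base * common + noise

cov = np.cov(samples, rowvar=False)                 # energy-dependent covariance matrix
std = np.sqrt(np.diag(cov))
corr = cov / np.outer(std, std)                     # correlation matrix
rel_unc = 100.0 * std / samples.mean(axis=0)        # relative uncertainty per group (%)
print(np.round(rel_unc, 2))
print(np.round(corr[:3, :3], 2))
```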

  15. Fuel cycle cost uncertainty from nuclear fuel cycle comparison

    International Nuclear Information System (INIS)

    Li, J.; McNelis, D.; Yim, M.S.

    2013-01-01

    This paper examined the uncertainty in fuel cycle cost (FCC) calculation by considering both model and parameter uncertainty. Four different fuel cycle options were compared in the analysis, including the once-through cycle (OT), the DUPIC cycle, the MOX cycle and a closed fuel cycle with fast reactors (FR). The model uncertainty was addressed by using three different FCC modeling approaches with and without the time value of money consideration. The relative ratios of FCC in comparison to OT did not change much by using different modeling approaches. This observation was consistent with the results of the sensitivity study for the discount rate. Two different sets of data with uncertainty ranges of unit costs were used to address the parameter uncertainty of the FCC calculation. The sensitivity study showed that the dominating contributor to the total variance of FCC is the uranium price. In general, the FCC of OT was found to be the lowest, followed by FR, MOX, and DUPIC. But depending on the uranium price, the FR cycle was found to have a lower FCC than OT. The reprocessing cost was also found to have a major impact on FCC.

  16. Quantifying and managing uncertainty in operational modal analysis

    Science.gov (United States)

    Au, Siu-Kui; Brownjohn, James M. W.; Mottershead, John E.

    2018-03-01

    Operational modal analysis aims at identifying the modal properties (natural frequency, damping, etc.) of a structure using only the (output) vibration response measured under ambient conditions. Highly economical and feasible, it is becoming a common practice in full-scale vibration testing. In the absence of (input) loading information, however, the modal properties have significantly higher uncertainty than their counterparts identified from free or forced vibration (known input) tests. Mastering the relationship between identification uncertainty and test configuration is of great interest to both scientists and engineers, e.g., for achievable precision limits and test planning/budgeting. Addressing this challenge beyond the current state of the art, which is mostly concerned with identification algorithms, this work obtains closed-form analytical expressions for the identification uncertainty (variance) of modal parameters that fundamentally explain the effect of test configuration. Collectively referred to as 'uncertainty laws', these expressions are asymptotically correct for well-separated modes, small damping and long data; and are applicable under non-asymptotic situations. They provide a scientific basis for planning and standardization of ambient vibration tests, where factors such as channel noise, sensor number and location can be quantitatively accounted for. The work is reported comprehensively with verification through synthetic and experimental data (laboratory and field), scientific implications and practical guidelines for planning ambient vibration tests.

  17. Uncertainty associated with the gravimetric measurement of particulate matter concentration in ambient air.

    Science.gov (United States)

    Lacey, Ronald E; Faulkner, William Brock

    2015-07-01

    This work applied a propagation of uncertainty method to typical total suspended particulate (TSP) sampling apparatus in order to estimate the overall measurement uncertainty. The objectives of this study were to estimate the uncertainty for three TSP samplers, develop an uncertainty budget, and determine the sensitivity of the total uncertainty to environmental parameters. The samplers evaluated were the TAMU High Volume TSP Sampler at a nominal volumetric flow rate of 1.42 m³ min⁻¹ (50 CFM), the TAMU Low Volume TSP Sampler at a nominal volumetric flow rate of 17 L min⁻¹ (0.6 CFM) and the EPA TSP Sampler at the nominal volumetric flow rates of 1.1 and 1.7 m³ min⁻¹ (39 and 60 CFM). Under nominal operating conditions the overall measurement uncertainty was found to vary from 6.1×10⁻⁶ g m⁻³ to 18.0×10⁻⁶ g m⁻³, which represented an uncertainty of 1.7% to 5.2% of the measurement. Analysis of the uncertainty budget determined that three of the instrument parameters contributed significantly to the overall uncertainty: the uncertainty in the pressure drop measurement across the orifice meter during both calibration and testing and the uncertainty of the airflow standard used during calibration of the orifice meter. Five environmental parameters occurring during field measurements were considered for their effect on overall uncertainty: ambient TSP concentration, volumetric airflow rate, ambient temperature, ambient pressure, and ambient relative humidity. Of these, only ambient TSP concentration and volumetric airflow rate were found to have a strong effect on the overall uncertainty. The technique described in this paper can be applied to other measurement systems and is especially useful where there are no methods available to generate these values empirically. This work addresses measurement uncertainty of TSP samplers used in ambient conditions. Estimation of uncertainty in gravimetric measurements is of particular interest, since as ambient particulate
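
    For the multiplicative model C = Δm/(Q·t) underlying such gravimetric measurements, relative standard uncertainties combine in quadrature, as the sketch below illustrates with placeholder values (not those of the samplers evaluated here).

```python
import math

# Illustrative propagation for C = delta_m / (Q * t); values are not from the paper.
delta_m, u_m = 2.5e-3, 0.05e-3       # collected particulate mass and uncertainty, g
Q, u_Q = 1.42, 0.03                  # volumetric flow rate and uncertainty, m^3/min
t, u_t = 1440.0, 1.0                 # sampling time and uncertainty, min

C = delta_m / (Q * t)                # concentration, g/m^3
# For a purely multiplicative model the relative uncertainties add in quadrature.
u_C = C * math.sqrt((u_m / delta_m) ** 2 + (u_Q / Q) ** 2 + (u_t / t) ** 2)
print(f"C = {C:.2e} g/m^3, u(C) = {u_C:.2e} g/m^3 ({100 * u_C / C:.1f} %)")
```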

  18. Uncertainty Assessments in Fast Neutron Activation Analysis

    International Nuclear Information System (INIS)

    W. D. James; R. Zeisler

    2000-01-01

    Fast neutron activation analysis (FNAA) carried out with the use of small accelerator-based neutron generators is routinely used for major/minor element determinations in industry, mineral and petroleum exploration, and to some extent in research. While the method shares many of the operational procedures, and therefore errors, inherent to conventional thermal neutron activation analysis, its unique implementation gives rise to additional specific concerns that can result in errors or increased uncertainties of measured quantities. The authors were involved in a recent effort to evaluate irreversible incorporation of oxygen into a standard reference material (SRM) by direct measurement of oxygen by FNAA. That project required determination of oxygen in bottles of the SRM stored in varying environmental conditions and a comparison of the results. We recognized the need to describe the total uncertainty of the measurements accurately in order to characterize any differences in the resulting average concentrations. It is our intent here to discuss the breadth of parameters that can contribute to the random and nonrandom errors of the method and to provide estimates of the magnitude of uncertainty introduced. In addition, we will discuss the steps taken in this recent FNAA project to control quality, assess the uncertainty of the measurements, and evaluate results based on the statistical reproducibility.

  19. Uncertainties in the proton lifetime

    International Nuclear Information System (INIS)

    Ellis, J.; Nanopoulos, D.V.; Rudaz, S.; Gaillard, M.K.

    1980-04-01

    We discuss the masses of the leptoquark bosons m(X) and the proton lifetime in Grand Unified Theories based principally on SU(5). It is emphasized that estimates of m(X) based on the QCD coupling and the fine structure constant are probably more reliable than those using the experimental value of sin²θ_W. Uncertainties in the QCD Λ parameter and the correct value of α are discussed. We estimate higher order effects on the evolution of coupling constants in a momentum space renormalization scheme. It is shown that increasing the number of generations of fermions beyond the minimal three increases m(X) by almost a factor of 2 per generation. Additional uncertainties exist for each generation of technifermions that may exist. We discuss and discount the possibility that proton decay could be 'Cabibbo-rotated' away, and a speculation that Lorentz invariance may be violated in proton decay at a detectable level. We estimate that, in the absence of any substantial new physics beyond that in the minimal SU(5) model, the proton lifetime is 8 × 10^(30±2) years.

  20. Advanced probabilistic methods for quantifying the effects of various uncertainties in structural response

    Science.gov (United States)

    Nagpal, Vinod K.

    1988-01-01

    The effects of actual variations, also called uncertainties, in geometry and material properties on the structural response of a space shuttle main engine turbopump blade are evaluated. A normal distribution was assumed to represent the uncertainties statistically. Uncertainties were assumed to be totally random, partially correlated, and fully correlated. The magnitudes of these uncertainties were represented in terms of mean and variance. The blade response, recorded in terms of displacements, natural frequencies, and maximum stress, was evaluated and plotted in the form of probabilistic distributions under combined uncertainties. These distributions provide an estimate of the range of magnitudes of the response and the probability of occurrence of a given response. Most importantly, these distributions provide the information needed to estimate quantitatively the risk in a structural design.

  1. Total photon absorption

    International Nuclear Information System (INIS)

    Carlos, P.

    1985-06-01

    The present discussion is limited to a presentation of the most recent total photonuclear absorption experiments performed with real photons at intermediate energy, and more precisely in the region of nucleon resonances. The main sources of real photons are briefly reviewed, as are the experimental procedures used for total photonuclear absorption cross section measurements. The main results obtained below 140 MeV photon energy as well as above 2 GeV are recalled. The experimental study of total photonuclear absorption in the nuclear resonance region (140 MeV < E < 2 GeV) is still at an early stage, and some results are presented.

  2. WE-B-19A-01: SRT II: Uncertainties in SRT

    International Nuclear Information System (INIS)

    Dieterich, S; Schlesinger, D; Geneser, S

    2014-01-01

    SRS delivery has undergone major technical changes in the last decade, transitioning from predominantly frame-based treatment delivery to image-guided, frameless SRS. It is important for medical physicists working in SRS to understand the magnitude and sources of uncertainty involved in delivering SRS treatments for a multitude of technologies (Gamma Knife, CyberKnife, linac-based SRS and protons). Sources of SRS planning and delivery uncertainty include dose calculation, dose fusion, and intra- and inter-fraction motion. Dose calculations for small fields are particularly difficult because of the lack of electronic equilibrium and the greater effect of inhomogeneities within and near the PTV. Going frameless introduces greater setup uncertainties that allow for potentially increased intra- and inter-fraction motion. The increased use of multiple imaging modalities to determine the tumor volume necessitates (deformable) image and contour fusion, and the resulting uncertainties introduced in the image registration process further contribute to overall treatment planning uncertainties. Each of these uncertainties must be quantified and their impact on treatment delivery accuracy understood. If necessary, the uncertainties may then be accounted for during treatment planning either through techniques to make the uncertainty explicit, or by the appropriate addition of PTV margins. Further complicating matters, the statistics of 1-5 fraction SRS treatments differ from traditional margin recipes relying on Poisson statistics. In this session, we will discuss uncertainties introduced during each step of the SRS treatment planning and delivery process and present margin recipes to appropriately account for such uncertainties. Learning Objectives: To understand the major contributors to the total delivery uncertainty in SRS for Gamma Knife, CyberKnife, and linac-based SRS. Learn the various uncertainties introduced by image fusion, deformable image registration, and contouring

  3. Assessment of uncertainty associated with measuring exposure to radon and decay products in the French uranium miners cohort

    International Nuclear Information System (INIS)

    Allodji, Rodrigue S; Leuraud, Klervi; Laurier, Dominique; Bernhard, Sylvain; Henry, Stéphane; Bénichou, Jacques

    2012-01-01

    The reliability of exposure data directly affects the reliability of the risk estimates derived from epidemiological studies. Measurement uncertainty must be known and understood before it can be corrected. The literature on occupational exposure to radon (²²²Rn) and its decay products reveals only a few epidemiological studies in which uncertainty has been accounted for explicitly. This work examined the sources, nature, distribution and magnitude of uncertainty of the exposure of French uranium miners to radon (²²²Rn) and its decay products. We estimated the total size of uncertainty for this exposure with the root sum square (RSS) method, which may be an alternative when repeated measures are not available. As a result, we identified six main sources of uncertainty. The total size of the uncertainty decreased from about 47% in the period 1956–1974 to 10% after 1982, illustrating the improvement in the radiological monitoring system over time.

  4. Charm quark mass with calibrated uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Erler, Jens [Universidad Nacional Autonoma de Mexico, Instituto de Fisica, Mexico, DF (Mexico); Masjuan, Pere [Universitat Autonoma de Barcelona, Grup de Fisica Teorica, Departament de Fisica, Barcelona (Spain); Institut de Fisica d' Altes Energies (IFAE), The Barcelona Institute of Science and Technology (BIST), Barcelona (Spain); Spiesberger, Hubert [Johannes Gutenberg-Universitaet, PRISMA Cluster of Excellence, Institut fuer Physik, Mainz (Germany); University of Cape Town, Centre for Theoretical and Mathematical Physics and Department of Physics, Rondebosch (South Africa)

    2017-02-15

    We determine the charm quark mass m_c from QCD sum rules of the moments of the vector current correlator calculated in perturbative QCD at O(α_s³). Only experimental data for the charm resonances below the continuum threshold are needed in our approach, while the continuum contribution is determined by requiring self-consistency between various sum rules, including the one for the zeroth moment. Existing data from the continuum region can then be used to bound the theoretical uncertainty. Our result is m_c(m_c) = 1272 ± 8 MeV for α_s(M_Z) = 0.1182, where the central value is in very good agreement with other recent determinations based on the relativistic sum rule approach. On the other hand, there is considerably less agreement regarding the theory-dominated uncertainty, and we pay special attention to the question how to quantify and justify it. (orig.)

  5. Measurement, simulation and uncertainty assessment of implant heating during MRI

    International Nuclear Information System (INIS)

    Neufeld, E; Kuehn, S; Kuster, N; Szekely, G

    2009-01-01

    The heating of tissues around implants during MRI can pose severe health risks, and careful evaluation is required for leads to be labeled as MR conditionally safe. A recent interlaboratory comparison study has shown that different groups can produce widely varying results (sometimes with more than a factor of 5 difference) when performing measurements according to current guidelines. To determine the related difficulties and to derive optimized procedures, two different generic lead structures have been investigated in this study by using state-of-the-art temperature and dosimetric probes, as well as simulations for which detailed uncertainty budgets have been determined. The agreement between simulations and measurements is well within the combined uncertainty. The study revealed that the uncertainty can be kept below 17% if appropriate instrumentation and procedures are applied. Optimized experimental assessment techniques can be derived from the findings presented herein.

  6. Measurement, simulation and uncertainty assessment of implant heating during MRI

    Energy Technology Data Exchange (ETDEWEB)

    Neufeld, E; Kuehn, S; Kuster, N [Foundation for Research on Information Technologies in Society (IT' IS), Zeughausstr. 43, 8004 Zurich (Switzerland); Szekely, G [Computer Vision Laboratory, Swiss Federal Institute of Technology (ETHZ), Sternwartstr 7, ETH Zentrum, 8092 Zurich (Switzerland)], E-mail: neufeld@itis.ethz.ch

    2009-07-07

    The heating of tissues around implants during MRI can pose severe health risks, and careful evaluation is required for leads to be labeled as MR conditionally safe. A recent interlaboratory comparison study has shown that different groups can produce widely varying results (sometimes with more than a factor of 5 difference) when performing measurements according to current guidelines. To determine the related difficulties and to derive optimized procedures, two different generic lead structures have been investigated in this study by using state-of-the-art temperature and dosimetric probes, as well as simulations for which detailed uncertainty budgets have been determined. The agreement between simulations and measurements is well within the combined uncertainty. The study revealed that the uncertainty can be kept below 17% if appropriate instrumentation and procedures are applied. Optimized experimental assessment techniques can be derived from the findings presented herein.

  7. Uncertainty Evaluation with Multi-Dimensional Model of LBLOCA in OPR1000 Plant

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jieun; Oh, Deog Yeon; Seul, Kwang-Won; Lee, Jin Ho [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2016-10-15

    KINS has used KINS-REM (KINS Realistic Evaluation Methodology), which was developed for Best-Estimate (BE) calculation and uncertainty quantification for regulatory audits. This methodology has been improved continuously by numerous studies, for example of the uncertainty parameters and uncertainty ranges. In this study, to evaluate the applicability of the improved KINS-REM to an OPR1000 plant, an uncertainty evaluation with a multi-dimensional model capturing multi-dimensional phenomena was conducted with the MARS-KS code. The reactor vessel was modeled using the MULTID component of the MARS-KS code, and a total of 29 uncertainty parameters were considered in 124 sampled calculations. Through the 124 calculations, performed using the Mosaique program with the MARS-KS code, the peak cladding temperature was calculated and the final PCT was determined by the third-order Wilks formula. The uncertainty parameters with a strong influence were identified by Pearson coefficient analysis; they were mostly related to plant operation and fuel material properties. The evaluation results from the 124 calculations and the sensitivity analysis show that the improved KINS-REM is reasonably applicable to uncertainty evaluations with multi-dimensional model calculations of OPR1000 plants.
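
    The third-order Wilks criterion referred to here commonly takes the third-highest result of N random calculations as a one-sided 95 %/95 % upper tolerance limit once N is large enough; the sketch below checks that N = 124 satisfies the criterion and applies the ranking to synthetic PCT values (the temperatures are invented, not the study's results).

```python
import numpy as np
from math import comb

def wilks_confidence(n, k, gamma=0.95):
    """Confidence that the k-th highest of n random runs bounds the gamma-quantile."""
    return 1.0 - sum(comb(n, j) * (1 - gamma) ** j * gamma ** (n - j) for j in range(k))

print(wilks_confidence(124, 3))   # ~0.95 (just above), so 124 runs suffice for a 3rd-order 95%/95% limit

# Synthetic peak cladding temperatures from 124 sampled calculations (illustrative).
rng = np.random.default_rng(3)
pct = rng.normal(1150.0, 40.0, size=124)          # K
pct_9595 = np.sort(pct)[-3]                       # third-highest value = tolerance limit
print(f"95%/95% PCT estimate = {pct_9595:.1f} K")
```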

  8. Time-Resolved Particle Image Velocimetry Measurements with Wall Shear Stress and Uncertainty Quantification for the FDA Nozzle Model.

    Science.gov (United States)

    Raben, Jaime S; Hariharan, Prasanna; Robinson, Ronald; Malinauskas, Richard; Vlachos, Pavlos P

    2016-03-01

    We present advanced particle image velocimetry (PIV) processing, post-processing, and uncertainty estimation techniques to support the validation of computational fluid dynamics analyses of medical devices. This work is an extension of a previous FDA-sponsored multi-laboratory study, which used a medical device mimicking geometry referred to as the FDA benchmark nozzle model. Experimental measurements were performed using time-resolved PIV at five overlapping regions of the model for Reynolds numbers in the nozzle throat of 500, 2000, 5000, and 8000. Images included a twofold increase in spatial resolution in comparison to the previous study. Data was processed using ensemble correlation, dynamic range enhancement, and phase correlations to increase signal-to-noise ratios and measurement accuracy, and to resolve flow regions with large velocity ranges and gradients, which is typical of many blood-contacting medical devices. Parameters relevant to device safety, including shear stress at the wall and in bulk flow, were computed using radial basis functions. In addition, in-field spatially resolved pressure distributions, Reynolds stresses, and energy dissipation rates were computed from PIV measurements. Velocity measurement uncertainty was estimated directly from the PIV correlation plane, and uncertainty analysis for wall shear stress at each measurement location was performed using a Monte Carlo model. Local velocity uncertainty varied greatly and depended largely on local conditions such as particle seeding, velocity gradients, and particle displacements. Uncertainty in low velocity regions in the sudden expansion section of the nozzle was greatly reduced by over an order of magnitude when dynamic range enhancement was applied. Wall shear stress uncertainty was dominated by uncertainty contributions from velocity estimations, which were shown to account for 90-99% of the total uncertainty. This study provides advancements in the PIV processing methodologies over

  9. Reduction of uncertainties on the components of the reactivity loss per cycle, the Balzac program on Masurca

    International Nuclear Information System (INIS)

    D'Angelo, A.; Karouby-Cohen, N.; Palmiotti, G.; Rimpault, G.; Salvatores, M.; Soule, R.

    1984-10-01

    The uncertainties on the reactivity loss per cycle are mainly due to the uncertainties on the heavy isotopes component. This paper presents an experimental program for reducing these uncertainties. This program is based on a range of fuel irradiation experiments in power reactors and a range of isotopic-variation experiments in the critical facility MASURCA, consisting basically of subcritical measurements from a reference configuration and in several different spectra.

  10. Examining Dark Triad traits in relation to sleep disturbances, anxiety sensitivity and intolerance of uncertainty in young adults.

    Science.gov (United States)

    Sabouri, Sarah; Gerber, Markus; Lemola, Sakari; Becker, Stephen P; Shamsi, Mahin; Shakouri, Zeinab; Sadeghi Bahmani, Dena; Kalak, Nadeem; Holsboer-Trachsler, Edith; Brand, Serge

    2016-07-01

    The Dark Triad (DT) describes a set of three closely related personality traits, Machiavellianism, narcissism, and psychopathy. The aim of this study was to examine the associations between DT traits, sleep disturbances, anxiety sensitivity and intolerance of uncertainty. A total of 341 adults (M = 29 years) completed a series of questionnaires related to the DT traits, sleep disturbances, anxiety sensitivity, and intolerance of uncertainty. A higher DT total score was associated with increased sleep disturbances, and higher scores for anxiety sensitivity and intolerance of uncertainty. In regression analyses Machiavellianism and psychopathy were predictors of sleep disturbances, anxiety sensitivity, and intolerance of uncertainty. Results indicate that specific DT traits, namely Machiavellianism and psychopathy, are associated with sleep disturbances, anxiety sensitivity and intolerance of uncertainty in young adults. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Total-System Performance Assessment for the Yucca Mountain Site

    International Nuclear Information System (INIS)

    Wilson, M.L.

    2001-01-01

    Yucca Mountain, Nevada, is under consideration as a potential site for a repository for high-level radioactive waste. Total-system performance-assessment simulations are performed to evaluate the safety of the site. Features, events, and processes have been systematically evaluated to determine which ones are significant to the safety assessment. Computer models of the disposal system have been developed within a probabilistic framework, including both engineered and natural components. Selected results are presented for three different total-system simulations, and the behavior of the disposal system is discussed. The results show that risk is dominated by igneous activity at early times, because the robust waste-package design prevents significant nominal (non-disruptive) releases for tens of thousands of years or longer. The uncertainty in the nominal performance is dominated by uncertainties related to waste-package corrosion at early times and by uncertainties in the natural system, most significantly infiltration, at late times

  12. A polynomial chaos approach to the analysis of vehicle dynamics under uncertainty

    Science.gov (United States)

    Kewlani, Gaurav; Crawford, Justin; Iagnemma, Karl

    2012-05-01

    The ability of ground vehicles to quickly and accurately analyse their dynamic response to a given input is critical to their safety and efficient autonomous operation. In field conditions, significant uncertainty is associated with terrain and/or vehicle parameter estimates, and this uncertainty must be considered in the analysis of vehicle motion dynamics. Here, polynomial chaos approaches that explicitly consider parametric uncertainty during modelling of vehicle dynamics are presented. They are shown to be computationally more efficient than the standard Monte Carlo scheme, and experimental results compared with the simulation results performed on ANVEL (a vehicle simulator) indicate that the method can be utilised for efficient and accurate prediction of vehicle motion in realistic scenarios.
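
    The sketch below shows a non-intrusive polynomial chaos expansion for a single Gaussian uncertain input, with coefficients obtained by Gauss-Hermite projection and the mean and variance read off from the coefficients, cross-checked against plain Monte Carlo. The "response" function is a made-up stand-in for the vehicle-dynamics model, and all parameter values are illustrative.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

# Non-intrusive polynomial chaos for a response with one Gaussian uncertain input.
# Illustrative stand-in response; a real case would call the vehicle-dynamics model.
def response(c_f):                 # e.g., some dynamic response vs. a friction coefficient
    return 1.0 / (0.2 + c_f**2)

mu, sigma, order = 0.8, 0.1, 6     # uncertain input: c_f ~ N(mu, sigma^2)
nodes, weights = He.hermegauss(40) # Gauss-Hermite (probabilists') quadrature rule
norm = sqrt(2.0 * pi)

coeffs = []
for n in range(order + 1):
    basis = He.hermeval(nodes, [0] * n + [1])        # He_n evaluated at the quadrature nodes
    num = np.sum(weights * response(mu + sigma * nodes) * basis) / norm
    coeffs.append(num / factorial(n))                # <g He_n> / <He_n^2>, with <He_n^2> = n!

mean = coeffs[0]
var = sum(c**2 * factorial(n) for n, c in enumerate(coeffs) if n > 0)
print(f"PCE mean = {mean:.4f}, std = {np.sqrt(var):.4f}")

# Cross-check against plain Monte Carlo sampling.
xi = np.random.default_rng(4).normal(size=200_000)
y = response(mu + sigma * xi)
print(f"MC  mean = {y.mean():.4f}, std = {y.std():.4f}")
```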

  13. Uncertainty and measurement

    International Nuclear Information System (INIS)

    Landsberg, P.T.

    1990-01-01

    This paper explores how the quantum mechanics uncertainty relation can be considered to result from measurements. A distinction is drawn between the uncertainties obtained by scrutinising experiments and the standard deviation type of uncertainty definition used in quantum formalism. (UK)

  14. Monte Carlo approaches for uncertainty quantification of criticality for system dimensions

    International Nuclear Information System (INIS)

    Kiedrowski, B.C.; Brown, F.B.

    2013-01-01

    One of the current challenges in nuclear engineering computations is the issue of performing uncertainty analysis for either calculations or experimental measurements. This paper specifically focuses on the issue of estimating the uncertainties arising from geometric tolerances. For this paper, two techniques for uncertainty quantification are studied. The first is the forward propagation technique, which can be thought of as a 'brute force' approach; uncertain system parameters are randomly sampled, the calculation is run, and uncertainties are found from the empirically obtained distribution of results. This approach need make no approximations in principle, but is very computationally expensive. The other approach investigated is the adjoint-based approach; system sensitivities are computed via a single Monte Carlo calculation and those are used with a covariance matrix to provide a linear estimate of the uncertainty. Demonstration calculations are performed with the MCNP6 code for both techniques. The two techniques are tested on two cases: the first case is a solid, bare cylinder of Pu metal, while the second case is a can of plutonium nitrate solution. The results show that the forward and adjoint approaches appear to agree in some cases where the responses are not non-linearly correlated. In other cases, the uncertainties in the effective multiplication k disagree for reasons not yet known.
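
    The sketch below contrasts the two techniques on a toy problem: forward propagation samples the uncertain dimensions and evaluates the spread of the response, while the adjoint-style estimate combines sensitivities with the covariance matrix through the linear "sandwich" rule. The response function is an invented surrogate, not an MCNP6 calculation, and the tolerance values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy response: k_eff as a smooth function of two uncertain dimensions (cm).
# Purely illustrative; a real study would run a transport code for each sampled geometry.
def k_eff(radius, height):
    return 0.9 + 0.02 * radius + 0.01 * height - 0.0005 * radius * height

mean = np.array([6.0, 10.0])                  # nominal radius and height
cov = np.array([[0.04, 0.01],                 # tolerance covariance matrix (cm^2)
                [0.01, 0.09]])

# Forward ("brute force") propagation: sample, run, look at the spread of results.
samples = rng.multivariate_normal(mean, cov, size=5000)
k_samples = k_eff(samples[:, 0], samples[:, 1])
print(f"forward:  u(k) = {k_samples.std(ddof=1):.5f}")

# Adjoint-style linear estimate: sensitivities combined with the covariance
# matrix through the sandwich rule  u^2 = S C S^T.
h = 1e-4
S = np.array([(k_eff(mean[0] + h, mean[1]) - k_eff(mean[0] - h, mean[1])) / (2 * h),
              (k_eff(mean[0], mean[1] + h) - k_eff(mean[0], mean[1] - h)) / (2 * h)])
print(f"sandwich: u(k) = {np.sqrt(S @ cov @ S):.5f}")
```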

  15. SWEPP PAN assay system uncertainty analysis: Passive mode measurements of graphite waste

    International Nuclear Information System (INIS)

    Blackwood, L.G.; Harker, Y.D.; Meachum, T.R.; Yoon, Woo Y.

    1997-07-01

    The Idaho National Engineering and Environmental Laboratory is being used as a temporary storage facility for transuranic waste generated by the U.S. Nuclear Weapons program at the Rocky Flats Plant (RFP) in Golden, Colorado. Currently, there is a large effort in progress to prepare to ship this waste to the Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. In order to meet the TRU Waste Characterization Quality Assurance Program Plan nondestructive assay compliance requirements and quality assurance objectives, it is necessary to determine the total uncertainty of the radioassay results produced by the Stored Waste Examination Pilot Plant (SWEPP) Passive Active Neutron (PAN) radioassay system. To this end a modified statistical sampling and verification approach has been developed to determine the total uncertainty of a PAN measurement. In this approach the total performance of the PAN nondestructive assay system is simulated using computer models of the assay system and the resultant output is compared with the known input to assess the total uncertainty. This paper is one of a series of reports quantifying the results of the uncertainty analysis of the PAN system measurements for specific waste types and measurement modes. In particular this report covers passive mode measurements of weapons grade plutonium-contaminated graphite molds contained in 208 liter drums (waste code 300). The validity of the simulation approach is verified by comparing simulated output against results from measurements using known plutonium sources and a surrogate graphite waste form drum. For actual graphite waste form conditions, a set of 50 cases covering a statistical sampling of the conditions exhibited in graphite wastes was compiled using a Latin hypercube statistical sampling approach
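
    A minimal sketch of the Latin hypercube idea used to build such case sets is shown below: each uncertain drum parameter is stratified into as many equal-probability bins as there are cases, and one value is drawn per bin. The parameters and ranges are hypothetical, not the SWEPP waste-form distributions.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng):
    """One stratified sample per equal-probability bin, independently shuffled per variable."""
    u = np.empty((n_samples, n_vars))
    for j in range(n_vars):
        perm = rng.permutation(n_samples)               # assign each sample to one stratum
        u[:, j] = (perm + rng.random(n_samples)) / n_samples
    return u                                            # values in (0, 1)

rng = np.random.default_rng(6)
# Hypothetical drum parameters for the simulated assay: fill height (cm),
# matrix density (g/cm^3), Pu mass (g). Ranges are illustrative, not SWEPP values.
lows = np.array([20.0, 0.8, 0.1])
highs = np.array([80.0, 1.6, 10.0])
cases = lows + latin_hypercube(50, 3, rng) * (highs - lows)   # 50 stratified drum configurations
print(cases[:3])
```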

  16. On the predictivity of pore-scale simulations: estimating uncertainties with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo

    2016-02-08

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another “equivalent” sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can reduce drastically the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [2015. https://bitbucket.org/micardi/porescalemc.], which includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers
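
    The sketch below shows the multilevel Monte Carlo telescoping estimator in miniature: many cheap coarse-level solves carry most of the variance, and only small corrections are estimated on the expensive finer levels. The "solver" is a synthetic stand-in for a pore-scale simulation, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def solver(sample, level):
    """Stand-in for a pore-scale permeability solve on mesh `level`: the
    discretization error and solver noise both shrink as the level grows.
    Purely synthetic, for illustration only."""
    h = 0.5 ** level
    return np.exp(sample) + 0.2 * h + 0.05 * h * rng.normal()

def mlmc_estimate(n_per_level):
    """Telescoping multilevel Monte Carlo estimate of the mean effective parameter."""
    total = 0.0
    for level, n in enumerate(n_per_level):
        samples = rng.normal(0.0, 0.1, size=n)        # random micro-structure parameter
        fine = np.array([solver(s, level) for s in samples])
        if level == 0:
            total += fine.mean()                       # coarsest level: plain Monte Carlo mean
        else:
            coarse = np.array([solver(s, level - 1) for s in samples])
            total += (fine - coarse).mean()            # correction term, same samples on both grids
    return total

# Many cheap coarse solves, progressively fewer expensive fine solves.
print(mlmc_estimate([4000, 400, 40]))
```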

  17. Propagation of Nuclear Data Uncertainties in Integral Measurements by Monte-Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Noguere, G.; Bernard, D.; De Saint-Jean, C. [CEA Cadarache, 13 - Saint Paul lez Durance (France)

    2006-07-01

    Full text of the publication follows: The generation of multi-group cross sections together with relevant uncertainties is fundamental to assess the quality of integral data. The key pieces of information needed to propagate the microscopic experimental uncertainties to macroscopic reactor calculations are (1) the experimental covariance matrices, (2) the correlations between the parameters of the model and (3) the covariance matrices for the multi-group cross sections. The propagation of microscopic errors by the Monte-Carlo technique was applied to determine the accuracy of the integral trends provided by the OSMOSE experiment carried out in the MINERVE reactor of the CEA Cadarache. The technique consists in coupling resonance shape analysis and deterministic codes. The integral trend and its accuracy obtained on the ²³⁷Np(n,γ) reaction will be presented. (author)

  18. Determination of uncertainties in the calculation of dose rates at transport and storage casks; Unsicherheiten bei der Berechnung von Dosisleistungen an Transport- und Lagerbehaeltern

    Energy Technology Data Exchange (ETDEWEB)

    Schloemer, Luc Laurent Alexander

    2014-12-17

    Compliance with the dose rate limits for transport and storage casks (TLB) for spent nuclear fuel from pressurised water reactors can be demonstrated by calculation. This includes the determination of the radioactive sources and the shielding capability of the cask. In this thesis the entire computational chain, which extends from the determination of the source terms to the final Monte Carlo transport calculation, is analysed, and the arising uncertainties are quantified not only by benchmarks but also by variational calculations. The background of these analyses is that comparison with measured dose rates at different TLBs shows an overestimation by the calculated values. According to the studies performed, the overestimation can be explained mainly by the detector characteristics for the measurement of the neutron dose rate and, in the case of the gamma dose rates, additionally by the energy group structure on which the calculation is based. It turns out that the consideration of the uncertainties occurring along the computational chain can lead to even greater overestimation. For the dose rate calculation for cask loadings with spent uranium fuel assemblies, an uncertainty of (+21/-28 ± 2) % (rel.) for the total gamma dose rate and of (+28±23/-55±4) % (rel.) for the total neutron dose rate is estimated. For mixed loadings with spent uranium and MOX fuel assemblies, an uncertainty of (+24±3/-27±2) % (rel.) for the total gamma dose rate and of (+28±23/-55±4) % (rel.) for the total neutron dose rate is quantified. The results show that the computational chain does not have to be modified, because the calculations performed lead to conservative dose rate predictions, even if large uncertainties arise in the neutron dose rate measurements. Thus, the uncertainties of the neutron dose rate measurement first have to be reduced in order to subsequently reduce the overestimation of the calculated dose rate. In the present thesis

  19. A Bayesian statistical method for quantifying model form uncertainty and two model combination methods

    International Nuclear Information System (INIS)

    Park, Inseok; Grandhi, Ramana V.

    2014-01-01

    Apart from parametric uncertainty, model form uncertainty as well as prediction error may be involved in the analysis of an engineering system. Model form uncertainty, inherent in selecting the best approximation from a model set, cannot be ignored, especially when the predictions by competing models show significant differences. In this research, a methodology based on maximum likelihood estimation is presented to quantify model form uncertainty using the measured differences between experimental and model outcomes, and is compared with a fully Bayesian estimation to demonstrate its effectiveness. While a method called the adjustment factor approach is utilized to propagate model form uncertainty alone into the prediction of a system response, a method called model averaging is utilized to incorporate both model form uncertainty and prediction error into it. A numerical problem of concrete creep is used to demonstrate the processes for quantifying model form uncertainty and implementing the adjustment factor approach and model averaging. Finally, the presented methodology is applied to characterize the engineering benefits of a laser peening process.
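
    A minimal sketch of model averaging in this spirit is given below: each candidate model's posterior probability is obtained from its agreement with the measured outcome, and the averaged prediction combines within-model prediction error with between-model spread. All numbers are illustrative, not the concrete-creep results of the paper.

```python
import numpy as np

# Sketch of model-form uncertainty via model averaging, assuming each candidate
# model provides a prediction and a likelihood against the experimental data.
predictions = np.array([410.0, 435.0, 452.0])       # competing model predictions
pred_sigma = np.array([12.0, 10.0, 15.0])           # each model's own prediction error

data, u_data = 428.0, 8.0                            # experimental outcome and its uncertainty
resid = data - predictions
scale = np.hypot(pred_sigma, u_data)                 # combined spread per model
lik = np.exp(-0.5 * (resid / scale) ** 2) / (np.sqrt(2 * np.pi) * scale)

prior = np.full(3, 1.0 / 3.0)
post = prior * lik / np.sum(prior * lik)             # model probabilities given the data

# Model averaging: combine within-model spread and between-model spread.
mean = np.sum(post * predictions)
var = np.sum(post * (pred_sigma**2 + (predictions - mean) ** 2))
print(f"weights = {np.round(post, 3)}, averaged prediction = {mean:.1f} ± {np.sqrt(var):.1f}")
```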

  20. A data-driven approach for modeling post-fire debris-flow volumes and their uncertainty

    Science.gov (United States)

    Friedel, Michael J.

    2011-01-01

    This study demonstrates the novel application of genetic programming to evolve nonlinear post-fire debris-flow volume equations from variables associated with a data-driven conceptual model of the western United States. The search space is constrained using a multi-component objective function that simultaneously minimizes root-mean squared and unit errors for the evolution of fittest equations. An optimization technique is then used to estimate the limits of nonlinear prediction uncertainty associated with the debris-flow equations. In contrast to a published multiple linear regression three-variable equation, linking basin area with slopes greater or equal to 30 percent, burn severity characterized as area burned moderate plus high, and total storm rainfall, the data-driven approach discovers many nonlinear and several dimensionally consistent equations that are unbiased and have less prediction uncertainty. Of the nonlinear equations, the best performance (lowest prediction uncertainty) is achieved when using three variables: average basin slope, total burned area, and total storm rainfall. Further reduction in uncertainty is possible for the nonlinear equations when dimensional consistency is not a priority and by subsequently applying a gradient solver to the fittest solutions. The data-driven modeling approach can be applied to nonlinear multivariate problems in all fields of study.

  1. Uncertainty of quantitative microbiological methods of pharmaceutical analysis.

    Science.gov (United States)

    Gunar, O V; Sakhno, N G

    2015-12-30

    The total uncertainty of quantitative microbiological methods used in pharmaceutical analysis consists of several components. Analysis of the most important sources of variability in quantitative microbiological methods demonstrated no effect of culture media and plate-count techniques on the estimated microbial count, whereas a highly significant effect of other factors (type of microorganism, pharmaceutical product, and individual reading and interpretation errors) was established. The most appropriate method of statistical analysis of such data was ANOVA, which enabled not only the effects of individual factors but also their interactions to be estimated. Considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the method of quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, which is appropriate for traditional plate count methods. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Uncertainty as Knowledge: Constraints on Policy Choices Provided by Analysis of Uncertainty

    Science.gov (United States)

    Lewandowsky, S.; Risbey, J.; Smithson, M.; Newell, B. R.

    2012-12-01

    Uncertainty forms an integral part of climate science, and it is often cited in connection with arguments against mitigative action. We argue that an analysis of uncertainty must consider existing knowledge as well as uncertainty, and the two must be evaluated with respect to the outcomes and risks associated with possible policy options. Although risk judgments are inherently subjective, an analysis of the role of uncertainty within the climate system yields two constraints that are robust to a broad range of assumptions. Those constraints are that (a) greater uncertainty about the climate system is necessarily associated with greater expected damages from warming, and (b) greater uncertainty translates into a greater risk of the failure of mitigation efforts. These ordinal constraints are unaffected by subjective or cultural risk-perception factors, they are independent of the discount rate, and they are independent of the magnitude of the estimate for climate sensitivity. The constraints mean that any appeal to uncertainty must imply a stronger, rather than weaker, need to cut greenhouse gas emissions than in the absence of uncertainty.

  3. Laser tracker TSPI uncertainty quantification via centrifuge trajectory

    Science.gov (United States)

    Romero, Edward; Paez, Thomas; Brown, Timothy; Miller, Timothy

    2009-08-01

    Sandia National Laboratories currently utilizes two laser tracking systems to provide time-space-position-information (TSPI) and high speed digital imaging of test units under flight. These laser trackers have been in operation for decades under the premise of theoretical accuracies based on system design and operator estimates. Advances in optical imaging and atmospheric tracking technology have enabled opportunities to provide more precise six degree of freedom measurements from these trackers. Applying these technologies to the laser trackers requires a quantified understanding of their current errors and uncertainty. It was well understood that an assortment of variables contributed to laser tracker uncertainty, but the magnitude of these contributions was not quantified and documented. A series of experiments was performed at Sandia National Laboratories' large centrifuge complex to quantify the TSPI uncertainties of Sandia National Laboratories laser tracker III. The centrifuge was used to provide repeatable and economical test-unit trajectories for TSPI comparison and uncertainty analysis. On a centrifuge, test units continuously follow a known trajectory with a known angular velocity. Each revolution may represent an independent test, which may be repeated many times over to obtain quantities of data practical for statistical analysis. Previously these tests were performed at Sandia's rocket sled track facility but were found to be costly, with challenges in measuring ground-truth TSPI. The centrifuge, along with on-board measurement equipment, was used to provide the known ground-truth position of the test units. This paper discusses the experimental design and techniques used to arrive at measures of laser tracker error and uncertainty.

  4. A new uncertainty reduction method for PWR cores with erbia bearing fuel

    International Nuclear Information System (INIS)

    Takeda, Toshikazu; Sano, Tadafumi; Kitada, Takanori; Kuroishi, Takeshi; Yamasaki, Masatoshi; Unesaki, Hironobu

    2008-01-01

    The concept of a PWR with erbia-bearing high-burnup fuel has been proposed. Erbia is added to all fuel with over 5% 235U enrichment so that the neutronics characteristics remain within those of fuel with 5% 235U enrichment. The prediction accuracy of the neutronics characteristics of erbia-bearing fuel is a concern because of the shortage of experimental data for such fuel. The purpose of the present work is to reduce this uncertainty. A new method has been proposed by combining the bias factor method and the cross-section adjustment method. For the PWR core, the uncertainty reduction factor (the rate of reduction of the uncertainty) of k_eff is 0.865 with the present method and 0.801 with the conventional bias factor method. Thus the prediction uncertainties are reduced by the present method compared to the bias factor method. (authors)

  5. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    International Nuclear Information System (INIS)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal levels (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than ∼20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers
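
    A minimal sketch of the fitting idea described above, assuming a Gaussian peak on a flat background and synthetic low-count data: the fit maximizes the total Poisson log-likelihood of the observed counts rather than minimizing χ².

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 50)

def model(params, x):
    # Assumed shape for illustration: Gaussian peak plus flat background.
    amp, mu, sig, bkg = params
    return bkg + amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)

# Synthetic low-count spectrum (a few counts per bin), Poisson-distributed.
true = (8.0, 0.5, 1.2, 1.0)
counts = rng.poisson(model(true, x))

def neg_log_likelihood(params):
    mu = model(params, x)
    if np.any(mu <= 0):
        return np.inf          # reject unphysical (non-positive) expected counts
    return -poisson.logpmf(counts, mu).sum()

res = minimize(neg_log_likelihood, x0=(5.0, 0.0, 1.0, 0.5), method="Nelder-Mead")
print("ML estimates:", res.x)
```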

  6. Comparison of the uncertainties of several European low-dose calibration facilities

    Science.gov (United States)

    Dombrowski, H.; Cornejo Díaz, N. A.; Toni, M. P.; Mihelic, M.; Röttger, A.

    2018-04-01

    The typical uncertainty of a low-dose rate calibration of a detector, which is calibrated in a dedicated secondary national calibration laboratory, is investigated, including measurements in the photon field of metrology institutes. Calibrations at low ambient dose equivalent rates (at the level of the natural ambient radiation) are needed when environmental radiation monitors are to be characterised. The uncertainties of calibration measurements in conventional irradiation facilities above ground are compared with those obtained in a low-dose rate irradiation facility located deep underground. Four laboratories quantitatively evaluated the uncertainties of their calibration facilities, in particular for calibrations at low dose rates (250 nSv/h and 1 μSv/h). For the first time, typical uncertainties of European calibration facilities are documented in a comparison and the main sources of uncertainty are revealed. All sources of uncertainties are analysed, including the irradiation geometry, scattering, deviations of real spectra from standardised spectra, etc. As a fundamental metrological consequence, no instrument calibrated in such a facility can have a lower total uncertainty in subsequent measurements. For the first time, the need to perform calibrations at very low dose rates (< 100 nSv/h) deep underground is underpinned on the basis of quantitative data.

  7. Proposed standardized definitions for vertical resolution and uncertainty in the NDACC lidar ozone and temperature algorithms - Part 3: Temperature uncertainty budget

    Science.gov (United States)

    Leblanc, Thierry; Sica, Robert J.; van Gijsel, Joanna A. E.; Haefele, Alexander; Payen, Guillaume; Liberti, Gianluigi

    2016-08-01

    typical of the NDACC temperature lidars transmitting at 355 nm. The combined temperature uncertainty ranges between 0.1 and 1 K below 60 km, with detection noise, saturation correction, and molecular extinction correction being the three dominant sources of uncertainty. Above 60 km and up to 10 km below the top of the profile, the total uncertainty increases exponentially from 1 to 10 K due to the combined effect of random noise and temperature tie-on. In the top 10 km of the profile, the accuracy of the profile mainly depends on that of the tie-on temperature. All other uncertainty components remain below 0.1 K throughout the entire profile (15-90 km), except the background noise correction uncertainty, which peaks around 0.3-0.5 K. It should be kept in mind that these quantitative estimates may be very different for other lidar instruments, depending on their altitude range and the wavelengths used.

  8. Aerosol-type retrieval and uncertainty quantification from OMI data

    Science.gov (United States)

    Kauppi, Anu; Kolmonen, Pekka; Laine, Marko; Tamminen, Johanna

    2017-11-01

    selection. The posterior probability distribution can provide a comprehensive characterisation of the uncertainty in this kind of problem for aerosol-type selection. As a result, the proposed method can account for the model error and also include the model selection uncertainty in the total uncertainty budget.

  9. Aerosol-type retrieval and uncertainty quantification from OMI data

    Directory of Open Access Journals (Sweden)

    A. Kauppi

    2017-11-01

    difficulty in model selection. The posterior probability distribution can provide a comprehensive characterisation of the uncertainty in this kind of problem for aerosol-type selection. As a result, the proposed method can account for the model error and also include the model selection uncertainty in the total uncertainty budget.

  10. Uncertainty in social dilemmas

    OpenAIRE

    Kwaadsteniet, Erik Willem de

    2007-01-01

    This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size uncertainty). Several researchers have therefore asked themselves the question as to how such uncertainty influences people’s choice behavior. These researchers have repeatedly concluded that uncertainty...

  11. A new robust adaptive controller for vibration control of active engine mount subjected to large uncertainties

    International Nuclear Information System (INIS)

    Fakhari, Vahid; Choi, Seung-Bok; Cho, Chang-Hyun

    2015-01-01

    This work presents a new robust model reference adaptive control (MRAC) for the vibration caused by a vehicle engine, using an electromagnetic type of active engine mount. The vibration isolation performance of the active mount with the robust controller is evaluated in the presence of large uncertainties. As a first step, an active mount with a linear solenoid actuator is prepared and its dynamic model is identified via experimental tests. Subsequently, a new robust MRAC based on the gradient method with σ-modification is designed by selecting a proper reference model. In designing the robust adaptive control, structured (parametric) uncertainties in the stiffness of the passive part of the mount and in the damping ratio of the active part of the mount are considered in order to investigate the robustness of the proposed controller. Experimental and simulation results are presented to evaluate performance, focusing on the robustness of the controller in the face of large uncertainties. The obtained results show that the proposed controller provides robust vibration control and effective vibration isolation even in the presence of large uncertainties. (paper)
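
    The gradient-type adaptation law with σ-modification mentioned above can be sketched on a toy first-order plant; the plant, reference model, gains, and reference signal below are invented for illustration and are not the identified engine-mount model.

```python
import numpy as np

# Plant: x' = a*x + b*u with uncertain a, b (toy values, not the engine-mount model).
a, b = 2.0, 1.0
# Reference model: xm' = am*xm + bm*r (desired response).
am, bm = -4.0, 4.0
gamma, sigma = 10.0, 0.1          # adaptation gain and sigma-modification leakage

dt, T = 1e-3, 10.0
x = xm = 0.0
theta_x = theta_r = 0.0           # adaptive feedback / feedforward gains

for k in range(int(T / dt)):
    t = k * dt
    r = np.sin(2 * np.pi * t)     # reference command (disturbance-like)
    u = theta_x * x + theta_r * r
    e = x - xm                    # tracking error
    # Gradient MRAC update with sigma-modification (leakage term keeps gains bounded).
    theta_x += dt * (-gamma * e * x - sigma * gamma * theta_x)
    theta_r += dt * (-gamma * e * r - sigma * gamma * theta_r)
    # Euler integration of plant and reference model.
    x  += dt * (a * x + b * u)
    xm += dt * (am * xm + bm * r)

print(f"final tracking error: {x - xm:.4f}, gains: {theta_x:.2f}, {theta_r:.2f}")
```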

  12. Uncertainty theory

    CERN Document Server

    Liu, Baoding

    2015-01-01

    When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...

  13. Top down arsenic uncertainty measurement in water and sediments from Guarapiranga dam (Brazil)

    Science.gov (United States)

    Faustino, M. G.; Lange, C. N.; Monteiro, L. R.; Furusawa, H. A.; Marques, J. R.; Stellato, T. B.; Soares, S. M. V.; da Silva, T. B. S. C.; da Silva, D. B.; Cotrim, M. E. B.; Pires, M. A. F.

    2018-03-01

    Assessing total arsenic measurements against a legal threshold demands more than an average-and-standard-deviation approach. Accordingly, the analytical measurement uncertainty was evaluated in order to comply with legal requirements and to allow an arsenic balance between the water and sediment compartments. A top-down approach to measurement uncertainty was applied to evaluate arsenic concentrations in water and sediments from the Guarapiranga dam (São Paulo, Brazil). Laboratory quality control data and arsenic interlaboratory test data were used in this approach to estimate the uncertainties associated with the methodology.
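
    One common way to carry out such a top-down combination, assumed here purely for illustration, is to merge a within-laboratory reproducibility term from quality-control data with a bias term from interlaboratory results in quadrature; all numbers below are invented.

```python
import numpy as np

# Hypothetical quality-control results for total As, e.g. repeated reference-material analyses
# expressed as measured/reference ratios.
qc = np.array([0.98, 1.03, 1.01, 0.97, 1.02, 0.99, 1.04])
u_Rw = np.std(qc, ddof=1)                    # within-lab reproducibility (relative)

# Hypothetical interlaboratory (proficiency test) results: relative lab biases and
# the relative uncertainty of the assigned values.
bias = np.array([0.02, -0.03, 0.01])
u_cref = 0.015
u_bias = np.sqrt(np.mean(bias**2) + u_cref**2)

u_c = np.sqrt(u_Rw**2 + u_bias**2)           # combined relative standard uncertainty
U = 2 * u_c                                  # expanded uncertainty, coverage factor k = 2
print(f"u_c = {100*u_c:.1f} %, U (k=2) = {100*U:.1f} %")
```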

  14. The Uranie platform: an Open-source software for optimisation, meta-modelling and uncertainty analysis

    OpenAIRE

    Blanchard, J-B.; Damblin, G.; Martinez, J-M.; Arnaud, G.; Gaudier, F.

    2018-01-01

    High-performance computing resources and the constant improvement of both numerical simulation accuracy and the experimental measurements with which simulations are confronted bring a new compulsory step to strengthen the credence given to simulation results: uncertainty quantification. This can have different meanings according to the intended goals (ranking uncertainty sources, reducing them, estimating precisely a critical threshold or an optimal working point), and it can require mathematic...

  15. Modeling of uncertainties in biochemical reactions.

    Science.gov (United States)

    Mišković, Ljubiša; Hatzimanikatis, Vassily

    2011-02-01

    Mathematical modeling is an indispensable tool for research and development in biotechnology and bioengineering. The formulation of kinetic models of biochemical networks depends on knowledge of the kinetic properties of the enzymes of the individual reactions. However, kinetic data acquired from experimental observations bring along uncertainties due to various experimental conditions and measurement methods. In this contribution, we propose a novel way to model the uncertainty in the enzyme kinetics and to predict quantitatively the responses of metabolic reactions to the changes in enzyme activities under uncertainty. The proposed methodology accounts explicitly for mechanistic properties of enzymes and physico-chemical and thermodynamic constraints, and is based on formalism from systems theory and metabolic control analysis. We achieve this by observing that kinetic responses of metabolic reactions depend: (i) on the distribution of the enzymes among their free form and all reactive states; (ii) on the equilibrium displacements of the overall reaction and that of the individual enzymatic steps; and (iii) on the net fluxes through the enzyme. Relying on this observation, we develop a novel, efficient Monte Carlo sampling procedure to generate all states within a metabolic reaction that satisfy imposed constraints. Thus, we derive the statistics of the expected responses of the metabolic reactions to changes in enzyme levels and activities, in the levels of metabolites, and in the values of the kinetic parameters. We present aspects of the proposed framework through an example of the fundamental three-step reversible enzymatic reaction mechanism. We demonstrate that the equilibrium displacements of the individual enzymatic steps have an important influence on kinetic responses of the enzyme. Furthermore, we derive the conditions that must be satisfied by a reversible three-step enzymatic reaction operating far away from the equilibrium in order to respond to

  16. Uncertainty and sensitivity analysis of chronic exposure results with the MACCS reactor accident consequence model

    International Nuclear Information System (INIS)

    Helton, J.C; Johnson, J.D; Rollstin, J.A; Shiver, A.W; Sprung, J.L

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the chronic exposure pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 75 imprecisely known input variables on the following reactor accident consequences are studied: crop growing-season dose, crop long-term dose, water ingestion dose, milk growing-season dose, long-term groundshine dose, long-term inhalation dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, total latent cancer fatalities, area-dependent cost, crop disposal cost, milk disposal cost, population-dependent cost, total economic cost, condemnation area, condemnation population, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: dry deposition velocity, transfer of cesium from animal feed to milk, transfer of cesium from animal feed to meat, ground concentration of Cs-134 at which the disposal of milk products will be initiated, transfer of Sr-90 from soil to legumes, maximum allowable ground concentration of Sr-90 for production of crops, fraction of cesium entering surface water that is consumed in drinking water, groundshine shielding factor, scale factor defining resuspension, dose reduction associated with decontamination, and ground concentration of I-131 at which disposal of crops will be initiated due to accidents that occur during the growing season. Reducing the uncertainty in the preceding variables was found to substantially reduce the uncertainty in the
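
    The sampling-and-regression workflow described above can be sketched with a toy three-parameter consequence model standing in for MACCS; scipy's Latin hypercube sampler and simple standardized regression coefficients are used here in place of the study's stepwise regression and partial correlation analysis.

```python
import numpy as np
from scipy.stats import qmc

# Toy "consequence model": dose as a function of three imprecisely known inputs
# (deposition velocity, feed-to-milk transfer, shielding factor) -- stand-ins for MACCS inputs.
def consequence(v_dep, f_milk, shield):
    return 100.0 * v_dep + 40.0 * f_milk + 5.0 * shield

# Latin hypercube sample of the input space (uniform ranges assumed for illustration).
sampler = qmc.LatinHypercube(d=3, seed=1)
unit = sampler.random(n=200)
lows, highs = np.array([0.001, 0.002, 0.1]), np.array([0.01, 0.02, 1.0])
X = qmc.scale(unit, lows, highs)
y = consequence(*X.T)

# Standardized regression coefficients as a simple sensitivity measure.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, coef in zip(["deposition velocity", "feed-to-milk transfer", "shielding"], src):
    print(f"{name}: SRC = {coef:+.2f}")
```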

  17. A novel dose uncertainty model and its application for dose verification

    International Nuclear Information System (INIS)

    Jin Hosang; Chung Heetaek; Liu Chihray; Palta, Jatinder; Suh, Tae-Suk; Kim, Siyong

    2005-01-01

    Based on a statistical approach, a novel dose uncertainty model was introduced that considers both nonspatial and spatial dose deviations. Non-space-oriented uncertainty is mainly caused by dosimetric uncertainties, and space-oriented dose uncertainty is the uncertainty caused by all spatial displacements. Assuming these two parts are independent, the dose difference between measurement and calculation is a linear combination of nonspatial and spatial dose uncertainties. Two assumptions were made: (1) the relative standard deviation of nonspatial dose uncertainty is inversely proportional to the dose standard deviation σ, and (2) the spatial dose uncertainty is proportional to the gradient of the dose. The total dose uncertainty is a quadratic sum of the nonspatial and spatial uncertainties. The uncertainty model provides the tolerance dose bound for comparison between calculation and measurement. In the statistical uncertainty model based on a Gaussian distribution, a confidence level of 3σ theoretically confines 99.74% of measurements within the bound. By setting the confidence limit, the tolerance bound for dose comparison can be made analogous to that of existing dose comparison methods (e.g., a composite distribution analysis, a γ test, a χ evaluation, and a normalized agreement test method). However, the model considers the inherent dose uncertainty characteristics of the test points by taking into account the space-specific history of dose accumulation, while the previous methods apply a single tolerance criterion to all points, although the dose uncertainty at each point differs significantly from the others. Three types of one-dimensional test dose distributions (a single large field, a composite flat field made by two identical beams, and three-beam intensity-modulated fields) were made to verify the robustness of the model. For each test distribution, the dose bound predicted by the uncertainty model was compared with simulated measurements. The simulated
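
    In compact form, the stated assumptions amount to the following relation (notation assumed here: σ for uncertainty components, D for dose, ∇D for the local dose gradient):

```latex
% Total dose uncertainty as the quadrature sum of a nonspatial and a spatial part,
% with the spatial part scaling with the local dose gradient.
\sigma_{\mathrm{total}}(\mathbf{r}) =
\sqrt{\sigma_{\mathrm{nonspatial}}^{2}(\mathbf{r}) + \sigma_{\mathrm{spatial}}^{2}(\mathbf{r})},
\qquad
\sigma_{\mathrm{spatial}}(\mathbf{r}) \propto \left|\nabla D(\mathbf{r})\right|
```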

  18. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the above-mentioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  19. Entanglement criteria via the uncertainty relations in su(2) and su(1,1) algebras: Detection of non-Gaussian entangled states

    International Nuclear Information System (INIS)

    Nha, Hyunchul; Kim, Jaewan

    2006-01-01

    We derive a class of inequalities, from the uncertainty relations of the su(1,1) and su(2) algebras in conjunction with partial transposition, that must be satisfied by any separable two-mode state. These inequalities are presented in terms of the su(2) operators J_x = (a†b + ab†)/2, J_y = (a†b − ab†)/2i, and the total photon number N_a + N_b. They include as special cases the inequality derived by Hillery and Zubairy [Phys. Rev. Lett. 96, 050503 (2006)], and the one by Agarwal and Biswas [New J. Phys. 7, 211 (2005)]. In particular, optimization over the whole set of inequalities leads to the criterion obtained by Agarwal and Biswas. We show that this optimal criterion can detect entanglement for a broad class of non-Gaussian entangled states, i.e., the su(2) minimum-uncertainty states. Experimental schemes to test the optimal criterion are also discussed, especially one using linear optical devices and photodetectors.

  20. New entropic uncertainty relations and tests of PMD-SQS-optimal limits in pion-nucleus scattering

    International Nuclear Information System (INIS)

    Ion, D.B.; Ion, M.L.

    2002-01-01

    In this paper we define a new kind of quantum entropy, namely the nonextensivity-conjugated entropy S̄_Jθ(p,q). We then prove the optimal nonextensivity-conjugated entropic uncertainty relations (ONC-EUR) as well as the optimal nonextensivity-conjugated entropic uncertainty bands (ONC-EUB). The results of the first experimental test of the ONC-EUB in pion-nucleus scattering, obtained by using 49 sets of experimental phase shift analyses, are presented. Strong evidence for the saturation of the PMD-SQS optimum limit is obtained with high accuracy (confidence level > 99%) for the nonextensivities 1/2 ≤ p ≤ 2/3 and q = p/(2p-1). (authors)

  1. Application of the emission inventory model TEAM: Uncertainties in dioxin emission estimates for central Europe

    NARCIS (Netherlands)

    Pulles, M.P.J.; Kok, H.; Quass, U.

    2006-01-01

    This study uses an improved emission inventory model to assess the uncertainties in emissions of dioxins and furans associated with both knowledge on the exact technologies and processes used, and with the uncertainties of both activity data and emission factors. The annual total emissions for the

  2. EXPERIMENTAL RESEARCH OF REGENERATIVE FEATURES IN BONE TISSUES AROUND IMPLANTS AFTER ONE-STAGE BILATERAL TOTAL HIP REPLACEMENT

    Directory of Open Access Journals (Sweden)

    V. M. Mashkov

    2012-01-01

    Objective: to investigate the specific features of the regenerative processes in bone tissue around implants after one-stage bilateral total hip replacement in an experimental setting. Material and methods: 27 total hip replacement operations were performed in 18 chinchilla rabbits, which received bipolar femoral endoprostheses made of titanium alloy PT-38 (a single type-size, metal-on-metal friction pair, neck-shaft angle of 165 degrees): unilateral total hip replacement was performed in 9 animals (control group), and one-stage bilateral total hip replacement was performed in 9 animals (experimental group). During the study the animals underwent radiological and clinical follow-up. After the experiment, histological examination of the tissues around the endoprosthesis components was performed. Results and conclusions: In the early period after one-stage bilateral total hip replacement, more pronounced changes of the bone tissue around the implants, in the form of thinning and reduced density, were found. One-stage bilateral total hip replacement did not essentially influence the rate of osteogenesis around the endoprosthesis components in comparison with unilateral total hip replacement, so that at later observation times the fixation of the endoprosthesis components did not differ between the two groups.

  3. Range uncertainties in proton therapy and the role of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Paganetti, Harald

    2012-01-01

    The main advantages of proton therapy are the reduced total energy deposited in the patient as compared to photon techniques and the finite range of the proton beam. The latter adds an additional degree of freedom to treatment planning. The range in tissue is associated with considerable uncertainties caused by imaging, patient setup, beam delivery and dose calculation. Reducing the uncertainties would allow a reduction of the treatment volume and thus allow a better utilization of the advantages of protons. This paper summarizes the role of Monte Carlo simulations when aiming at a reduction of range uncertainties in proton therapy. Differences in dose calculation when comparing Monte Carlo with analytical algorithms are analyzed as well as range uncertainties due to material constants and CT conversion. Range uncertainties due to biological effects and the role of Monte Carlo for in vivo range verification are discussed. Furthermore, the current range uncertainty recipes used at several proton therapy facilities are revisited. We conclude that a significant impact of Monte Carlo dose calculation can be expected in complex geometries where local range uncertainties due to multiple Coulomb scattering will reduce the accuracy of analytical algorithms. In these cases Monte Carlo techniques might reduce the range uncertainty by several mm. (topical review)

  4. Moving Beyond 2% Uncertainty: A New Framework for Quantifying Lidar Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Newman, Jennifer F.; Clifton, Andrew

    2017-03-08

    Remote sensing of wind using lidar is revolutionizing wind energy. However, current generations of wind lidar are ascribed a climatic value of uncertainty, which is based on a poor description of lidar sensitivity to external conditions. In this presentation, we show how it is important to consider the complete lidar measurement process to define the measurement uncertainty, which in turn offers the ability to define a much more granular and dynamic measurement uncertainty. This approach is a progression from the 'white box' lidar uncertainty method.

  5. ThermoData Engine: Extension to Solvent Design and Multi-component Process Stream Property Calculations with Uncertainty Analysis

    DEFF Research Database (Denmark)

    Diky, Vladimir; Chirico, Robert D.; Muzny, Chris

    ThermoData Engine (TDE, NIST Standard Reference Databases 103a and 103b) is the first product that implements the concept of Dynamic Data Evaluation in the fields of thermophysics and thermochemistry, which includes maintaining a comprehensive and up-to-date database of experimentally measured … property values and an expert system for data analysis and generation of recommended property values at the specified conditions, along with uncertainties on demand. The most recent extension of TDE covers solvent design and multi-component process stream property calculations with uncertainty analysis … variations). Predictions can be compared to the available experimental data, and uncertainties are estimated for all efficiency criteria. Calculations of the properties of multi-component streams, including composition at phase equilibria (flash calculations), are at the heart of process simulation engines…

  6. Evaluation Procedures of Random Uncertainties in Theoretical Calculations of Cross Sections and Rate Coefficients

    International Nuclear Information System (INIS)

    Kokoouline, V.; Richardson, W.

    2014-01-01

    Uncertainties in theoretical calculations may include: • Systematic uncertainty: due to the applicability limits of the chosen model. • Random uncertainty: within a model, uncertainties of the model parameters result in uncertainties of the final results (such as cross sections). If uncertainties of experimental and theoretical data are known, then for the purpose of data evaluation (to produce recommended data) one should combine the two data sets to produce best-guess data with the smallest possible uncertainty. In many situations, it is possible to assess the accuracy of theoretical calculations because theoretical models usually rely on parameters that are uncertain, but not completely random, i.e. the uncertainties of the parameters of the models are approximately known. If there are one or several such parameters with corresponding uncertainties, even if some or all parameters are correlated, this approach gives a conceptually simple way to calculate uncertainties of the final cross sections (uncertainty propagation). Numerically, the statistical approach to uncertainty propagation can be computationally expensive. However, in situations where uncertainties are considered to be as important as the actual cross sections (for data validation or benchmark calculations, for example), such a numerical effort is justified. Having data from different sources (say, from theory and experiment), a systematic statistical approach allows one to compare the data and produce “unbiased” evaluated data with improved uncertainties, if uncertainties of the initial data from different sources are available. Without uncertainties, data evaluation/validation becomes impossible. This is the reason why theoreticians should assess the accuracy of their calculations in one way or another. A statistical and systematic approach, similar to the one described above, is preferable.
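
    As a small numerical illustration of the statistical propagation sketched above, correlated samples of two model parameters can be pushed through a toy cross-section formula; the nominal values, covariance, and the formula itself are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical model parameters (e.g. a resonance position and width) with a covariance matrix
# encoding their uncertainties and a correlation of 0.3.
mean = np.array([1.50, 0.20])
cov = np.array([[0.01**2, 0.3 * 0.01 * 0.02],
                [0.3 * 0.01 * 0.02, 0.02**2]])

def cross_section(E, E0, gamma):
    """Toy Breit-Wigner-like cross section (illustrative, not a real evaluation)."""
    return 1.0 / ((E - E0) ** 2 + (gamma / 2) ** 2)

E = 1.45
samples = rng.multivariate_normal(mean, cov, size=20000)
sigma_samples = cross_section(E, samples[:, 0], samples[:, 1])

# Mean and standard deviation of the propagated cross section at this energy.
print(f"sigma(E={E}) = {sigma_samples.mean():.1f} +/- {sigma_samples.std(ddof=1):.1f} (arb. units)")
```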

  7. Model uncertainties of local-thermodynamic-equilibrium K-shell spectroscopy

    Science.gov (United States)

    Nagayama, T.; Bailey, J. E.; Mancini, R. C.; Iglesias, C. A.; Hansen, S. B.; Blancard, C.; Chung, H. K.; Colgan, J.; Cosse, Ph.; Faussurier, G.; Florido, R.; Fontes, C. J.; Gilleron, F.; Golovkin, I. E.; Kilcrease, D. P.; Loisel, G.; MacFarlane, J. J.; Pain, J.-C.; Rochau, G. A.; Sherrill, M. E.; Lee, R. W.

    2016-09-01

    Local-thermodynamic-equilibrium (LTE) K-shell spectroscopy is a common tool to diagnose electron density, ne, and electron temperature, Te, of high-energy-density (HED) plasmas. Knowing the accuracy of such diagnostics is important to provide quantitative conclusions of many HED-plasma research efforts. For example, Fe opacities were recently measured at multiple conditions at the Sandia National Laboratories Z machine (Bailey et al., 2015), showing significant disagreement with modeled opacities. Since the plasma conditions were measured using K-shell spectroscopy of tracer Mg (Nagayama et al., 2014), one concern is the accuracy of the inferred Fe conditions. In this article, we investigate the K-shell spectroscopy model uncertainties by analyzing the Mg spectra computed with 11 different models at the same conditions. We find that the inferred conditions differ by ±20-30% in ne and ±2-4% in Te depending on the choice of spectral model. Also, we find that half of the Te uncertainty comes from ne uncertainty. To refine the accuracy of the K-shell spectroscopy, it is important to scrutinize and experimentally validate line-shape theory. We investigate the impact of the inferred ne and Te model uncertainty on the Fe opacity measurements. Its impact is small and does not explain the reported discrepancies.

  8. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    International Nuclear Information System (INIS)

    Elert, M.

    1996-09-01

    In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g. in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come into focus in this study: There are large differences in the predicted soil hydrology and, as a consequence, also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root

  9. Experimental Approach for the Uncertainty Assessment of 3D Complex Geometry Dimensional Measurements Using Computed Tomography at the mm and Sub-mm Scales.

    Science.gov (United States)

    Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A; Ontiveros, Sinué; Tosello, Guido

    2017-05-16

    The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems' traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such CMS would still hold all the typical limitations of optical and tactile techniques, particularly when measuring miniaturized components with complex 3D geometries and their inability to measure inner parts. To validate the presented method, the most accepted standard currently available for CT sensors, the Verein Deutscher Ingenieure/Verband Deutscher Elektrotechniker (VDI/VDE) guideline 2630-2.1 is applied. Considering the high number of influence factors in CT and their impact on the measuring result, two different techniques for surface extraction are also considered to obtain a realistic determination of the influence of data processing on uncertainty. The uncertainty assessment of a workpiece used for micro mechanical material testing is firstly used to confirm the method, due to its feasible calibration by an optical CMS. Secondly, the measurement of a miniaturized dental file with 3D complex geometry is carried out. The estimated uncertainties are eventually compared with the component's calibration and the micro manufacturing tolerances to demonstrate the suitability of the presented CT calibration procedure. The 2U/T ratios resulting from the

  10. Phenomena-based Uncertainty Quantification in Predictive Coupled- Physics Reactor Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Marvin [Texas A & M Univ., College Station, TX (United States)

    2017-06-12

    This project has sought to develop methodologies, tailored to phenomena that govern nuclear reactor behavior, to produce predictions (including uncertainties) for quantities of interest (QOIs) in the simulation of steady-state and transient reactor behavior. Examples of such predictions include, for each QOI, an expected value as well as a distribution around this value and an assessment of how much of the distribution stems from each major source of uncertainty. The project has sought to test its methodologies by comparing against measured experimental outcomes. The main experimental platform has been a 1-MW TRIGA reactor. This is a flexible platform for a wide range of experiments, including steady state with and without temperature feedback, slow transients with and without feedback, and rapid transients with strong feedback. The original plan was for the primary experimental data to come from in-core neutron detectors. We made considerable progress toward this goal but did not get as far along as we had planned. We have designed, developed, installed, and tested vertical guide tubes, each able to accept a detector or stack of detectors that can be moved axially inside the tube, and we have tested several new detector designs. One of these shows considerable promise.

  11. Phenomena-based Uncertainty Quantification in Predictive Coupled- Physics Reactor Simulations

    International Nuclear Information System (INIS)

    Adams, Marvin

    2017-01-01

    This project has sought to develop methodologies, tailored to phenomena that govern nuclear reactor behavior, to produce predictions (including uncertainties) for quantities of interest (QOIs) in the simulation of steady-state and transient reactor behavior. Examples of such predictions include, for each QOI, an expected value as well as a distribution around this value and an assessment of how much of the distribution stems from each major source of uncertainty. The project has sought to test its methodologies by comparing against measured experimental outcomes. The main experimental platform has been a 1-MW TRIGA reactor. This is a flexible platform for a wide range of experiments, including steady state with and without temperature feedback, slow transients with and without feedback, and rapid transients with strong feedback. The original plan was for the primary experimental data to come from in-core neutron detectors. We made considerable progress toward this goal but did not get as far along as we had planned. We have designed, developed, installed, and tested vertical guide tubes, each able to accept a detector or stack of detectors that can be moved axially inside the tube, and we have tested several new detector designs. One of these shows considerable promise.

  12. Uncertainties in Cancer Risk Coefficients for Environmental Exposure to Radionuclides. An Uncertainty Analysis for Risk Coefficients Reported in Federal Guidance Report No. 13

    Energy Technology Data Exchange (ETDEWEB)

    Pawel, David [U.S. Environmental Protection Agency; Leggett, Richard Wayne [ORNL; Eckerman, Keith F [ORNL; Nelson, Christopher [U.S. Environmental Protection Agency

    2007-01-01

    Federal Guidance Report No. 13 (FGR 13) provides risk coefficients for estimation of the risk of cancer due to low-level exposure to each of more than 800 radionuclides. Uncertainties in risk coefficients were quantified in FGR 13 for 33 cases (exposure to each of 11 radionuclides by each of three exposure pathways) on the basis of sensitivity analyses in which various combinations of plausible biokinetic, dosimetric, and radiation risk models were used to generate alternative risk coefficients. The present report updates the uncertainty analysis in FGR 13 for the cases of inhalation and ingestion of radionuclides and expands the analysis to all radionuclides addressed in that report. The analysis indicates that most risk coefficients for inhalation or ingestion of radionuclides are determined within a factor of 5 or less by current information. That is, application of alternate plausible biokinetic and dosimetric models and radiation risk models (based on the linear, no-threshold hypothesis with an adjustment for the dose and dose rate effectiveness factor) is unlikely to change these coefficients by more than a factor of 5. In this analysis the assessed uncertainty in the radiation risk model was found to be the main determinant of the uncertainty category for most risk coefficients, but conclusions concerning the relative contributions of risk and dose models to the total uncertainty in a risk coefficient may depend strongly on the method of assessing uncertainties in the risk model.

  13. A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report: Updated in 2016

    Energy Technology Data Exchange (ETDEWEB)

    Sisterson, Douglas [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-01-15

    The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. ARM currently provides data and supporting metadata (information about the data or data quality) to its users through several sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, ARM relies on Instrument Mentors and the ARM Data Quality Office to ensure, assess, and report measurement quality. Therefore, an easily accessible, well-articulated estimate of ARM measurement uncertainty is needed. This report is a continuation of the work presented by Campos and Sisterson (2015) and provides additional uncertainty information from instruments not available in their report. As before, a total measurement uncertainty has been calculated as a function of the instrument uncertainty (calibration factors), the field uncertainty (environmental factors), and the retrieval uncertainty (algorithm factors). This study will not expand on methods for computing these uncertainties. As before, it will focus on the practical identification, characterization, and inventory of the measurement uncertainties already available to the ARM community through the ARM Instrument Mentors and their ARM instrument handbooks. This study continues the first steps towards reporting ARM measurement uncertainty as: (1) identifying how the uncertainty of individual ARM measurements is currently expressed, (2) identifying a consistent approach to measurement uncertainty, and then (3) reclassifying ARM instrument measurement uncertainties in a common framework.
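
    Assuming the three contributions are independent, a quadrature combination of the kind described above reduces to a few lines; the numbers below are illustrative placeholders, not values taken from an ARM instrument handbook.

```python
import numpy as np

# Illustrative relative standard uncertainties for one hypothetical datastream.
u_instrument = 0.02   # calibration factors
u_field = 0.03        # environmental factors
u_retrieval = 0.04    # algorithm factors

# Assuming the three contributions are independent, combine them in quadrature.
u_total = np.sqrt(u_instrument**2 + u_field**2 + u_retrieval**2)
print(f"total relative measurement uncertainty ~ {100 * u_total:.1f} %")
```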

  14. Horizontal scale calibration of theodolites and total station using a gauge index table

    International Nuclear Information System (INIS)

    Vieira, L H B; Filho, W L O; Barros, W S

    2015-01-01

    This paper presents a methodology to calibrate the horizontal scale of theodolites and total stations using a high-accuracy index table. The calibration follows the method of circular scales and precision polygons (also called the Rosette Method [1] or multistep method). This method consists of comparing the angles of two circular divisions in all possible relative positions. Index table errors and theodolite horizontal scale errors were obtained using the method of least squares, which is used to process the data from the Rosette Method. An experimental setup was used to evaluate this methodology, and the details of the mechanical assembly are also described in this paper. Several theodolites and total stations were calibrated using the proposed system, and the results indicate that the method is suitable for calibrating the different models available on the market. The system showed good stability over time, with measurement uncertainties around 1″ (one arcsecond) depending on instrument features. (paper)

  15. The deuteron-radius puzzle is alive: A new analysis of nuclear structure uncertainties

    Science.gov (United States)

    Hernandez, O. J.; Ekström, A.; Nevo Dinur, N.; Ji, C.; Bacca, S.; Barnea, N.

    2018-03-01

    To shed light on the deuteron radius puzzle we analyze the theoretical uncertainties of the nuclear structure corrections to the Lamb shift in muonic deuterium. We find that the discrepancy between the calculated two-photon exchange correction and the corresponding experimentally inferred value by Pohl et al. [1] remains. The present result is consistent with our previous estimate, although the discrepancy is reduced from 2.6σ to about 2σ. The error analysis includes statistical as well as systematic uncertainties stemming from the use of nucleon-nucleon interactions derived from chiral effective field theory at various orders. We therefore conclude that nuclear theory uncertainty is more likely not the source of the discrepancy.

  16. Development of Uncertainty Quantification Method for MIR-PIV Measurement using BOS Technique

    International Nuclear Information System (INIS)

    Seong, Jee Hyun; Song, Min Seop; Kim, Eung Soo

    2014-01-01

    Matching Index of Refraction (MIR) is frequently used to obtain high-quality PIV measurement data. Even a small distortion caused by an unmatched refractive index of the test section can result in uncertainty problems. In this context, it is desirable to construct a new concept for checking MIR errors and the resulting uncertainty of the PIV measurement. This paper proposes such an experimental concept and the corresponding results. This study developed an MIR uncertainty quantification method for PIV measurement using the SBOS technique. From the BOS reference data, a reliable SBOS experimental procedure was constructed. Then, by combining the SBOS technique with the MIR-PIV technique, the velocity vector field and the refraction displacement vector field were measured simultaneously. MIR errors are calculated through a mathematical relation into which the PIV and SBOS data are inserted. These errors are also verified by another BOS experiment. Finally, by applying the calculated MIR-PIV uncertainty, a corrected velocity vector field can be obtained regardless of MIR errors

  17. Measure of total nuclear absorption cross sections of photons in the Δ (1232) range. Studied nucleus 12C and 208Pb

    International Nuclear Information System (INIS)

    Ghedira, Lotfi

    1984-01-01

    We present a new determination of the nuclear total photoabsorption cross section on carbon and lead in the Δ33 resonance region: 133 MeV ≤ Eγ ≤ 531 MeV. The principle of the experimental method is the measurement of the hadronic products emitted during the absorption process. We used the Saclay tagged photon beam obtained by in-flight annihilation of positrons. The reaction target was placed inside a cylindrical NaI scintillator covering 93% of 4π. A set of plastic scintillators covered the inside of the detector in order to determine the charge multiplicity of the hadronic events. The very large efficiency of this detector allowed the detection of 70 to 80% of the total number of hadronic events, so that the uncertainties induced by the extrapolation of the undetected part of the cross section are small. The total hadronic cross section was found to be close to 400 μb/nucleon at the maximum for both C and Pb. In addition to the total cross section, we give values for a few partial cross sections. Previous experimental work as well as theoretical predictions are compared to our results. (author) [fr

  18. Introducing nonpoint source transferable quotas in nitrogen trading: The effects of transaction costs and uncertainty.

    Science.gov (United States)

    Zhou, Xiuru; Ye, Weili; Zhang, Bing

    2016-03-01

    Transaction costs and uncertainty are considered to be significant obstacles in the emissions trading market, especially for including nonpoint sources in water quality trading. This study develops a nonlinear programming model to simulate how uncertainty and transaction costs affect the performance of point/nonpoint source (PS/NPS) water quality trading in the Lake Tai watershed, China. The results demonstrate that PS/NPS water quality trading is a highly cost-effective instrument for emissions abatement in the Lake Tai watershed, which can save 89.33% on pollution abatement costs compared to trading only between nonpoint sources. However, uncertainty can significantly reduce this cost-effectiveness by reducing trading volume. In addition, transaction costs from bargaining and decision making raise total pollution abatement costs directly and cause the offset system to deviate from the optimal state, while proper investment in the monitoring and measurement of nonpoint emissions can decrease uncertainty and save on total abatement costs. Finally, we show that the dispersed ownership of China's farmland will bring high uncertainty and transaction costs into the PS/NPS offset system, even if the pollution abatement cost is lower than for point sources. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Controlling quantum memory-assisted entropic uncertainty in non-Markovian environments

    Science.gov (United States)

    Zhang, Yanliang; Fang, Maofa; Kang, Guodong; Zhou, Qingping

    2018-03-01

    The quantum-memory-assisted entropic uncertainty relation (QMA EUR) shows that the lower bound of Maassen and Uffink's entropic uncertainty relation (without quantum memory) can be broken. In this paper, we investigated the dynamical features of the QMA EUR in Markovian and non-Markovian dissipative environments. It is found that the dynamics of the QMA EUR are oscillatory in a non-Markovian environment, and that strong interaction is favorable for suppressing the amount of entropic uncertainty. Furthermore, we presented two schemes, based on prior weak measurement and posterior weak measurement reversal, to control the amount of entropic uncertainty of Pauli observables in dissipative environments. The numerical results show that the prior weak measurement can effectively reduce the peak values of the entropic uncertainty dynamics in a non-Markovian environment over long periods of time, but is ineffective on the minima of the dynamics. The posterior weak measurement reversal, however, has the opposite effect on the dynamics. Moreover, the success probability depends entirely on the quantum measurement strength. We hope that our proposal could be verified experimentally and might possibly have future applications in quantum information processing.

  20. Uncertainty Reduction Via Parameter Design of A Fast Digital Integrator for Magnetic Field Measurement

    CERN Document Server

    Arpaia, P; Lucariello, G; Spiezia, G

    2007-01-01

    At the European Organization for Nuclear Research (CERN), within the new Large Hadron Collider (LHC) project, measurements of magnetic flux with an uncertainty of 10 ppm over a few decades of Hz for several minutes are required. With this aim, a new Fast Digital Integrator (FDI) has been developed in cooperation with the University of Sannio, Italy [1]. This paper deals with the final design tuning for achieving the target uncertainty by means of experimental statistical parameter design.

  1. Effects of uncertainties of experimental data in the benchmarking of a computer code

    International Nuclear Information System (INIS)

    Meulemeester, E. de; Bouffioux, P.; Demeester, J.

    1980-01-01

    Fuel rod performance modelling is sometimes approached in an academic way. The experience of COMETHE code development since 1967 has clearly shown that benchmarking is the most important part of model development. Unfortunately, it requires well characterized data. Although the two examples presented here were not intended for benchmarking, as the COMETHE calculations were only performed to interpret the results, they illustrate the effects of a lack of fuel characterization and of power history uncertainties

  2. PIV Uncertainty Methodologies for CFD Code Validation at the MIR Facility

    Energy Technology Data Exchange (ETDEWEB)

    Sabharwall, Piyush [Idaho National Lab. (INL), Idaho Falls, ID (United States); Skifton, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States); Stoots, Carl [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kim, Eung Soo [Idaho National Lab. (INL), Idaho Falls, ID (United States); Conder, Thomas [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2013-12-01

    Currently, computational fluid dynamics (CFD) is widely used in the nuclear thermal hydraulics field for design and safety analyses. To validate CFD codes, high-quality multi-dimensional flow field data are essential. The Matched Index of Refraction (MIR) Flow Facility at Idaho National Laboratory has a unique capability to contribute to the development of validated CFD codes through the use of Particle Image Velocimetry (PIV). The significance of the MIR facility is that it permits non-intrusive velocity measurement techniques, such as PIV, through complex models without requiring probes and other instrumentation that disturb the flow. At the heart of any PIV calculation is the cross-correlation, which is used to estimate the displacement of particles in some small part of the image over the time span between two images. This image displacement is indicated by the location of the largest peak. In the MIR facility, uncertainty quantification is a challenging task due to the use of optical measurement techniques. Currently, this study is developing a reliable method to analyze the uncertainty and sensitivity of the measured data, together with a computer code to perform this analysis automatically. The main objective of this study is to develop a well-established uncertainty quantification method for the MIR Flow Facility, which involves many complicated uncertainty factors. In this study, the uncertainty sources are resolved in depth by categorizing them into uncertainties from the MIR flow loop and the PIV system (including particle motion, image distortion, and data processing). Then, each uncertainty source is mathematically modeled or adequately defined. Finally, this study will provide a method and procedure to quantify the experimental uncertainty in the MIR Flow Facility with sample test results.
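
    The cross-correlation step described above can be sketched in a few lines: correlate two interrogation windows via FFT and take the location of the largest peak as the integer-pixel displacement (sub-pixel peak fitting, windowing, and outlier rejection are omitted; the windows here are synthetic).

```python
import numpy as np

rng = np.random.default_rng(3)

# Two synthetic 32x32 interrogation windows: the second is the first shifted by (dy, dx) = (3, -2).
win1 = rng.random((32, 32))
win2 = np.roll(win1, shift=(3, -2), axis=(0, 1))

# FFT-based circular cross-correlation of the zero-mean windows.
a = win1 - win1.mean()
b = win2 - win2.mean()
corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real

# The location of the largest peak gives the integer-pixel displacement
# (wrapped back to +/- half the window size).
peak = np.unravel_index(np.argmax(corr), corr.shape)
shape = np.array(corr.shape)
disp = (np.array(peak) + shape // 2) % shape - shape // 2
print("estimated displacement (dy, dx):", tuple(disp))
```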

  3. Uncertainties on decay heat power due to fission product data uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Rebah, J

    1998-08-01

    Following a reactor shutdown, after the fission process has completely faded out, a significant quantity of energy known as 'decay heat' continues to be generated in the core. Accurate knowledge of the decay heat released in a fuel after reactor shutdown is necessary for residual heat removal in normal operation or emergency shutdown conditions, for the design of cooling systems, and for spent fuel handling. In the summation calculation method, the decay heat is equal to the sum of the energies released by the individual fission products. When all nuclides that contribute significantly to the total decay heat are taken into account, the results of the summation method are comparable with measured values. Without complete covariance information for the nuclear data, however, published uncertainty analyses of fission product decay heat summation calculations give underestimated errors. In this work we calculate the uncertainties on the decay heat associated with the summation calculations through a variance/covariance analysis that takes into account correlations between the basic nuclear data. The contribution to the total error of the decay heat comes from uncertainties in three terms: fission yields, half-lives, and average beta and gamma decay energies. (author)
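    For orientation, the summation calculation and its first-order uncertainty propagation can be written schematically as below; this is the generic textbook form, not necessarily the exact formulation used in the cited work.

```latex
% Decay heat at cooling time t as a sum over fission products i:
% decay rate (lambda_i N_i) times the mean beta + gamma energy released per decay.
P(t) \;=\; \sum_i \lambda_i \, N_i(t) \left( \bar{E}_i^{\beta} + \bar{E}_i^{\gamma} \right)

% First-order variance/covariance propagation over the basic data
% p = (fission yields, half-lives, mean decay energies):
\sigma_P^2(t) \;=\; \sum_{j,k} \frac{\partial P}{\partial p_j}\,
  \mathrm{cov}(p_j, p_k)\, \frac{\partial P}{\partial p_k}
```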

  4. Measurement of the total cross section from elastic scattering in $pp$ collisions at $\sqrt{s}=8$ TeV with the ATLAS detector

    CERN Document Server

    Aaboud, Morad; Abbott, Brad; Abdallah, Jalal; Abdinov, Ovsat; Abeloos, Baptiste; Aben, Rosemarie; AbouZeid, Ossama; Abraham, Nicola; Abramowicz, Halina; Abreu, Henso; Abreu, Ricardo; Abulaiti, Yiming; Acharya, Bobby Samir; Adachi, Shunsuke; Adamczyk, Leszek; Adams, David; Adelman, Jahred; Adomeit, Stefanie; Adye, Tim; Affolder, Tony; Agatonovic-Jovin, Tatjana; Agricola, Johannes; Aguilar-Saavedra, Juan Antonio; Ahlen, Steven; Ahmadov, Faig; Aielli, Giulio; Akerstedt, Henrik; Åkesson, Torsten Paul Ake; Akimov, Andrei; Alberghi, Gian Luigi; Albert, Justin; Albrand, Solveig; Alconada Verzini, Maria Josefina; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexopoulos, Theodoros; Alhroob, Muhammad; Ali, Babar; Aliev, Malik; Alimonti, Gianluca; Alison, John; Alkire, Steven Patrick; Allbrooke, Benedict; Allen, Benjamin William; Allport, Phillip; Aloisio, Alberto; Alonso, Alejandro; Alonso, Francisco; Alpigiani, Cristiano; Alshehri, Azzah Aziz; Alstaty, Mahmoud; Alvarez Gonzalez, Barbara; Άlvarez Piqueras, Damián; Alviggi, Mariagrazia; Amadio, Brian Thomas; Amako, Katsuya; Amaral Coutinho, Yara; Amelung, Christoph; Amidei, Dante; Amor Dos Santos, Susana Patricia; Amorim, Antonio; Amoroso, Simone; Amundsen, Glenn; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, Gabriel; Anders, John Kenneth; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Angelidakis, Stylianos; Angelozzi, Ivan; Anger, Philipp; Angerami, Aaron; Anghinolfi, Francis; Anisenkov, Alexey; Anjos, Nuno; Annovi, Alberto; Antel, Claire; Antonelli, Mario; Antonov, Alexey; Anulli, Fabio; Aoki, Masato; Aperio Bella, Ludovica; Arabidze, Giorgi; Arai, Yasuo; Araque, Juan Pedro; Arce, Ayana; Arduh, Francisco Anuar; Arguin, Jean-Francois; Argyropoulos, Spyridon; Arik, Metin; Armbruster, Aaron James; Armitage, Lewis James; Arnaez, Olivier; Arnold, Hannah; Arratia, Miguel; Arslan, Ozan; Artamonov, Andrei; Artoni, Giacomo; Artz, Sebastian; Asai, Shoji; Asbah, Nedaa; Ashkenazi, Adi; Åsman, Barbro; Asquith, Lily; Assamagan, Ketevi; Astalos, Robert; Atkinson, Markus; Atlay, Naim Bora; Augsten, Kamil; Avolio, Giuseppe; Axen, Bradley; Ayoub, Mohamad Kassem; Azuelos, Georges; Baak, Max; Baas, Alessandra; Baca, Matthew John; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Backhaus, Malte; Bagiacchi, Paolo; Bagnaia, Paolo; Bai, Yu; Baines, John; Baker, Oliver Keith; Baldin, Evgenii; Balek, Petr; Balestri, Thomas; Balli, Fabrice; Balunas, William Keaton; Banas, Elzbieta; Banerjee, Swagato; Bannoura, Arwa A E; Barak, Liron; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Barillari, Teresa; Barisits, Martin-Stefan; Barklow, Timothy; Barlow, Nick; Barnes, Sarah Louise; Barnett, Bruce; Barnett, Michael; Barnovska-Blenessy, Zuzana; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barranco Navarro, Laura; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Bartoldus, Rainer; Barton, Adam Edward; Bartos, Pavol; Basalaev, Artem; Bassalat, Ahmed; Bates, Richard; Batista, Santiago Juan; Batley, Richard; Battaglia, Marco; Bauce, Matteo; Bauer, Florian; Bawa, Harinder Singh; Beacham, James; Beattie, Michael David; Beau, Tristan; Beauchemin, Pierre-Hugues; Bechtle, Philip; Beck, Hans~Peter; Becker, Kathrin; Becker, Maurice; Beckingham, Matthew; Becot, Cyril; Beddall, Andrew; Beddall, Ayda; Bednyakov, Vadim; Bedognetti, Matteo; Bee, Christopher; Beemster, Lars; Beermann, Thomas; Begel, Michael; Behr, Janna Katharina; Belanger-Champagne, Camille; Bell, 
Andrew Stuart; Bella, Gideon; Bellagamba, Lorenzo; Bellerive, Alain; Bellomo, Massimiliano; Belotskiy, Konstantin; Beltramello, Olga; Belyaev, Nikita; Benary, Odette; Benchekroun, Driss; Bender, Michael; Bendtz, Katarina; Benekos, Nektarios; Benhammou, Yan; Benhar Noccioli, Eleonora; Benitez, Jose; Benjamin, Douglas; Bensinger, James; Bentvelsen, Stan; Beresford, Lydia; Beretta, Matteo; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Beringer, Jürg; Berlendis, Simon; Bernard, Nathan Rogers; Bernius, Catrin; Bernlochner, Florian Urs; Berry, Tracey; Berta, Peter; Bertella, Claudia; Bertoli, Gabriele; Bertolucci, Federico; Bertram, Iain Alexander; Bertsche, Carolyn; Bertsche, David; Besjes, Geert-Jan; Bessidskaia Bylund, Olga; Bessner, Martin Florian; Besson, Nathalie; Betancourt, Christopher; Bethani, Agni; Bethke, Siegfried; Bevan, Adrian John; Bianchi, Riccardo-Maria; Bianchini, Louis; Bianco, Michele; Biebel, Otmar; Biedermann, Dustin; Bielski, Rafal; Biesuz, Nicolo Vladi; Biglietti, Michela; Bilbao De Mendizabal, Javier; Billoud, Thomas Remy Victor; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Biondi, Silvia; Bisanz, Tobias; Bjergaard, David Martin; Black, Curtis; Black, James; Black, Kevin; Blackburn, Daniel; Blair, Robert; Blanchard, Jean-Baptiste; Blazek, Tomas; Bloch, Ingo; Blocker, Craig; Blue, Andrew; Blum, Walter; Blumenschein, Ulrike; Blunier, Sylvain; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Bock, Christopher; Boehler, Michael; Boerner, Daniela; Bogaerts, Joannes Andreas; Bogavac, Danijela; Bogdanchikov, Alexander; Bohm, Christian; Boisvert, Veronique; Bokan, Petar; Bold, Tomasz; Boldyrev, Alexey; Bomben, Marco; Bona, Marcella; Boonekamp, Maarten; Borisov, Anatoly; Borissov, Guennadi; Bortfeldt, Jonathan; Bortoletto, Daniela; Bortolotto, Valerio; Bos, Kors; Boscherini, Davide; Bosman, Martine; Bossio Sola, Jonathan David; Boudreau, Joseph; Bouffard, Julian; Bouhova-Thacker, Evelina Vassileva; Boumediene, Djamel Eddine; Bourdarios, Claire; Boutle, Sarah Kate; Boveia, Antonio; Boyd, James; Boyko, Igor; Bracinik, Juraj; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Breaden Madden, William Dmitri; Brendlinger, Kurt; Brennan, Amelia Jean; Brenner, Lydia; Brenner, Richard; Bressler, Shikma; Bristow, Timothy Michael; Britton, Dave; Britzger, Daniel; Brochu, Frederic; Brock, Ian; Brock, Raymond; Brooijmans, Gustaaf; Brooks, Timothy; Brooks, William; Brosamer, Jacquelyn; Brost, Elizabeth; Broughton, James; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Bruni, Alessia; Bruni, Graziano; Bruni, Lucrezia Stella; Brunt, Benjamin; Bruschi, Marco; Bruscino, Nello; Bryant, Patrick; Bryngemark, Lene; Buanes, Trygve; Buat, Quentin; Buchholz, Peter; Buckley, Andrew; Budagov, Ioulian; Buehrer, Felix; Bugge, Magnar Kopangen; Bulekov, Oleg; Bullock, Daniel; Burckhart, Helfried; Burdin, Sergey; Burgard, Carsten Daniel; Burghgrave, Blake; Burka, Klaudia; Burke, Stephen; Burmeister, Ingo; Burr, Jonathan Thomas Peter; Busato, Emmanuel; Büscher, Daniel; Büscher, Volker; Bussey, Peter; Butler, John; Buttar, Craig; Butterworth, Jonathan; Butti, Pierfrancesco; Buttinger, William; Buzatu, Adrian; Buzykaev, Aleksey; Cabras, Grazia; Cabrera Urbán, Susana; Caforio, Davide; Cairo, Valentina; Cakir, Orhan; Calace, Noemi; Calafiura, Paolo; Calandri, Alessandro; Calderini, Giovanni; Calfayan, Philippe; Callea, Giuseppe; Caloba, Luiz; Calvente Lopez, Sergio; Calvet, David; 
Calvet, Samuel; Calvet, Thomas Philippe; Camacho Toro, Reina; Camarda, Stefano; Camarri, Paolo; Cameron, David; Caminal Armadans, Roger; Camincher, Clement; Campana, Simone; Campanelli, Mario; Camplani, Alessandra; Campoverde, Angel; Canale, Vincenzo; Canepa, Anadi; Cano Bret, Marc; Cantero, Josu; Cao, Tingting; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capua, Marcella; Carbone, Ryne Michael; Cardarelli, Roberto; Cardillo, Fabio; Carli, Ina; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Sascha; Carquin, Edson; Carrillo-Montoya, German D; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Casolino, Mirkoantonio; Casper, David William; Castaneda-Miranda, Elizabeth; Castelijn, Remco; Castelli, Angelantonio; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Caudron, Julien; Cavaliere, Viviana; Cavallaro, Emanuele; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerda Alberich, Leonor; Cerio, Benjamin; Santiago Cerqueira, Augusto; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cerv, Matevz; Cervelli, Alberto; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, Dhiman; Chan, Stephen Kam-wah; Chan, Yat Long; Chang, Philip; Chapman, John Derek; Charlton, Dave; Chatterjee, Avishek; Chau, Chav Chhiv; Chavez Barajas, Carlos Alberto; Che, Siinn; Cheatham, Susan; Chegwidden, Andrew; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Chunhui; Chen, Hucheng; Chen, Karen; Chen, Shenjian; Chen, Shion; Chen, Xin; Chen, Ye; Cheng, Hok Chuen; Cheng, Huajie; Cheng, Yangyang; Cheplakov, Alexander; Cheremushkina, Evgenia; Cherkaoui El Moursli, Rajaa; Chernyatin, Valeriy; Cheu, Elliott; Chevalier, Laurent; Chiarella, Vitaliano; Chiarelli, Giorgio; Chiodini, Gabriele; Chisholm, Andrew; Chitan, Adrian; Chizhov, Mihail; Choi, Kyungeon; Chomont, Arthur Rene; Chouridou, Sofia; Chow, Bonnie Kar Bo; Christodoulou, Valentinos; Chromek-Burckhart, Doris; Chudoba, Jiri; Chuinard, Annabelle Julia; Chwastowski, Janusz; Chytka, Ladislav; Ciapetti, Guido; Ciftci, Abbas Kenan; Cinca, Diane; Cindro, Vladimir; Cioara, Irina Antonela; Ciocca, Claudia; Ciocio, Alessandra; Cirotto, Francesco; Citron, Zvi Hirsh; Citterio, Mauro; Ciubancan, Mihai; Clark, Allan G; Clark, Brian Lee; Clark, Michael; Clark, Philip James; Clarke, Robert; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Colasurdo, Luca; Cole, Brian; Colijn, Auke-Pieter; Collot, Johann; Colombo, Tommaso; Compostella, Gabriele; Conde Muiño, Patricia; Coniavitis, Elias; Connell, Simon Henry; Connelly, Ian; Consorti, Valerio; Constantinescu, Serban; Conti, Geraldine; Conventi, Francesco; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cormier, Kyle James Read; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Corso-Radu, Alina; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Cottin, Giovanna; Cowan, Glen; Cox, Brian; Cranmer, Kyle; Crawley, Samuel Joseph; Cree, Graham; Crépé-Renaudin, Sabine; Crescioli, Francesco; Cribbs, Wayne Allen; Crispin Ortuzar, Mireia; Cristinziani, Markus; Croft, Vince; Crosetti, Giovanni; Cueto, Ana; Cuhadar Donszelmann, Tulay; Cummings, Jane; Curatolo, Maria; Cúth, Jakub; Czirr, Hendrik; Czodrowski, Patrick; D'amen, Gabriele; D'Auria, Saverio; D'Onofrio, Monica; Da Cunha Sargedas De Sousa, Mario Jose; Da Via, Cinzia; Dabrowski, Wladyslaw; Dado, Tomas; Dai, Tiesheng; Dale, Orjan; 
Dallaire, Frederick; Dallapiccola, Carlo; Dam, Mogens; Dandoy, Jeffrey Rogers; Dang, Nguyen Phuong; Daniells, Andrew Christopher; Dann, Nicholas Stuart; Danninger, Matthias; Dano Hoffmann, Maria; Dao, Valerio; Darbo, Giovanni; Darmora, Smita; Dassoulas, James; Dattagupta, Aparajita; Davey, Will; David, Claire; Davidek, Tomas; Davies, Merlin; Davison, Peter; Dawe, Edmund; Dawson, Ian; De, Kaushik; de Asmundis, Riccardo; De Benedetti, Abraham; De Castro, Stefano; De Cecco, Sandro; De Groot, Nicolo; de Jong, Paul; De la Torre, Hector; De Lorenzi, Francesco; De Maria, Antonio; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; Dearnaley, William James; Debbe, Ramiro; Debenedetti, Chiara; Dedovich, Dmitri; Dehghanian, Nooshin; Deigaard, Ingrid; Del Gaudio, Michela; Del Peso, Jose; Del Prete, Tarcisio; Delgove, David; Deliot, Frederic; Delitzsch, Chris Malena; Dell'Acqua, Andrea; Dell'Asta, Lidia; Dell'Orso, Mauro; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delsart, Pierre-Antoine; DeMarco, David; Demers, Sarah; Demichev, Mikhail; Demilly, Aurelien; Denisov, Sergey; Denysiuk, Denys; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Deterre, Cecile; Dette, Karola; Deviveiros, Pier-Olivier; Dewhurst, Alastair; Dhaliwal, Saminder; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Clemente, William Kennedy; Di Donato, Camilla; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Di Valentino, David; Diaconu, Cristinel; Diamond, Miriam; Dias, Flavia; Diaz, Marco Aurelio; Diehl, Edward; Dietrich, Janet; Díez Cornell, Sergio; Dimitrievska, Aleksandra; Dingfelder, Jochen; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; Djuvsland, Julia Isabell; Barros do Vale, Maria Aline; Dobos, Daniel; Dobre, Monica; Doglioni, Caterina; Dolejsi, Jiri; Dolezal, Zdenek; Donadelli, Marisilvia; Donati, Simone; Dondero, Paolo; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dova, Maria-Teresa; Doyle, Tony; Drechsler, Eric; Dris, Manolis; Du, Yanyan; Duarte-Campderros, Jorge; Duchovni, Ehud; Duckeck, Guenter; Ducu, Otilia Anamaria; Duda, Dominik; Dudarev, Alexey; Dudder, Andreas Christian; Duffield, Emily Marie; Duflot, Laurent; Dührssen, Michael; Dumancic, Mirta; Dunford, Monica; Duran Yildiz, Hatice; Düren, Michael; Durglishvili, Archil; Duschinger, Dirk; Dutta, Baishali; Dyndal, Mateusz; Eckardt, Christoph; Ecker, Katharina Maria; Edgar, Ryan Christopher; Edwards, Nicholas Charles; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Ekelof, Tord; El Kacimi, Mohamed; Ellajosyula, Venugopal; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Elliot, Alison; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Enari, Yuji; Endner, Oliver Chris; Ennis, Joseph Stanford; Erdmann, Johannes; Ereditato, Antonio; Ernis, Gunar; Ernst, Jesse; Ernst, Michael; Errede, Steven; Ertel, Eugen; Escalier, Marc; Esch, Hendrik; Escobar, Carlos; Esposito, Bellisario; Etienvre, Anne-Isabelle; Etzion, Erez; Evans, Hal; Ezhilov, Alexey; Ezzi, Mohammed; Fabbri, Federica; Fabbri, Laura; Facini, Gabriel; Fakhrutdinov, Rinat; Falciano, Speranza; Falla, Rebecca Jane; Faltova, Jana; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farina, Christian; Farina, Edoardo Maria; Farooque, Trisha; Farrell, Steven; Farrington, Sinead; Farthouat, Philippe; Fassi, Farida; Fassnacht, Patrick; Fassouliotis, 
Dimitrios; Faucci Giannelli, Michele; Favareto, Andrea; Fawcett, William James; Fayard, Louis; Fedin, Oleg; Fedorko, Wojciech; Feigl, Simon; Feligioni, Lorenzo; Feng, Cunfeng; Feng, Eric; Feng, Haolu; Fenyuk, Alexander; Feremenga, Last; Fernandez Martinez, Patricia; Fernandez Perez, Sonia; Ferrando, James; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferreira de Lima, Danilo Enoque; Ferrer, Antonio; Ferrere, Didier; Ferretti, Claudio; Ferretto Parodi, Andrea; Fiedler, Frank; Filipčič, Andrej; Filipuzzi, Marco; Filthaut, Frank; Fincke-Keeler, Margret; Finelli, Kevin Daniel; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Adam; Fischer, Cora; Fischer, Julia; Fisher, Wade Cameron; Flaschel, Nils; Fleck, Ivor; Fleischmann, Philipp; Fletcher, Gareth Thomas; Fletcher, Rob Roy MacGregor; Flick, Tobias; Flores Castillo, Luis; Flowerdew, Michael; Forcolin, Giulio Tiziano; Formica, Andrea; Forti, Alessandra; Foster, Andrew Geoffrey; Fournier, Daniel; Fox, Harald; Fracchia, Silvia; Francavilla, Paolo; Franchini, Matteo; Francis, David; Franconi, Laura; Franklin, Melissa; Frate, Meghan; Fraternali, Marco; Freeborn, David; Fressard-Batraneanu, Silvia; Friedrich, Felix; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fusayasu, Takahiro; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gabrielli, Alessandro; Gabrielli, Andrea; Gach, Grzegorz; Gadatsch, Stefan; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Louis Guillaume; Gagnon, Pauline; Galea, Cristina; Galhardo, Bruno; Gallas, Elizabeth; Gallop, Bruce; Gallus, Petr; Galster, Gorm Aske Gram Krohn; Gan, KK; Gao, Jun; Gao, Yanyan; Gao, Yongsheng; Garay Walls, Francisca; García, Carmen; García Navarro, José Enrique; Garcia-Sciveres, Maurice; Gardner, Robert; Garelli, Nicoletta; Garonne, Vincent; Gascon Bravo, Alberto; Gasnikova, Ksenia; Gatti, Claudio; Gaudiello, Andrea; Gaudio, Gabriella; Gauthier, Lea; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Gecse, Zoltan; Gee, Norman; Geich-Gimbel, Christoph; Geisen, Marc; Geisler, Manuel Patrice; Gellerstedt, Karl; Gemme, Claudia; Genest, Marie-Hélène; Geng, Cong; Gentile, Simonetta; Gentsos, Christos; George, Simon; Gerbaudo, Davide; Gershon, Avi; Ghasemi, Sara; Ghneimat, Mazuza; Giacobbe, Benedetto; Giagu, Stefano; Giannetti, Paola; Gibbard, Bruce; Gibson, Stephen; Gignac, Matthew; Gilchriese, Murdock; Gillam, Thomas; Gillberg, Dag; Gilles, Geoffrey; Gingrich, Douglas; Giokaris, Nikos; Giordani, MarioPaolo; Giorgi, Filippo Maria; Giorgi, Francesco Michelangelo; Giraud, Pierre-Francois; Giromini, Paolo; Giugni, Danilo; Giuli, Francesco; Giuliani, Claudia; Giulini, Maddalena; Gjelsten, Børge Kile; Gkaitatzis, Stamatios; Gkialas, Ioannis; Gkougkousis, Evangelos Leonidas; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glaysher, Paul; Glazov, Alexandre; Goblirsch-Kolb, Maximilian; Godlewski, Jan; Goldfarb, Steven; Golling, Tobias; Golubkov, Dmitry; Gomes, Agostinho; Gonçalo, Ricardo; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Giulia; Gonella, Laura; Gongadze, Alexi; González de la Hoz, Santiago; Gonzalez Parra, Garoe; Gonzalez-Sevilla, Sergio; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Goshaw, Alfred; Gössling, Claus; Gostkin, Mikhail Ivanovitch; Goudet, Christophe Raymond; Goujdami, Driss; Goussiou, Anna; Govender, Nicolin; Gozani, Eitan; Graber, Lars; Grabowska-Bold, Iwona; Gradin, Per Olov Joakim; Grafström, Per; Gramling, Johanna; 
Gramstad, Eirik; Grancagnolo, Sergio; Gratchev, Vadim; Gravila, Paul Mircea; Gray, Heather; Graziani, Enrico; Greenwood, Zeno Dixon; Grefe, Christian; Gregersen, Kristian; Gregor, Ingrid-Maria; Grenier, Philippe; Grevtsov, Kirill; Griffiths, Justin; Grillo, Alexander; Grimm, Kathryn; Grinstein, Sebastian; Gris, Philippe Luc Yves; Grivaz, Jean-Francois; Groh, Sabrina; Grohs, Johannes Philipp; Gross, Eilam; Grosse-Knetter, Joern; Grossi, Giulio Cornelio; Grout, Zara Jane; Guan, Liang; Guan, Wen; Guenther, Jaroslav; Guescini, Francesco; Guest, Daniel; Gueta, Orel; Guido, Elisa; Guillemin, Thibault; Guindon, Stefan; Gul, Umar; Gumpert, Christian; Guo, Jun; Guo, Yicheng; Gupta, Ruchi; Gupta, Shaun; Gustavino, Giuliano; Gutierrez, Phillip; Gutierrez Ortiz, Nicolas Gilberto; Gutschow, Christian; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haber, Carl; Hadavand, Haleh Khani; Haddad, Nacim; Hadef, Asma; Hageböck, Stephan; Hagihara, Mutsuto; Hajduk, Zbigniew; Hakobyan, Hrachya; Haleem, Mahsana; Haley, Joseph; Halladjian, Garabed; Hallewell, Gregory David; Hamacher, Klaus; Hamal, Petr; Hamano, Kenji; Hamilton, Andrew; Hamity, Guillermo Nicolas; Hamnett, Phillip George; Han, Liang; Hanagaki, Kazunori; Hanawa, Keita; Hance, Michael; Haney, Bijan; Hanke, Paul; Hanna, Remie; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Maike Christina; Hansen, Peter Henrik; Hara, Kazuhiko; Hard, Andrew; Harenberg, Torsten; Hariri, Faten; Harkusha, Siarhei; Harrington, Robert; Harrison, Paul Fraser; Hartjes, Fred; Hartmann, Nikolai Marcel; Hasegawa, Makoto; Hasegawa, Yoji; Hasib, A; Hassani, Samira; Haug, Sigve; Hauser, Reiner; Hauswald, Lorenz; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hayakawa, Daiki; Hayden, Daniel; Hays, Chris; Hays, Jonathan Michael; Hayward, Helen; Haywood, Stephen; Head, Simon; Heck, Tobias; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heim, Timon; Heinemann, Beate; Heinrich, Jochen Jens; Heinrich, Lukas; Heinz, Christian; Hejbal, Jiri; Helary, Louis; Hellman, Sten; Helsens, Clement; Henderson, James; Henderson, Robert; Heng, Yang; Henkelmann, Steffen; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Herbert, Geoffrey Henry; Herde, Hannah; Herget, Verena; Hernández Jiménez, Yesenia; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hesketh, Gavin Grant; Hessey, Nigel; Hetherly, Jeffrey Wayne; Hickling, Robert; Higón-Rodriguez, Emilio; Hill, Ewan; Hill, John; Hiller, Karl Heinz; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hinman, Rachel Reisner; Hirose, Minoru; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoenig, Friedrich; Hohn, David; Holmes, Tova Ray; Homann, Michael; Honda, Takuya; Hong, Tae Min; Hooberman, Benjamin Henry; Hopkins, Walter; Horii, Yasuyuki; Horton, Arthur James; Hostachy, Jean-Yves; Hou, Suen; Hoummada, Abdeslam; Howarth, James; Hoya, Joaquin; Hrabovsky, Miroslav; Hristova, Ivana; Hrivnac, Julius; Hryn'ova, Tetiana; Hrynevich, Aliaksei; Hsu, Catherine; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Hu, Qipeng; Hu, Shuyang; Huang, Yanping; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Huhtinen, Mika; Huo, Peng; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Ideal, Emma; Idrissi, Zineb; Iengo, Paolo; Igonkina, Olga; Iizawa, Tomoya; Ikegami, Yoichi; Ikeno, Masahiro; Ilchenko, Yuriy; Iliadis, Dimitrios; Ilic, Nikolina; 
Ince, Tayfun; Introzzi, Gianluca; Ioannou, Pavlos; Iodice, Mauro; Iordanidou, Kalliopi; Ippolito, Valerio; Ishijima, Naoki; Ishino, Masaya; Ishitsuka, Masaki; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; Ito, Fumiaki; Iturbe Ponce, Julia Mariana; Iuppa, Roberto; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jabbar, Samina; Jackson, Brett; Jackson, Paul; Jain, Vivek; Jakobi, Katharina Bianca; Jakobs, Karl; Jakobsen, Sune; Jakoubek, Tomas; Jamin, David Olivier; Jana, Dilip; Jansky, Roland; Janssen, Jens; Janus, Michel; Jarlskog, Göran; Javadov, Namig; Javůrek, Tomáš; Jeanneau, Fabien; Jeanty, Laura; Jeng, Geng-yuan; Jennens, David; Jenni, Peter; Jeske, Carl; Jézéquel, Stéphane; Ji, Haoshuang; Jia, Jiangyong; Jiang, Hai; Jiang, Yi; Jiggins, Stephen; Jimenez Pena, Javier; Jin, Shan; Jinaru, Adam; Jinnouchi, Osamu; Jivan, Harshna; Johansson, Per; Johns, Kenneth; Johnson, William Joseph; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Sarah; Jones, Tim; Jongmanns, Jan; Jorge, Pedro; Jovicevic, Jelena; Ju, Xiangyang; Juste Rozas, Aurelio; Köhler, Markus Konrad; Kaczmarska, Anna; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kahn, Sebastien Jonathan; Kaji, Toshiaki; Kajomovitz, Enrique; Kalderon, Charles William; Kaluza, Adam; Kama, Sami; Kamenshchikov, Andrey; Kanaya, Naoko; Kaneti, Steven; Kanjir, Luka; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kaplan, Laser Seymour; Kapliy, Anton; Kar, Deepak; Karakostas, Konstantinos; Karamaoun, Andrew; Karastathis, Nikolaos; Kareem, Mohammad Jawad; Karentzos, Efstathios; Karnevskiy, Mikhail; Karpov, Sergey; Karpova, Zoya; Karthik, Krishnaiyengar; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kasahara, Kota; Kashif, Lashkar; Kass, Richard; Kastanas, Alex; Kataoka, Yousuke; Kato, Chikuma; Katre, Akshay; Katzy, Judith; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kazanin, Vassili; Keeler, Richard; Kehoe, Robert; Keller, John; Kempster, Jacob Julian; Kentaro, Kawade; Keoshkerian, Houry; Kepka, Oldrich; Kerševan, Borut Paul; Kersten, Susanne; Keyes, Robert; Khader, Mazin; Khalil-zada, Farkhad; Khanov, Alexander; Kharlamov, Alexey; Kharlamova, Tatyana; Khoo, Teng Jian; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kido, Shogo; Kilby, Callum; Kim, Hee Yeun; Kim, Shinhong; Kim, Young-Kee; Kimura, Naoki; Kind, Oliver Maria; King, Barry; King, Matthew; Kirk, Julie; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kiss, Florian; Kiuchi, Kenji; Kivernyk, Oleh; Kladiva, Eduard; Klein, Matthew Henry; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klimek, Pawel; Klimentov, Alexei; Klingenberg, Reiner; Klinger, Joel Alexander; Klioutchnikova, Tatiana; Kluge, Eike-Erik; Kluit, Peter; Kluth, Stefan; Knapik, Joanna; Kneringer, Emmerich; Knoops, Edith; Knue, Andrea; Kobayashi, Aine; Kobayashi, Dai; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kodys, Peter; Koehler, Nicolas Maximilian; Koffas, Thomas; Koffeman, Els; Koi, Tatsumi; Kolanoski, Hermann; Kolb, Mathis; Koletsou, Iro; Komar, Aston; Komori, Yuto; Kondo, Takahiko; Kondrashova, Nataliia; Köneke, Karsten; König, Adriaan; Kono, Takanori; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kopeliansky, Revital; Koperny, Stefan; Köpke, Lutz; Kopp, Anna Katharina; Korcyl, Krzysztof; Kordas, Kostantinos; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Kortner, Oliver; Kortner, Sandra; Kosek, Tomas; Kostyukhin, Vadim; Kotwal, Ashutosh; Kourkoumeli-Charalampidi, Athina; Kourkoumelis, Christine; Kouskoura, Vasiliki; Kowalewska, Anna Bozena; 
Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozakai, Chihiro; Kozanecki, Witold; Kozhin, Anatoly; Kramarenko, Viktor; Kramberger, Gregor; Krasnopevtsev, Dimitriy; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kravchenko, Anton; Kretz, Moritz; Kretzschmar, Jan; Kreutzfeldt, Kristof; Krieger, Peter; Krizka, Karol; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Krumnack, Nils; Kruse, Mark; Kruskal, Michael; Kubota, Takashi; Kucuk, Hilal; Kuday, Sinan; Kuechler, Jan Thomas; Kuehn, Susanne; Kugel, Andreas; Kuger, Fabian; Kuhl, Andrew; Kuhl, Thorsten; Kukhtin, Victor; Kukla, Romain; Kulchitsky, Yuri; Kuleshov, Sergey; Kuna, Marine; Kunigo, Takuto; Kupco, Alexander; Kurashige, Hisaya; Kurochkin, Yurii; Kus, Vlastimil; Kuwertz, Emma Sian; Kuze, Masahiro; Kvita, Jiri; Kwan, Tony; Kyriazopoulos, Dimitrios; La Rosa, Alessandro; La Rosa Navarro, Jose Luis; La Rotonda, Laura; Lacasta, Carlos; Lacava, Francesco; Lacey, James; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Remi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Lammers, Sabine; Lampl, Walter; Lançon, Eric; Landgraf, Ulrich; Landon, Murrough; Lanfermann, Marie Christine; Lang, Valerie Susanne; Lange, J örn Christian; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Lanza, Agostino; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Lasagni Manghi, Federico; Lassnig, Mario; Laurelli, Paolo; Lavrijsen, Wim; Law, Alexander; Laycock, Paul; Lazovich, Tomo; Lazzaroni, Massimo; Le, Brian; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Quilleuc, Eloi; LeBlanc, Matthew Edgar; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Claire Alexandra; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Benoit; Lefebvre, Guillaume; Lefebvre, Michel; Legger, Federica; Leggett, Charles; Lehan, Allan; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leight, William Axel; Leisos, Antonios; Leister, Andrew Gerard; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Lemmer, Boris; Leney, Katharine; Lenz, Tatjana; Lenzi, Bruno; Leone, Robert; Leone, Sandra; Leonidopoulos, Christos; Leontsinis, Stefanos; Lerner, Giuseppe; Leroy, Claude; Lesage, Arthur; Lester, Christopher; Levchenko, Mikhail; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Levy, Mark; Lewis, Dave; Leyko, Agnieszka; Leyton, Michael; Li, Bing; Li, Changqiao; Li, Haifeng; Li, Ho Ling; Li, Lei; Li, Liang; Li, Qi; Li, Shu; Li, Xingguo; Li, Yichen; Liang, Zhijun; Liberti, Barbara; Liblong, Aaron; Lichard, Peter; Lie, Ki; Liebal, Jessica; Liebig, Wolfgang; Limosani, Antonio; Lin, Simon; Lin, Tai-Hua; Lindquist, Brian Edward; Lionti, Anthony Eric; Lipeles, Elliot; Lipniacka, Anna; Lisovyi, Mykhailo; Liss, Tony; Lister, Alison; Litke, Alan; Liu, Bo; Liu, Dong; Liu, Hao; Liu, Hongbin; Liu, Jian; Liu, Jianbei; Liu, Kun; Liu, Lulu; Liu, Miaoyuan; Liu, Minghui; Liu, Yanlin; Liu, Yanwen; Livan, Michele; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lo Sterzo, Francesco; Lobodzinska, Ewelina Maria; Loch, Peter; Lockman, William; Loebinger, Fred; Loevschall-Jensen, Ask Emil; Loew, Kevin Michael; Loginov, Andrey; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Long, Brian Alexander; Long, Jonathan David; Long, Robin Eamonn; Longo, Luigi; Looper, Kristina Anne; López, Jorge Andrés; Lopez Mateos, David; Lopez Paredes, Brais; Lopez Paz, Ivan; Lopez Solis, Alvaro; Lorenz, Jeanette; Lorenzo Martinez, Narei; Losada, Marta; Lösel, Philipp Jonathan; Lou, 
XinChou; Lounis, Abdenour; Love, Jeremy; Love, Peter; Lu, Haonan; Lu, Nan; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Luedtke, Christian; Luehring, Frederick; Lukas, Wolfgang; Luminari, Lamberto; Lundberg, Olof; Lund-Jensen, Bengt; Luzi, Pierre Marc; Lynn, David; Lysak, Roman; Lytken, Else; Lyubushkin, Vladimir; Ma, Hong; Ma, Lian Liang; Ma, Yanhui; Maccarrone, Giovanni; Macchiolo, Anna; Macdonald, Calum Michael; Maček, Boštjan; Machado Miguens, Joana; Madaffari, Daniele; Madar, Romain; Maddocks, Harvey Jonathan; Mader, Wolfgang; Madsen, Alexander; Maeda, Junpei; Maeland, Steffen; Maeno, Tadashi; Maevskiy, Artem; Magradze, Erekle; Mahlstedt, Joern; Maiani, Camilla; Maidantchik, Carmen; Maier, Andreas Alexander; Maier, Thomas; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Malaescu, Bogdan; Malecki, Pawel; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Malone, Caitlin; Malone, Claire; Maltezos, Stavros; Malyukov, Sergei; Mamuzic, Judita; Mancini, Giada; Mandelli, Luciano; Mandić, Igor; Maneira, José; Manhaes de Andrade Filho, Luciano; Manjarres Ramos, Joany; Mann, Alexander; Manousos, Athanasios; Mansoulie, Bruno; Mansour, Jason Dhia; Mantifel, Rodger; Mantoani, Matteo; Manzoni, Stefano; Mapelli, Livio; Marceca, Gino; March, Luis; Marchiori, Giovanni; Marcisovsky, Michal; Marjanovic, Marija; Marley, Daniel; Marroquim, Fernando; Marsden, Stephen Philip; Marshall, Zach; Marti-Garcia, Salvador; Martin, Brian Thomas; Martin, Tim; Martin, Victoria Jane; Martin dit Latour, Bertrand; Martinez, Mario; Martinez Outschoorn, Verena; Martin-Haugh, Stewart; Martoiu, Victor Sorin; Martyniuk, Alex; Marx, Marilyn; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massa, Lorenzo; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Mättig, Peter; Mattmann, Johannes; Maurer, Julien; Maxfield, Stephen; Maximov, Dmitriy; Mazini, Rachid; Mazza, Simone Michele; Mc Fadden, Neil Christopher; Mc Goldrick, Garrin; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; McClymont, Laurie; McDonald, Emily; Mcfayden, Josh; Mchedlidze, Gvantsa; McMahon, Steve; McPherson, Robert; Medinnis, Michael; Meehan, Samuel; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meineck, Christian; Meirose, Bernhard; Melini, Davide; Mellado Garcia, Bruce Rafael; Melo, Matej; Meloni, Federico; Mengarelli, Alberto; Menke, Sven; Meoni, Evelin; Mergelmeyer, Sebastian; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Carsten; Meyer, Christopher; Meyer, Jean-Pierre; Meyer, Jochen; Meyer Zu Theenhausen, Hanno; Miano, Fabrizio; Middleton, Robin; Miglioranzi, Silvia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Milesi, Marco; Milic, Adriana; Miller, David; Mills, Corrinne; Milov, Alexander; Milstead, David; Minaenko, Andrey; Minami, Yuto; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Minegishi, Yuji; Ming, Yao; Mir, Lluisa-Maria; Mistry, Khilesh; Mitani, Takashi; Mitrevski, Jovan; Mitsou, Vasiliki A; Miucci, Antonio; Miyagawa, Paul; Mjörnmark, Jan-Ulf; Mlynarikova, Michaela; Moa, Torbjoern; Mochizuki, Kazuya; Mohapatra, Soumya; Molander, Simon; Moles-Valls, Regina; Monden, Ryutaro; Mondragon, Matthew Craig; Mönig, Klaus; Monk, James; Monnier, Emmanuel; Montalbano, Alyssa; Montejo Berlingen, Javier; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Morange, Nicolas; Moreno, 
Deywis; Moreno Llácer, María; Morettini, Paolo; Morgenstern, Stefanie; Mori, Daniel; Mori, Tatsuya; Morii, Masahiro; Morinaga, Masahiro; Morisbak, Vanja; Moritz, Sebastian; Morley, Anthony Keith; Mornacchi, Giuseppe; Morris, John; Mortensen, Simon Stark; Morvaj, Ljiljana; Mosidze, Maia; Moss, Josh; Motohashi, Kazuki; Mount, Richard; Mountricha, Eleni; Moyse, Edward; Muanza, Steve; Mudd, Richard; Mueller, Felix; Mueller, James; Mueller, Ralph Soeren Peter; Mueller, Thibaut; Muenstermann, Daniel; Mullen, Paul; Mullier, Geoffrey; Munoz Sanchez, Francisca Javiela; Murillo Quijada, Javier Alberto; Murray, Bill; Musheghyan, Haykuhi; Muškinja, Miha; Myagkov, Alexey; Myska, Miroslav; Nachman, Benjamin Philip; Nackenhorst, Olaf; Nagai, Koichi; Nagai, Ryo; Nagano, Kunihiro; Nagasaka, Yasushi; Nagata, Kazuki; Nagel, Martin; Nagy, Elemer; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakamura, Tomoaki; Nakano, Itsuo; Namasivayam, Harisankar; Naranjo Garcia, Roger Felipe; Narayan, Rohin; Narrias Villar, Daniel Isaac; Naryshkin, Iouri; Naumann, Thomas; Navarro, Gabriela; Nayyar, Ruchika; Neal, Homer; Nechaeva, Polina; Neep, Thomas James; Negri, Andrea; Negrini, Matteo; Nektarijevic, Snezana; Nellist, Clara; Nelson, Andrew; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neumann, Manuel; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen, Duong Hai; Nguyen Manh, Tuan; Nickerson, Richard; Nicolaidou, Rosy; Nielsen, Jason; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolic-Audit, Irena; Nikolopoulos, Konstantinos; Nilsen, Jon Kerr; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nisius, Richard; Nobe, Takuya; Nomachi, Masaharu; Nomidis, Ioannis; Nooney, Tamsin; Norberg, Scarlet; Nordberg, Markus; Norjoharuddeen, Nurfikri; Novgorodova, Olga; Nowak, Sebastian; Nozaki, Mitsuaki; Nozka, Libor; Ntekas, Konstantinos; Nurse, Emily; Nuti, Francesco; O'grady, Fionnbarr; O'Neil, Dugan; O'Rourke, Abigail Alexandra; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Obermann, Theresa; Ocariz, Jose; Ochi, Atsuhiko; Ochoa, Ines; Ochoa-Ricoux, Juan Pedro; Oda, Susumu; Odaka, Shigeru; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohman, Henrik; Oide, Hideyuki; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olariu, Albert; Oleiro Seabra, Luis Filipe; Olivares Pino, Sebastian Andres; Oliveira Damazio, Denis; Olszewski, Andrzej; Olszowska, Jolanta; Onofre, António; Onogi, Kouta; Onyisi, Peter; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlando, Nicola; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Otero y Garzon, Gustavo; Otono, Hidetoshi; Ouchrif, Mohamed; Ould-Saada, Farid; Ouraou, Ahmimed; Oussoren, Koen Pieter; Ouyang, Qun; Owen, Mark; Owen, Rhys Edward; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pachal, Katherine; Pacheco Pages, Andres; Pacheco Rodriguez, Laura; Padilla Aranda, Cristobal; Pagáčová, Martina; Pagan Griso, Simone; Paganini, Michela; Paige, Frank; Pais, Preema; Pajchel, Katarina; Palacino, Gabriel; Palazzo, Serena; Palestini, Sandro; Palka, Marek; Pallin, Dominique; Panagiotopoulou, Evgenia; Pandini, Carlo Enrico; Panduro Vazquez, William; Pani, Priscilla; Panitkin, Sergey; Pantea, Dan; Paolozzi, Lorenzo; Papadopoulou, Theodora; Papageorgiou, Konstantinos; Paramonov, Alexander; Paredes Hernandez, Daniela; Parker, Adam Jackson; Parker, Michael Andrew; Parker, Kerry Ann; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pascuzzi, Vincent; Pasqualucci, Enrico; Passaggio, Stefano; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Pater, 
Joleen; Pauly, Thilo; Pearce, James; Pearson, Benjamin; Pedersen, Lars Egholm; Pedersen, Maiken; Pedraza Lopez, Sebastian; Pedro, Rute; Peleganchuk, Sergey; Penc, Ondrej; Peng, Cong; Peng, Haiping; Penwell, John; Peralva, Bernardo; Perego, Marta Maria; Perepelitsa, Dennis; Perez Codina, Estel; Perini, Laura; Pernegger, Heinz; Perrella, Sabrina; Peschke, Richard; Peshekhonov, Vladimir; Peters, Krisztian; Peters, Yvonne; Petersen, Brian; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petroff, Pierre; Petrolo, Emilio; Petrov, Mariyan; Petrucci, Fabrizio; Pettersson, Nora Emilia; Peyaud, Alan; Pezoa, Raquel; Phillips, Peter William; Piacquadio, Giacinto; Pianori, Elisabetta; Picazio, Attilio; Piccaro, Elisa; Piccinini, Maurizio; Pickering, Mark Andrew; Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pin, Arnaud Willy J; Pinamonti, Michele; Pinfold, James; Pingel, Almut; Pires, Sylvestre; Pirumov, Hayk; Pitt, Michael; Plazak, Lukas; Pleier, Marc-Andre; Pleskot, Vojtech; Plotnikova, Elena; Plucinski, Pawel; Pluth, Daniel; Poettgen, Ruth; Poggioli, Luc; Pohl, David-leon; Polesello, Giacomo; Poley, Anne-luise; Policicchio, Antonio; Polifka, Richard; Polini, Alessandro; Pollard, Christopher Samuel; Polychronakos, Venetios; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Poppleton, Alan; Pospisil, Stanislav; Potamianos, Karolos; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Pozdnyakov, Valery; Pozo Astigarraga, Mikel Eukeni; Pralavorio, Pascal; Pranko, Aliaksandr; Prell, Soeren; Price, Darren; Price, Lawrence; Primavera, Margherita; Prince, Sebastien; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Przybycien, Mariusz; Puddu, Daniele; Purohit, Milind; Puzo, Patrick; Qian, Jianming; Qin, Gang; Qin, Yang; Quadt, Arnulf; Quayle, William; Queitsch-Maitland, Michaela; Quilty, Donnchadha; Raddum, Silje; Radeka, Veljko; Radescu, Voica; Radhakrishnan, Sooraj Krishnan; Radloff, Peter; Rados, Pere; Ragusa, Francesco; Rahal, Ghita; Raine, John Andrew; Rajagopalan, Srinivasan; Rammensee, Michael; Rangel-Smith, Camila; Ratti, Maria Giulia; Rauscher, Felix; Rave, Stefan; Ravenscroft, Thomas; Ravinovich, Ilia; Raymond, Michel; Read, Alexander Lincoln; Readioff, Nathan Peter; Reale, Marilea; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reed, Robert; Reeves, Kendall; Rehnisch, Laura; Reichert, Joseph; Reiss, Andreas; Rembser, Christoph; Ren, Huan; Rescigno, Marco; Resconi, Silvia; Rezanova, Olga; Reznicek, Pavel; Rezvani, Reyhaneh; Richter, Robert; Richter, Stefan; Richter-Was, Elzbieta; Ricken, Oliver; Ridel, Melissa; Rieck, Patrick; Riegel, Christian Johann; Rieger, Julia; Rifki, Othmane; Rijssenbeek, Michael; Rimoldi, Adele; Rimoldi, Marco; Rinaldi, Lorenzo; Ristić, Branislav; Ritsch, Elmar; Riu, Imma; Rizatdinova, Flera; Rizvi, Eram; Rizzi, Chiara; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robson, Aidan; Roda, Chiara; Rodina, Yulia; Rodriguez Perez, Andrea; Rodriguez Rodriguez, Daniel; Roe, Shaun; Rogan, Christopher Sean; Røhne, Ole; Romaniouk, Anatoli; Romano, Marino; Romano Saez, Silvestre Marino; Romero Adam, Elena; Rompotis, Nikolaos; Ronzani, Manfredi; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Peyton; Rosien, Nils-Arne; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rosten, Jonatan; Rosten, Rachel; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Rozanov, 
Alexandre; Rozen, Yoram; Ruan, Xifeng; Rubbo, Francesco; Rudolph, Matthew Scott; Rühr, Frederik; Ruiz-Martinez, Aranzazu; Rurikova, Zuzana; Rusakovich, Nikolai; Ruschke, Alexander; Russell, Heather; Rutherfoord, John; Ruthmann, Nils; Ryabov, Yury; Rybar, Martin; Rybkin, Grigori; Ryu, Soo; Ryzhov, Andrey; Rzehorz, Gerhard Ferdinand; Saavedra, Aldo; Sabato, Gabriele; Sacerdoti, Sabrina; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Saha, Puja; Sahinsoy, Merve; Saimpert, Matthias; Saito, Tomoyuki; Sakamoto, Hiroshi; Sakurai, Yuki; Salamanna, Giuseppe; Salamon, Andrea; Salazar Loyola, Javier Esteban; Salek, David; Sales De Bruin, Pedro Henrique; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sammel, Dirk; Sampsonidis, Dimitrios; Sanchez, Arturo; Sánchez, Javier; Sanchez Martinez, Victoria; Sandaker, Heidi; Sandbach, Ruth Laura; Sander, Heinz Georg; Sandhoff, Marisa; Sandoval, Carlos; Sankey, Dave; Sannino, Mario; Sansoni, Andrea; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Santoyo Castillo, Itzebelt; Sapp, Kevin; Sapronov, Andrey; Saraiva, João; Sarrazin, Bjorn; Sasaki, Osamu; Sato, Koji; Sauvan, Emmanuel; Savage, Graham; Savard, Pierre; Savic, Natascha; Sawyer, Craig; Sawyer, Lee; Saxon, James; Sbarra, Carla; Sbrizzi, Antonio; Scanlon, Tim; Scannicchio, Diana; Scarcella, Mark; Scarfone, Valerio; Schaarschmidt, Jana; Schacht, Peter; Schachtner, Balthasar Maria; Schaefer, Douglas; Schaefer, Leigh; Schaefer, Ralph; Schaeffer, Jan; Schaepe, Steffen; Schaetzel, Sebastian; Schäfer, Uli; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R Dean; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Schiavi, Carlo; Schier, Sheena; Schillo, Christian; Schioppa, Marco; Schlenker, Stefan; Schmidt-Sommerfeld, Korbinian Ralf; Schmieden, Kristof; Schmitt, Christian; Schmitt, Stefan; Schmitz, Simon; Schneider, Basil; Schnoor, Ulrike; Schoeffel, Laurent; Schoening, Andre; Schoenrock, Bradley Daniel; Schopf, Elisabeth; Schott, Matthias; Schovancova, Jaroslava; Schramm, Steven; Schreyer, Manuel; Schuh, Natascha; Schulte, Alexandra; Schultens, Martin Johannes; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwartzman, Ariel; Schwarz, Thomas Andrew; Schweiger, Hansdieter; Schwemling, Philippe; Schwienhorst, Reinhard; Schwindling, Jerome; Schwindt, Thomas; Sciolla, Gabriella; Scuri, Fabrizio; Scutti, Federico; Searcy, Jacob; Seema, Pienpen; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Sekhon, Karishma; Sekula, Stephen; Seliverstov, Dmitry; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Sessa, Marco; Seuster, Rolf; Severini, Horst; Sfiligoj, Tina; Sforza, Federico; Sfyrla, Anna; Shabalina, Elizaveta; Shaikh, Nabila Wahab; Shan, Lianyou; Shang, Ruo-yu; Shank, James; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Shaw, Savanna Marie; Shcherbakova, Anna; Shehu, Ciwake Yusufu; Sherwood, Peter; Shi, Liaoshan; Shimizu, Shima; Shimmin, Chase Owen; Shimojima, Makoto; Shirabe, Shohei; Shiyakova, Mariya; Shmeleva, Alevtina; Shoaleh Saadi, Diane; Shochet, Mel; Shojaii, Seyed Ruhollah; Shope, David Richard; Shrestha, Suyog; Shulga, Evgeny; Shupe, Michael; Sicho, Petr; Sickles, Anne Marie; Sidebo, Per Edvin; Sidiropoulou, Ourania; Sidorov, Dmitri; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silva, José; Silverstein, Samuel; Simak, Vladislav; Simic, Ljiljana; Simion, 
Stefan; Simioni, Eduard; Simmons, Brinick; Simon, Dorian; Simon, Manuel; Sinervo, Pekka; Sinev, Nikolai; Sioli, Maximiliano; Siragusa, Giovanni; Sivoklokov, Serguei; Sjölin, Jörgen; Skinner, Malcolm Bruce; Skottowe, Hugh Philip; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Slawinska, Magdalena; Sliwa, Krzysztof; Slovak, Radim; Smakhtin, Vladimir; Smart, Ben; Smestad, Lillian; Smiesko, Juraj; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smith, Matthew; Smith, Russell; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snyder, Ian Michael; Snyder, Scott; Sobie, Randall; Socher, Felix; Soffer, Abner; Soh, Dart-yin; Sokhrannyi, Grygorii; Solans Sanchez, Carlos; Solar, Michael; Soldatov, Evgeny; Soldevila, Urmila; Solodkov, Alexander; Soloshenko, Alexei; Solovyanov, Oleg; Solovyev, Victor; Sommer, Philip; Son, Hyungsuk; Song, Hong Ye; Sood, Alexander; Sopczak, Andre; Sopko, Vit; Sorin, Veronica; Sosa, David; Sotiropoulou, Calliope Louisa; Soualah, Rachik; Soukharev, Andrey; South, David; Sowden, Benjamin; Spagnolo, Stefania; Spalla, Margherita; Spangenberg, Martin; Spanò, Francesco; Sperlich, Dennis; Spettel, Fabian; Spighi, Roberto; Spigo, Giancarlo; Spiller, Laurence Anthony; Spousta, Martin; St Denis, Richard Dante; Stabile, Alberto; Stamen, Rainer; Stamm, Soren; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stanescu-Bellu, Madalina; Stanitzki, Marcel Michael; Stapnes, Steinar; Starchenko, Evgeny; Stark, Giordon; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Stärz, Steffen; Staszewski, Rafal; Steinberg, Peter; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stewart, Graeme; Stillings, Jan Andre; Stockton, Mark; Stoebe, Michael; Stoicea, Gabriel; Stolte, Philipp; Stonjek, Stefan; Stradling, Alden; Straessner, Arno; Stramaglia, Maria Elena; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Strubig, Antonia; Stucci, Stefania Antonia; Stugu, Bjarne; Styles, Nicholas Adam; Su, Dong; Su, Jun; Suchek, Stanislav; Sugaya, Yorihito; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Siyuan; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Susinno, Giancarlo; Sutton, Mark; Suzuki, Shota; Svatos, Michal; Swiatlowski, Maximilian; Sykora, Ivan; Sykora, Tomas; Ta, Duc; Taccini, Cecilia; Tackmann, Kerstin; Taenzer, Joe; Taffard, Anyes; Tafirout, Reda; Taiblum, Nimrod; Takai, Helio; Takashima, Ryuichi; Takeshita, Tohru; Takubo, Yosuke; Talby, Mossadek; Talyshev, Alexey; Tan, Kong Guan; Tanaka, Junichi; Tanaka, Masahiro; Tanaka, Reisaburo; Tanaka, Shuji; Tanioka, Ryo; Tannenwald, Benjamin Bordy; Tapia Araya, Sebastian; Tapprogge, Stefan; Tarem, Shlomit; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tashiro, Takuya; Tassi, Enrico; Tavares Delgado, Ademar; Tayalati, Yahya; Taylor, Aaron; Taylor, Geoffrey; Taylor, Pierre Thor Elliot; Taylor, Wendy; Teischinger, Florian Alfred; Teixeira-Dias, Pedro; Temming, Kim Katrin; Temple, Darren; Ten Kate, Herman; Teng, Ping-Kun; Teoh, Jia Jian; Tepel, Fabian-Phillipp; Terada, Susumu; Terashi, Koji; Terron, Juan; Terzo, Stefano; Testa, Marianna; Teuscher, Richard; Theveneaux-Pelzer, Timothée; Thomas, Juergen; Thomas-Wilsker, Joshuha; Thompson, Emily; Thompson, Paul; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Thomson, Mark; Tibbetts, Mark James; Ticse Torres, Royer Edson; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, Sergey; Tipton, Paul; Tisserant, Sylvain; Todome, 
Kazuki; Todorov, Theodore; Todorova-Nova, Sharka; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tolley, Emma; Tomlinson, Lee; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tong, Baojia(Tony); Tornambe, Peter; Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Trefzger, Thomas; Tricoli, Alessandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Tripiana, Martin; Trischuk, William; Trocmé, Benjamin; Trofymov, Artur; Troncon, Clara; Trottier-McDonald, Michel; Trovatelli, Monica; Truong, Loan; Trzebinski, Maciej; Trzupek, Adam; Tseng, Jeffrey; Tsiareshka, Pavel; Tsipolitis, Georgios; Tsirintanis, Nikolaos; Tsiskaridze, Shota; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsui, Ka Ming; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsuno, Soshi; Tsybychev, Dmitri; Tu, Yanjun; Tudorache, Alexandra; Tudorache, Valentina; Tuna, Alexander Naip; Tupputi, Salvatore; Turchikhin, Semen; Turecek, Daniel; Turgeman, Daniel; Turra, Ruggero; Tuts, Michael; Tyndel, Mike; Ucchielli, Giulia; Ueda, Ikuo; Ughetto, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Ungaro, Francesca; Unno, Yoshinobu; Unverdorben, Christopher; Urban, Jozef; Urquijo, Phillip; Urrejola, Pedro; Usai, Giulio; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Valderanis, Chrysostomos; Valdes Santurio, Eduardo; Valencic, Nika; Valentinetti, Sara; Valero, Alberto; Valery, Loic; Valkar, Stefan; Valls Ferrer, Juan Antonio; Van Den Wollenberg, Wouter; Van Der Deijl, Pieter; van der Graaf, Harry; van Eldik, Niels; van Gemmeren, Peter; Van Nieuwkoop, Jacobus; van Vulpen, Ivo; van Woerden, Marius Cornelis; Vanadia, Marco; Vandelli, Wainer; Vanguri, Rami; Vaniachine, Alexandre; Vankov, Peter; Vardanyan, Gagik; Vari, Riccardo; Varnes, Erich; Varol, Tulin; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vasquez, Jared Gregory; Vasquez, Gerardo; Vazeille, Francois; Vazquez Schroeder, Tamara; Veatch, Jason; Veeraraghavan, Venkatesh; Veloce, Laurelle Maria; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Venturi, Manuela; Venturi, Nicola; Venturini, Alessio; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Viazlo, Oleksandr; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Viel, Simon; Vigani, Luigi; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinogradov, Vladimir; Vittori, Camilla; Vivarelli, Iacopo; Vlachos, Sotirios; Vlasak, Michal; Vogel, Marcelo; Vokac, Petr; Volpi, Guido; Volpi, Matteo; von der Schmitt, Hans; von Toerne, Eckhard; Vorobel, Vit; Vorobev, Konstantin; Vos, Marcel; Voss, Rudiger; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vuillermet, Raphael; Vukotic, Ilija; Vykydal, Zdenek; Wagner, Peter; Wagner, Wolfgang; Wahlberg, Hernan; Wahrmund, Sebastian; Wakabayashi, Jun; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wallangen, Veronica; Wang, Chao; Wang, Chao; Wang, Fuquan; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Kuhan; Wang, Rui; Wang, Song-Ming; Wang, Tan; Wang, Tingting; Wang, Wenxiao; Wang, Xiaoxiao; Wanotayaroj, Chaowaroj; Warburton, Andreas; Ward, Patricia; Wardrope, David Robert; Washbrook, Andrew; Watkins, Peter; Watson, Alan; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Ben; Webb, Samuel; Weber, Michele; Weber, Stefan Wolf; Weber, Stephen; Webster, Jordan S; Weidberg, Anthony; Weinert, Benjamin; Weingarten, Jens; Weiser, Christian; Weits, 
Hartger; Wells, Phillippa; Wenaus, Torre; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Michael David; Werner, Per; Wessels, Martin; Wetter, Jeffrey; Whalen, Kathleen; Whallon, Nikola Lazar; Wharton, Andrew Mark; White, Andrew; White, Martin; White, Ryan; Whiteson, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wiglesworth, Craig; Wiik-Fuchs, Liv Antje Mari; Wildauer, Andreas; Wilk, Fabian; Wilkens, Henric George; Williams, Hugh; Williams, Sarah; Willis, Christopher; Willocq, Stephane; Wilson, John; Wingerter-Seez, Isabelle; Winklmeier, Frank; Winston, Oliver James; Winter, Benedict Tobias; Wittgen, Matthias; Wittkowski, Josephine; Wolf, Tim Michael Heinz; Wolter, Marcin Wladyslaw; Wolters, Helmut; Worm, Steven D; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wozniak, Krzysztof; Wu, Mengqing; Wu, Miles; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wyatt, Terry Richard; Wynne, Benjamin; Xella, Stefania; Xu, Da; Xu, Lailin; Yabsley, Bruce; Yacoob, Sahal; Yamaguchi, Daiki; Yamaguchi, Yohei; Yamamoto, Akira; Yamamoto, Shimpei; Yamanaka, Takashi; Yamauchi, Katsuya; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Hongtao; Yang, Yi; Yang, Zongchang; Yao, Weiming; Yap, Yee Chinn; Yasu, Yoshiji; Yatsenko, Elena; Yau Wong, Kaven Henry; Ye, Jingbo; Ye, Shuwei; Yeletskikh, Ivan; Yen, Andy L; Yildirim, Eda; Yorita, Kohei; Yoshida, Rikutaro; Yoshihara, Keisuke; Young, Charles; Young, Christopher John; Youssef, Saul; Yu, David Ren-Hwa; Yu, Jaehoon; Yu, Jiaming; Yu, Jie; Yuan, Li; Yuen, Stephanie P; Yusuff, Imran; Zabinski, Bartlomiej; Zaidan, Remi; Zaitsev, Alexander; Zakharchuk, Nataliia; Zalieckas, Justas; Zaman, Aungshuman; Zambito, Stefano; Zanello, Lucia; Zanzi, Daniele; Zeitnitz, Christian; Zeman, Martin; Zemla, Andrzej; Zeng, Jian Cong; Zeng, Qi; Zengel, Keith; Zenin, Oleg; Ženiš, Tibor; Zerwas, Dirk; Zhang, Dongliang; Zhang, Fangzhou; Zhang, Guangyi; Zhang, Huijun; Zhang, Jinlong; Zhang, Lei; Zhang, Rui; Zhang, Ruiqi; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Xiandong; Zhao, Yongke; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Chen; Zhou, Lei; Zhou, Li; Zhou, Mingliang; Zhou, Ning; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zhukov, Konstantin; Zibell, Andre; Zieminska, Daria; Zimine, Nikolai; Zimmermann, Christoph; Zimmermann, Stephanie; Zinonos, Zinonas; Zinser, Markus; Ziolkowski, Michael; Živković, Lidija; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zwalinski, Lukasz

    2016-10-10

    A measurement of the total $pp$ cross section at the LHC at $\sqrt{s}=8$ TeV is presented. An integrated luminosity of $500$ $\mu$b$^{-1}$ was accumulated in a special run with high-$\beta^{\star}$ beam optics to measure the differential elastic cross section as a function of the Mandelstam momentum transfer variable $t$. The measurement is performed with the ALFA sub-detector of ATLAS. Using a fit to the differential elastic cross section in the $-t$ range from $0.014$ GeV$^2$ to $0.1$ GeV$^2$ to extrapolate $t\rightarrow 0$, the total cross section, $\sigma_{\mathrm{tot}}(pp\rightarrow X)$, is measured via the optical theorem to be: $\sigma_{\mathrm{tot}}(pp\rightarrow X) = 96.07 \pm 0.18 \, (\mathrm{stat.}) \pm 0.85 \, (\mathrm{exp.}) \pm 0.31 \, (\mathrm{extr.}) \ \mathrm{mb},$ where the first error is statistical, the second accounts for all experimental systematic uncertainties and the last is related to uncertainties in the extrapolation $t\rightarrow 0$. In addition, the slope of the exponen...
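    For orientation, the quantities involved in such an extraction can be written schematically as below: an exponential fit to the differential elastic cross section at small $|t|$, and the optical-theorem relation that converts the extrapolated $t=0$ intercept into the total cross section. This is the generic form of the method, not the paper's exact fit model.

```latex
% Exponential parameterization of the differential elastic cross section at small |t|:
\frac{d\sigma_{\mathrm{el}}}{dt} \;=\; \left.\frac{d\sigma_{\mathrm{el}}}{dt}\right|_{t=0} e^{-B|t|}

% Optical theorem: the extrapolated t = 0 intercept fixes the total cross section,
% with rho the ratio of the real to the imaginary part of the forward elastic amplitude.
\sigma_{\mathrm{tot}}^{2} \;=\; \frac{16\pi(\hbar c)^{2}}{1+\rho^{2}}
  \left.\frac{d\sigma_{\mathrm{el}}}{dt}\right|_{t=0}
```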

  5. MO-E-BRE-01: Determination, Minimization and Communication of Uncertainties in Radiation Therapy

    International Nuclear Information System (INIS)

    Van Dyk, J; Palta, J; Bortfeld, T; Mijnheer, B

    2014-01-01

    Medical Physicists have a general understanding of uncertainties in the radiation treatment process, both with respect to dosimetry and geometry. However, there is a desire to be more quantitative about uncertainty estimation. A recent International Atomic Energy Agency (IAEA) report (about to be published) recommends that we should be as “accurate as reasonably achievable, technical and biological factors being taken into account”. Thus, a single recommendation as a goal for accuracy in radiation therapy is an oversimplification. That report also suggests that individual clinics should determine their own level of uncertainties for their specific treatment protocols. The question is “how do we implement this in clinical practice”? AAPM Monograph 35 (2011 AAPM Summer School) addressed many specific aspects of uncertainties in each of the steps of a course of radiation treatment. The intent of this symposium is: (1) to review uncertainty considerations in the entire radiation treatment process including uncertainty determination for each step and uncertainty propagation for the total process, (2) to consider aspects of robust optimization which optimizes treatment plans while protecting them against uncertainties, and (3) to describe various methods of displaying uncertainties and communicating uncertainties to the relevant professionals. While the theoretical and research aspects will also be described, the emphasis will be on the practical considerations for the medical physicist in clinical practice. Learning Objectives: To review uncertainty determination in the overall radiation treatment process. To consider uncertainty modeling and uncertainty propagation. To highlight the basic ideas and clinical potential of robust optimization procedures to generate optimal treatment plans that are not severely affected by uncertainties. To describe methods of uncertainty communication and display.
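    As a simple illustration of propagating per-step uncertainties to a total for the whole treatment chain, the sketch below combines standard uncertainties in quadrature under the assumption of independent steps; the step names and values are hypothetical, not clinic-specific recommendations.

```python
# Minimal sketch: combine per-step standard uncertainties (in %) in quadrature
# to estimate the total uncertainty of a treatment chain, assuming the steps
# are independent. Step names and values are hypothetical.
import math

step_uncertainties = {
    "beam calibration": 1.5,
    "dose calculation": 2.0,
    "patient setup": 2.5,
    "organ motion": 1.0,
}

total = math.sqrt(sum(u**2 for u in step_uncertainties.values()))
for step, u in step_uncertainties.items():
    print(f"{step:>18s}: {u:.1f}%")
print(f"{'total (quadrature)':>18s}: {total:.1f}%")
```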

  6. MO-E-BRE-01: Determination, Minimization and Communication of Uncertainties in Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Van Dyk, J; Palta, J; Bortfeld, T; Mijnheer, B [Western University, London, ON (Canada)]

    2014-06-15

    Medical Physicists have a general understanding of uncertainties in the radiation treatment process, both with respect to dosimetry and geometry. However, there is a desire to be more quantitative about uncertainty estimation. A recent International Atomic Energy Agency (IAEA) report (about to be published) recommends that we should be as “accurate as reasonably achievable, technical and biological factors being taken into account”. Thus, a single recommendation as a goal for accuracy in radiation therapy is an oversimplification. That report also suggests that individual clinics should determine their own level of uncertainties for their specific treatment protocols. The question is “how do we implement this in clinical practice”? AAPM Monograph 35 (2011 AAPM Summer School) addressed many specific aspects of uncertainties in each of the steps of a course of radiation treatment. The intent of this symposium is: (1) to review uncertainty considerations in the entire radiation treatment process including uncertainty determination for each step and uncertainty propagation for the total process, (2) to consider aspects of robust optimization which optimizes treatment plans while protecting them against uncertainties, and (3) to describe various methods of displaying uncertainties and communicating uncertainties to the relevant professionals. While the theoretical and research aspects will also be described, the emphasis will be on the practical considerations for the medical physicist in clinical practice. Learning Objectives: To review uncertainty determination in the overall radiation treatment process. To consider uncertainty modeling and uncertainty propagation. To highlight the basic ideas and clinical potential of robust optimization procedures to generate optimal treatment plans that are not severely affected by uncertainties. To describe methods of uncertainty communication and display.

  7. Track benchmarking method for uncertainty quantification of particle tracking velocimetry interpolations

    International Nuclear Information System (INIS)

    Schneiders, Jan F G; Sciacchitano, Andrea

    2017-01-01

    The track benchmarking method (TBM) is proposed for uncertainty quantification of particle tracking velocimetry (PTV) data mapped onto a regular grid. The method provides statistical uncertainty for a velocity time-series and can in addition be used to obtain instantaneous uncertainty at increased computational cost. Interpolation techniques are typically used to map velocity data from scattered PTV (e.g. tomographic PTV and Shake-the-Box) measurements onto a Cartesian grid. Recent examples of these techniques are the FlowFit and VIC+ methods. The TBM approach estimates the random uncertainty in dense velocity fields by performing the velocity interpolation using a subset of typically 95% of the particle tracks and by considering the remaining tracks as an independent benchmarking reference. In addition, a bias introduced by the interpolation technique is also identified. The numerical assessment shows that the approach is accurate when particle trajectories are measured over an extended number of snapshots, typically on the order of 10. When only short particle tracks are available, the TBM estimate overestimates the measurement error. A correction to TBM is proposed and assessed to compensate for this overestimation. The experimental assessment considers the case of a jet flow, processed both by tomographic PIV and by VIC+. The uncertainty obtained by TBM provides a quantitative evaluation of the measurement accuracy and precision and highlights the regions of high error by means of bias and random uncertainty maps. In this way, it is possible to quantify the uncertainty reduction achieved by advanced interpolation algorithms with respect to standard correlation-based tomographic PIV. The use of TBM for uncertainty quantification and comparison of different processing techniques is demonstrated. (paper)
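
    As a concrete illustration of the hold-out idea behind TBM, the following sketch withholds a small fraction of tracks, interpolates the remainder onto a grid, and evaluates bias and random error at the withheld track locations. It is a minimal Python illustration, not the authors' implementation; the interpolation routine, array shapes, and the 5% hold-out fraction are assumptions chosen for simplicity.

```python
import numpy as np
from scipy.interpolate import griddata

def tbm_uncertainty(track_pos, track_vel, grid_pts, holdout_frac=0.05, seed=0):
    """Hold-out estimate of interpolation error, in the spirit of TBM.

    track_pos : (N, 3) particle positions, track_vel : (N, 3) velocities.
    grid_pts  : (M, 3) flattened Cartesian grid points.
    A random ~5% of tracks is withheld as an independent benchmarking reference.
    """
    rng = np.random.default_rng(seed)
    test = rng.random(len(track_pos)) < holdout_frac   # withheld tracks
    fit = ~test                                        # tracks used for interpolation

    # Interpolate onto the grid using only the fitting subset of tracks
    vel_grid = np.stack([griddata(track_pos[fit], track_vel[fit, k], grid_pts,
                                  method='linear') for k in range(3)], axis=-1)

    # Evaluate the same interpolant at the withheld track locations
    vel_at_test = np.stack([griddata(track_pos[fit], track_vel[fit, k],
                                     track_pos[test], method='linear')
                            for k in range(3)], axis=-1)
    residual = vel_at_test - track_vel[test]

    bias = np.nanmean(residual, axis=0)       # systematic (bias) component
    random_unc = np.nanstd(residual, axis=0)  # random component
    return vel_grid, bias, random_unc
```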

  8. Effect of uncertainty parameters on graphene sheets Young's modulus prediction

    International Nuclear Information System (INIS)

    Sahlaoui, Habib; Sidhom Habib; Guedri, Mohamed

    2013-01-01

    Software based on the molecular structural mechanics approach (MSMA) and using the finite element method (FEM) has been developed to predict the Young's modulus of graphene sheets. The results have been compared with results available in the literature, and good agreement is obtained when the same values of the uncertainty parameters are used. The sensitivity of the models to their uncertainty parameters has been investigated using a stochastic finite element method (SFEM). The different values of the uncertainty parameters used, such as the molecular mechanics force field constants k_r and k_θ, the thickness (t) of a graphene sheet and the length (L_B) of a carbon-carbon bond, have been collected from the literature. Strong sensitivities of 91% to the thickness and of 21% to the stretching force constant (k_r) have been shown. These results explain the large spread among predicted Young's modulus values of graphene sheets and their substantial disagreement with experimental results.

  9. Uncertainty of climate change impact on groundwater reserves - Application to a chalk aquifer

    Science.gov (United States)

    Goderniaux, Pascal; Brouyère, Serge; Wildemeersch, Samuel; Therrien, René; Dassargues, Alain

    2015-09-01

    Recent studies have evaluated the impact of climate change on groundwater resources for different geographical and climatic contexts. However, most studies have either not estimated the uncertainty around projected impacts or have limited the analysis to the uncertainty related to climate models. In this study, the uncertainties around impact projections from several sources (climate models, natural variability of the weather, hydrological model calibration) are calculated and compared for the Geer catchment (465 km²) in Belgium. We use a surface-subsurface integrated model implemented using the finite element code HydroGeoSphere, coupled with climate change scenarios (2010-2085) and the UCODE_2005 inverse model, to assess the uncertainty related to the calibration of the hydrological model. This integrated model provides a more realistic representation of the water exchanges between surface and subsurface domains and further constrains the calibration through the use of both surface and subsurface observations. Sensitivity and uncertainty analyses were performed on predictions. The linear uncertainty analysis is approximate for this nonlinear system, but it provides some measure of uncertainty for computationally demanding models. Results show that, for the Geer catchment, the most important uncertainty is related to calibration of the hydrological model. The total uncertainty associated with the prediction of groundwater levels remains large. By the end of the century, however, the uncertainty becomes smaller than the predicted decline in groundwater levels.

  10. Uncertainty in measurements performed in the ionization chamber practical exercise; Incerteza nas medidas realizadas pela pratica da camara de ionizacao

    Energy Technology Data Exchange (ETDEWEB)

    Sales, Emer; Pinto, Fernando Sandi; Sousa Junior, Samuel Facanha; Freitas, Dayslon Luiz Gaudaret; Andrade, Lucio das Chagas de, E-mail: fernandopintofis@gmail.com [Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil)

    2016-07-01

    The calculation of uncertainty is a mathematical tool widely used in the analysis of experimental data, ensuring that the values obtained from measuring equipment are as accurate and as close to the true value as possible. This paper presents a theoretical review of uncertainty and applies it to determine the uncertainty associated with the repeatability and reproducibility of the measurement process used to determine the dose of a radioactive source in an ionization chamber practical exercise, held at the Professional Master's programme in Medical Physics of the State University of Rio de Janeiro. (author)
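
    As a minimal illustration of the repeatability part of such an exercise, the Type A standard uncertainty of the mean of repeated ionization-chamber readings can be evaluated as follows. This is a generic GUM-style sketch, not the authors' procedure, and the readings below are hypothetical.

```python
import numpy as np

# Repeated ionization-chamber readings (hypothetical values, arbitrary units)
readings = np.array([102.1, 101.8, 102.4, 102.0, 101.9, 102.3])

mean = readings.mean()
s = readings.std(ddof=1)           # sample standard deviation (repeatability)
u_a = s / np.sqrt(len(readings))   # Type A standard uncertainty of the mean

print(f"mean = {mean:.2f}, repeatability s = {s:.2f}, u_A = {u_a:.2f}")
```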

  11. Radial core expansion reactivity feedback in advanced LMRs: uncertainties and their effects on inherent safety

    International Nuclear Information System (INIS)

    Wigeland, R.A.; Moran, T.J.

    1988-01-01

    An analytical model for calculating radial core expansion, based on the thermal and elastic bowing of a single subassembly at the core periphery, is used to quantify the effect of uncertainties on this reactivity feedback mechanism. This model has been verified and validated with experimental and numerical results. The impact of these uncertainties on the safety margins in unprotected transients is investigated with SASSYS/SAS4A, which includes this model for calculating the reactivity feedback from radial core expansion. The magnitudes of these uncertainties are not sufficient to preclude the use of radial core expansion reactivity feedback in transient analysis

  12. Information Seeking in Uncertainty Management Theory: Exposure to Information About Medical Uncertainty and Information-Processing Orientation as Predictors of Uncertainty Management Success.

    Science.gov (United States)

    Rains, Stephen A; Tukachinsky, Riva

    2015-01-01

    Uncertainty management theory outlines the processes through which individuals cope with health-related uncertainty. Information seeking has been frequently documented as an important uncertainty management strategy. The reported study investigates exposure to specific types of medical information during a search, and one's information-processing orientation as predictors of successful uncertainty management (i.e., a reduction in the discrepancy between the level of uncertainty one feels and the level one desires). A lab study was conducted in which participants were primed to feel more or less certain about skin cancer and then were allowed to search the World Wide Web for skin cancer information. Participants' search behavior was recorded and content analyzed. The results indicate that exposure to two health communication constructs that pervade medical forms of uncertainty (i.e., severity and susceptibility) and information-processing orientation predicted uncertainty management success.

  13. Conditional uncertainty principle

    Science.gov (United States)

    Gour, Gilad; Grudka, Andrzej; Horodecki, Michał; Kłobus, Waldemar; Łodyga, Justyna; Narasimhachar, Varun

    2018-04-01

    We develop a general operational framework that formalizes the concept of conditional uncertainty in a measure-independent fashion. Our formalism is built upon a mathematical relation which we call conditional majorization. We define conditional majorization and, for the case of classical memory, we provide its thorough characterization in terms of monotones, i.e., functions that preserve the partial order under conditional majorization. We demonstrate the application of this framework by deriving two types of memory-assisted uncertainty relations, (1) a monotone-based conditional uncertainty relation and (2) a universal measure-independent conditional uncertainty relation, both of which set a lower bound on the minimal uncertainty that Bob has about Alice's pair of incompatible measurements, conditioned on arbitrary measurement that Bob makes on his own system. We next compare the obtained relations with their existing entropic counterparts and find that they are at least independent.

  14. IAEA Coordinated Research Project on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bostelmann, F. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods. Uncertainty originates from errors in physical data, manufacturing uncertainties, modelling and computational algorithms. (The interested reader is referred to the large body of published SA and UA literature for a more complete overview of the various types of uncertainties, methodologies and results obtained). SA is helpful for ranking the various sources of uncertainty and error in the results of core analyses. SA and UA are required to address cost, safety, and licensing needs and should be applied to all aspects of reactor multi-physics simulation. SA and UA can guide experimental, modelling, and algorithm research and development. Current SA and UA rely either on derivative-based methods such as stochastic sampling methods or on generalized perturbation theory to obtain sensitivity coefficients. Neither approach addresses all needs. In order to benefit from recent advances in modelling and simulation and the availability of new covariance data (nuclear data uncertainties) extensive sensitivity and uncertainty studies are needed for quantification of the impact of different sources of uncertainties on the design and safety parameters of HTGRs. Only a parallel effort in advanced simulation and in nuclear data improvement will be able to provide designers with more robust and well validated calculation tools to meet design target accuracies. In February 2009, the Technical Working Group on Gas-Cooled Reactors (TWG-GCR) of the International Atomic Energy Agency (IAEA) recommended that the proposed Coordinated Research Program (CRP) on

  15. Uncertainty analysis of time-dependent nonlinear systems: theory and application to transient thermal hydraulics

    International Nuclear Information System (INIS)

    Barhen, J.; Bjerke, M.A.; Cacuci, D.G.; Mullins, C.B.; Wagschal, G.G.

    1982-01-01

    An advanced methodology for performing systematic uncertainty analysis of time-dependent nonlinear systems is presented. This methodology includes a capability for reducing uncertainties in system parameters and responses by using Bayesian inference techniques to consistently combine prior knowledge with additional experimental information. The determination of best estimates for the system parameters, for the responses, and for their respective covariances is treated as a time-dependent constrained minimization problem. Three alternative formalisms for solving this problem are developed. The two ''off-line'' formalisms, with and without ''foresight'' characteristics, require the generation of a complete sensitivity data base prior to performing the uncertainty analysis. The ''online'' formalism, in which uncertainty analysis is performed interactively with the system analysis code, is best suited for treatment of large-scale highly nonlinear time-dependent problems. This methodology is applied to the uncertainty analysis of a transient upflow of a high pressure water heat transfer experiment. For comparison, an uncertainty analysis using sensitivities computed by standard response surface techniques is also performed. The results of the analysis indicate the following. Major reduction of the discrepancies in the calculation/experiment ratios is achieved by using the new methodology. Incorporation of in-bundle measurements in the uncertainty analysis significantly reduces system uncertainties. Accuracy of sensitivities generated by response-surface techniques should be carefully assessed prior to using them as a basis for uncertainty analyses of transient reactor safety problems

  16. Experimental Object-Oriented Modelling

    DEFF Research Database (Denmark)

    Hansen, Klaus Marius

    through, e.g., technical prototyping and active user involvement. We introduce and examine “experimental object-oriented modelling” as the intersection of these practices. The contributions of this thesis are expected to be within three perspectives on models and modelling in experimental system...... development: Grounding We develop an empirically based conceptualization of modelling and use of models in system development projects characterized by a high degree of uncertainty in requirements and point to implications for tools and techniques for modelling in such a setting. Techniques We introduce......This thesis examines object-oriented modelling in experimental system development. Object-oriented modelling aims at representing concepts and phenomena of a problem domain in terms of classes and objects. Experimental system development seeks active experimentation in a system development project...

  17. A Nondominated Genetic Algorithm Procedure for Multiobjective Discrete Network Design under Demand Uncertainty

    Directory of Open Access Journals (Sweden)

    Bian Changzhi

    2015-01-01

    This paper addresses the multiobjective discrete network design problem under demand uncertainty. The OD travel demands are assumed to be random variables with a given probability distribution. The problem is formulated as a bilevel stochastic optimization model in which the decision maker’s objective is to minimize the construction cost, the expectation, and the standard deviation of total travel time simultaneously, while the user’s route choice is described using a user equilibrium model on the improved network under all scenarios of uncertain demand. The proposed model generates globally near-optimal Pareto solutions for network configurations based on Monte Carlo simulation and the nondominated sorting genetic algorithm II. Numerical experiments implemented on the Nguyen-Dupuis test network show that trade-offs among construction cost, the expectation, and the standard deviation of total travel time under uncertainty are evident. Investment in transportation facilities is an efficient way to improve network performance and reduce risk under demand uncertainty, but it shows a clear diminishing marginal effect.

  18. Uncertainty and Cognitive Control

    Directory of Open Access Journals (Sweden)

    Faisal eMushtaq

    2011-10-01

    A growing body of neuroimaging, behavioural, and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we reviewed evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.

  19. Uncertainty, probability and information-gaps

    International Nuclear Information System (INIS)

    Ben-Haim, Yakov

    2004-01-01

    This paper discusses two main ideas. First, we focus on info-gap uncertainty, as distinct from probability. Info-gap theory is especially suited for modelling and managing uncertainty in system models: we invest all our knowledge in formulating the best possible model; this leaves the modeller with very faulty and fragmentary information about the variation of reality around that optimal model. Second, we examine the interdependence between uncertainty modelling and decision-making. Good uncertainty modelling requires contact with the end-use, namely, with the decision-making application of the uncertainty model. The most important avenue of uncertainty-propagation is from initial data- and model-uncertainties into uncertainty in the decision-domain. Two questions arise. Is the decision robust to the initial uncertainties? Is the decision prone to opportune windfall success? We apply info-gap robustness and opportunity functions to the analysis of representation and propagation of uncertainty in several of the Sandia Challenge Problems

  20. Individual uncertainty and the uncertainty of science: The impact of perceived conflict and general self-efficacy on the perception of tentativeness and credibility of scientific information

    Directory of Open Access Journals (Sweden)

    Danny eFlemming

    2015-12-01

    We examined in two empirical studies how situational and personal aspects of uncertainty influence laypeople’s understanding of the uncertainty of scientific information, with focus on the detection of tentativeness and perception of scientific credibility. In the first study (N = 48), we investigated the impact of a perceived conflict due to contradicting information as a situational, text-inherent aspect of uncertainty. The aim of the second study (N = 61) was to explore the role of general self-efficacy as an intra-personal uncertainty factor. In Study 1, participants read one of two versions of an introductory text in a between-group design. This text provided them with an overview of the neurosurgical procedure of deep brain stimulation (DBS). The text expressed a positive attitude toward DBS in one experimental condition or focused on the negative aspects of this method in the other condition. Then participants in both conditions read the same text that dealt with a study about DBS as experimental treatment in a small sample of patients with major depression. Perceived conflict between the two texts was found to increase the perception of tentativeness and to decrease the perception of scientific credibility, indicating that text-inherent aspects have significant effects on critical appraisal. The results of Study 2 demonstrated that participants with higher general self-efficacy detected the tentativeness to a lesser degree and assumed a higher level of scientific credibility, indicating a more naïve understanding of scientific information. This appears to be contradictory to large parts of previous findings that showed positive effects of high self-efficacy on learning. Both studies showed that perceived tentativeness and perceived scientific credibility of medical information contradicted each other. We conclude that there is a need for supporting laypeople in understanding the uncertainty of scientific information and that

  1. Fundamental uncertainty limit of optical flow velocimetry according to Heisenberg's uncertainty principle.

    Science.gov (United States)

    Fischer, Andreas

    2016-11-01

    Optical flow velocity measurements are important for understanding the complex behavior of flows. Although a huge variety of methods exist, they are either based on a Doppler or a time-of-flight measurement principle. Doppler velocimetry evaluates the velocity-dependent frequency shift of light scattered at a moving particle, whereas time-of-flight velocimetry evaluates the traveled distance of a scattering particle per time interval. Regarding the aim of achieving a minimal measurement uncertainty, it is unclear whether one principle allows lower uncertainties to be achieved or whether both principles can achieve equal uncertainties. For this reason, the natural, fundamental uncertainty limit according to Heisenberg's uncertainty principle is derived for Doppler and time-of-flight measurement principles, respectively. The obtained limits of the velocity uncertainty are qualitatively identical showing, e.g., a direct proportionality to the absolute value of the velocity to the power of 3/2 and an indirect proportionality to the square root of the scattered light power. Hence, both measurement principles have identical potentials regarding the fundamental uncertainty limit due to the quantum mechanical behavior of photons. This fundamental limit can be attained (at least asymptotically) in reality either with Doppler or time-of-flight methods, because the respective Cramér-Rao bounds for dominating photon shot noise, which is modeled as white Poissonian noise, are identical with the conclusions from Heisenberg's uncertainty principle.

  2. Uncertainty Quantification of Turbulence Model Closure Coefficients for Transonic Wall-Bounded Flows

    Science.gov (United States)

    Schaefer, John; West, Thomas; Hosder, Serhat; Rumsey, Christopher; Carlson, Jan-Renee; Kleb, William

    2015-01-01

    The goal of this work was to quantify the uncertainty and sensitivity of commonly used turbulence models in Reynolds-Averaged Navier-Stokes codes due to uncertainty in the values of closure coefficients for transonic, wall-bounded flows and to rank the contribution of each coefficient to uncertainty in various output flow quantities of interest. Specifically, uncertainty quantification of turbulence model closure coefficients was performed for transonic flow over an axisymmetric bump at zero degrees angle of attack and the RAE 2822 transonic airfoil at a lift coefficient of 0.744. Three turbulence models were considered: the Spalart-Allmaras Model, the Wilcox (2006) k-ω Model, and the Menter Shear-Stress Transport Model. The FUN3D code developed by NASA Langley Research Center was used as the flow solver. The uncertainty quantification analysis employed stochastic expansions based on non-intrusive polynomial chaos as an efficient means of uncertainty propagation. Several integrated and point quantities are considered as uncertain outputs for both CFD problems. All closure coefficients were treated as epistemic uncertain variables represented with intervals. Sobol indices were used to rank the relative contribution of each closure coefficient to the total uncertainty in the output quantities of interest. This study identified a number of closure coefficients for each turbulence model for which more information will reduce the amount of uncertainty in the output significantly for transonic, wall-bounded flows.
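
    The following sketch shows one common way such a ranking can be computed: a pick-and-freeze (Saltelli-style) Monte Carlo estimator of first-order Sobol indices. It is a generic illustration with a toy model standing in for the CFD quantity of interest; it is not the polynomial-chaos machinery used in the study, and the model and sampler below are hypothetical.

```python
import numpy as np

def first_order_sobol(model, sampler, n=10000):
    """Pick-and-freeze (Saltelli-style) estimator of first-order Sobol indices."""
    A, B = sampler(n), sampler(n)     # two independent sample matrices
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(A.shape[1])
    for i in range(A.shape[1]):
        ABi = A.copy()
        ABi[:, i] = B[:, i]           # replace only the i-th coefficient
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Toy stand-in for a CFD quantity of interest (hypothetical closure coefficients)
rng = np.random.default_rng(0)
model = lambda X: X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 0] * X[:, 2]
sampler = lambda n: rng.uniform(-1.0, 1.0, size=(n, 3))
print(first_order_sobol(model, sampler))
```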

  3. Propagation of nuclear data uncertainties in fuel cycle calculations using Monte-Carlo technique

    International Nuclear Information System (INIS)

    Diez, C.J.; Cabellos, O.; Martinez, J.S.

    2011-01-01

    Nowadays, the knowledge of uncertainty propagation in depletion calculations is a critical issue because of the safety and economic performance of fuel cycles. Response magnitudes such as decay heat, radiotoxicity and isotopic inventory and their uncertainties should be known to handle spent fuel in present fuel cycles (e.g. high burnup fuel programme) and furthermore in new fuel cycle designs (e.g. fast breeder reactors and ADS). To deal with this task, there are different error propagation techniques, deterministic (adjoint/forward sensitivity analysis) and stochastic (Monte-Carlo technique), to evaluate the error in response magnitudes due to nuclear data uncertainties. In our previous works, cross-section uncertainties were propagated using a Monte-Carlo technique to calculate the uncertainty of response magnitudes such as decay heat and neutron emission. Also, the propagation of decay data, fission yield and cross-section uncertainties was performed, but isotopic composition was the only response magnitude calculated. Following the previous technique, the nuclear data uncertainties are taken into account and propagated to response magnitudes, decay heat and radiotoxicity. These uncertainties are assessed during cooling time. To evaluate this Monte-Carlo technique, two different applications are performed. First, a fission pulse decay heat calculation is carried out to check the Monte-Carlo technique, using decay data and fission yield uncertainties. The results are then compared with experimental data and a reference calculation (JEFF Report 20). Second, we assess the impact of basic nuclear data (activation cross-section, decay data and fission yields) uncertainties on relevant fuel cycle parameters (decay heat and radiotoxicity) for a conceptual design of a modular European Facility for Industrial Transmutation (EFIT) fuel cycle. After identifying which time steps have higher uncertainties, an assessment of which uncertainties have more relevance is performed
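
    A minimal sketch of this kind of Monte Carlo propagation is given below: nuclear data are perturbed by sampling from a relative covariance matrix, a depletion/decay-heat calculation is rerun for each sample, and the spread of the results versus cooling time is collected. The decay_heat_fn, data vector, and covariance are placeholders for the actual codes and libraries; this is an assumed generic scheme, not the authors' toolchain.

```python
import numpy as np

def propagate_nuclear_data(decay_heat_fn, nominal, rel_cov, cooling_times,
                           n_samples=500, seed=1):
    """Monte Carlo propagation sketch for nuclear data uncertainties.

    decay_heat_fn : placeholder function(data_vector, cooling_times) -> array,
                    standing in for a depletion/decay calculation
    nominal       : nominal nuclear data vector (e.g. cross sections, yields)
    rel_cov       : relative covariance matrix of the nuclear data
    """
    rng = np.random.default_rng(seed)
    # Sample multiplicative perturbation factors ~ N(1, rel_cov)
    factors = rng.multivariate_normal(np.ones_like(nominal), rel_cov, size=n_samples)
    runs = np.array([decay_heat_fn(nominal * f, cooling_times) for f in factors])
    return runs.mean(axis=0), runs.std(axis=0)   # mean and 1-sigma vs cooling time
```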

  4. Propagation of registration uncertainty during multi-fraction cervical cancer brachytherapy

    Science.gov (United States)

    Amir-Khalili, A.; Hamarneh, G.; Zakariaee, R.; Spadinger, I.; Abugharbieh, R.

    2017-10-01

    Multi-fraction cervical cancer brachytherapy is a form of image-guided radiotherapy that heavily relies on 3D imaging during treatment planning, delivery, and quality control. In this context, deformable image registration can increase the accuracy of dosimetric evaluations, provided that one can account for the uncertainties associated with the registration process. To enable such capability, we propose a mathematical framework that first estimates the registration uncertainty and subsequently propagates the effects of the computed uncertainties from the registration stage through to the visualizations, organ segmentations, and dosimetric evaluations. To ensure the practicality of our proposed framework in real world image-guided radiotherapy contexts, we implemented our technique via a computationally efficient and generalizable algorithm that is compatible with existing deformable image registration software. In our clinical context of fractionated cervical cancer brachytherapy, we perform a retrospective analysis on 37 patients and present evidence that our proposed methodology for computing and propagating registration uncertainties may be beneficial during therapy planning and quality control. Specifically, we quantify and visualize the influence of registration uncertainty on dosimetric analysis during the computation of the total accumulated radiation dose on the bladder wall. We further show how registration uncertainty may be leveraged into enhanced visualizations that depict the quality of the registration and highlight potential deviations from the treatment plan prior to the delivery of radiation treatment. Finally, we show that we can improve the transfer of delineated volumetric organ segmentation labels from one fraction to the next by encoding the computed registration uncertainties into the segmentation labels.

  5. Communicating uncertainty in hydrological forecasts: mission impossible?

    Science.gov (United States)

    Ramos, Maria-Helena; Mathevet, Thibault; Thielen, Jutta; Pappenberger, Florian

    2010-05-01

    Cascading uncertainty through meteo-hydrological modelling chains for forecasting and integrated flood risk assessment is an essential step towards improving the quality of hydrological forecasts. Although the best methodology to quantify the total predictive uncertainty in hydrology is still debated, there is a common agreement that one must avoid uncertainty misrepresentation and miscommunication, as well as misinterpretation of information by users. Several recent studies point out that uncertainty, when properly explained and defined, is no longer unwelcome among emergency response organizations, users of flood risk information and the general public. However, efficient communication of uncertain hydro-meteorological forecasts is far from being a resolved issue. This study focuses on the interpretation and communication of uncertain hydrological forecasts based on (uncertain) meteorological forecasts and (uncertain) rainfall-runoff modelling approaches to decision-makers such as operational hydrologists and water managers in charge of flood warning and scenario-based reservoir operation. An overview of the typical flow of uncertainties and risk-based decisions in hydrological forecasting systems is presented. The challenges related to the extraction of meaningful information from probabilistic forecasts and the test of its usefulness in assisting operational flood forecasting are illustrated with the help of two case-studies: 1) a study on the use and communication of probabilistic flood forecasting within the European Flood Alert System; 2) a case-study on the use of probabilistic forecasts by operational forecasters from the hydroelectricity company EDF in France. These examples show that attention must be paid to initiatives that promote or reinforce the active participation of expert forecasters in the forecasting chain. The practice of face-to-face forecast briefings, focusing on sharing how forecasters interpret, describe and perceive the model output forecasted

  6. An uncertainty analysis using the NRPB accident consequence code Marc

    International Nuclear Information System (INIS)

    Jones, J.A.; Crick, M.J.; Simmonds, J.R.

    1991-01-01

    This paper describes an uncertainty analysis of MARC calculations of the consequences of accidental releases of radioactive materials to atmosphere. A total of 98 parameters describing the transfer of material through the environment to man, the doses received, and the health effects resulting from these doses, was considered. The uncertainties in the numbers of early and late health effects, numbers of people affected by countermeasures, the amounts of food restricted and the economic costs of the accident were estimated. This paper concentrates on the results for early death and fatal cancer for a large hypothetical release from a PWR

  7. Greenhouse gas scenario sensitivity and uncertainties in precipitation projections for central Belgium

    Science.gov (United States)

    Van Uytven, E.; Willems, P.

    2018-03-01

    Climate change impact assessment on meteorological variables involves large uncertainties as a result of incomplete knowledge of the future greenhouse gas concentrations and climate model physics, next to the inherent internal variability of the climate system. Given that the alteration in greenhouse gas concentrations is the driver for the change, one expects the impacts to be highly dependent on the considered greenhouse gas scenario (GHS). In this study, we denote this behavior as GHS sensitivity. Due to the climate model related uncertainties, this sensitivity is, at local scale, not always as strong as expected. This paper aims to study the GHS sensitivity and its contributing role to climate scenarios for a case study in Belgium. An ensemble of 160 CMIP5 climate model runs is considered and climate change signals are studied for precipitation accumulation, daily precipitation intensities and wet day frequencies. This was done for the different seasons of the year and the scenario periods 2011-2040, 2031-2060, 2051-2081 and 2071-2100. By means of variance decomposition, the total variance in the climate change signals was separated into the contributions of the differences in GHSs and of the other model-related uncertainty sources. These contributions were found to depend on the variable and season. Following the time of emergence concept, the GHS uncertainty contribution is found to depend on the time horizon and increases over time. For the most distant time horizon (2071-2100), the climate model uncertainty accounts for the largest uncertainty contribution. The GHS differences explain up to 18% of the total variance in the climate change signals. The results further point to the importance of the climate model ensemble design, specifically the ensemble size and the combination of climate models upon which climate scenarios are based. The numerical noise, introduced at scales smaller than the skillful scale, e.g. at local scale, was not considered in this study.
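
    A simple way to perform such a variance decomposition is a one-way ANOVA-style split of the ensemble variance into a between-scenario part and a remainder attributed to model-related sources, as in the sketch below. The ensemble members, scenario labels, and change signals are hypothetical, and the actual study may use a more elaborate decomposition.

```python
import numpy as np

def ghs_variance_fraction(signals, scenario_ids):
    """Fraction of ensemble variance in climate change signals explained by
    differences between greenhouse gas scenarios (one-way ANOVA decomposition).

    signals      : 1-D array of change signals, one per ensemble member
    scenario_ids : array of the same length labelling each member's GHS
    """
    signals = np.asarray(signals, dtype=float)
    scenario_ids = np.asarray(scenario_ids)
    grand_mean = signals.mean()
    between = 0.0
    for s in np.unique(scenario_ids):
        members = signals[scenario_ids == s]
        between += len(members) * (members.mean() - grand_mean) ** 2
    between /= len(signals)
    return between / signals.var()   # GHS contribution; remainder is model/internal

# Hypothetical example: 8 ensemble members split over two scenarios
sig = np.array([3.1, 2.8, 4.0, 3.5, 5.2, 4.9, 6.1, 5.5])
ids = np.array(['rcp4.5'] * 4 + ['rcp8.5'] * 4)
print(ghs_variance_fraction(sig, ids))
```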

  8. An Experimental and Theoretical Study of CO2 Hydrate Formation Systems

    DEFF Research Database (Denmark)

    Tzirakis, Fragkiskos

    , the simultaneous combination of these chemicals achieved greater pressure reduction than if they were used separately. Then, experimental uncertainties were measured (for pressure/temperature transducers and gas chromatograph) and calculated (for the inserted quantities of water and chemicals). The uncertainties...

  9. On total noncommutativity in quantum mechanics

    Science.gov (United States)

    Lahti, Pekka J.; Ylinen, Kari

    1987-11-01

    It is shown within the Hilbert space formulation of quantum mechanics that the total noncommutativity of any two physical quantities is necessary for their satisfying the uncertainty relation or for their being complementary. The importance of these results is illustrated with the canonically conjugate position and momentum of a free particle and of a particle closed in a box.
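
    For reference, the uncertainty relation at issue here is the standard Robertson bound for two observables $\hat{A}$ and $\hat{B}$,

    $\Delta A \, \Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [\hat{A},\hat{B}] \rangle\bigr|, \qquad [\hat{A},\hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A},$

    which for the canonically conjugate pair $\hat{x}$, $\hat{p}$ (with $[\hat{x},\hat{p}] = i\hbar$) reduces to the familiar $\Delta x \, \Delta p \ge \hbar/2$.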

  10. A Variation on Uncertainty Principle and Logarithmic Uncertainty Principle for Continuous Quaternion Wavelet Transforms

    Directory of Open Access Journals (Sweden)

    Mawardi Bahri

    2017-01-01

    The continuous quaternion wavelet transform (CQWT) is a generalization of the classical continuous wavelet transform within the context of quaternion algebra. First of all, we show that the directional quaternion Fourier transform (QFT) uncertainty principle can be obtained using the component-wise QFT uncertainty principle. Based on this method, the directional QFT uncertainty principle in polar coordinate form is easily derived. We derive a variation on the uncertainty principle related to the QFT. We state that the CQWT of a quaternion function can be written in terms of the QFT and obtain a variation on the uncertainty principle related to the CQWT. Finally, we apply the extended uncertainty principles and properties of the CQWT to establish logarithmic uncertainty principles related to the generalized transform.

  11. Saccadic Suppression of Flash Detection: the Uncertainty Theory VS. Alternative Theories.

    Science.gov (United States)

    Greenhouse, Daniel Stephen

    involved the generation of receiver operating characteristic (ROC) curves. The results, interpreted within the framework of the Theory of Signal Detectability, served to establish the presence of uncertainty for two of four subjects. The magnitude of uncertainty, estimated from the ROC curves, was comparable with that which could account for the decline in detectability observed in earlier experiments, and we concluded that uncertainty could account entirely for suppression in these subjects. In the final experiment, we employed spatially separate marker flashes as cues in an attempt to reduce uncertainty. For one of two subjects, detectability of a stimulus presented during a saccade improved substantially when the markers were employed. This result was interpreted in terms of the uncertainty theory. The evidence, in total, leads us to conclude that, with respect to other theories which have appeared in the literature, the uncertainty theory of saccadic suppression is a viable alternative.

  12. On the relationship between aerosol model uncertainty and radiative forcing uncertainty.

    Science.gov (United States)

    Lee, Lindsay A; Reddington, Carly L; Carslaw, Kenneth S

    2016-05-24

    The largest uncertainty in the historical radiative forcing of climate is caused by the interaction of aerosols with clouds. Historical forcing is not a directly measurable quantity, so reliable assessments depend on the development of global models of aerosols and clouds that are well constrained by observations. However, there has been no systematic assessment of how reduction in the uncertainty of global aerosol models will feed through to the uncertainty in the predicted forcing. We use a global model perturbed parameter ensemble to show that tight observational constraint of aerosol concentrations in the model has a relatively small effect on the aerosol-related uncertainty in the calculated forcing between preindustrial and present-day periods. One factor is the low sensitivity of present-day aerosol to natural emissions that determine the preindustrial aerosol state. However, the major cause of the weak constraint is that the full uncertainty space of the model generates a large number of model variants that are equally acceptable compared to present-day aerosol observations. The narrow range of aerosol concentrations in the observationally constrained model gives the impression of low aerosol model uncertainty. However, these multiple "equifinal" models predict a wide range of forcings. To make progress, we need to develop a much deeper understanding of model uncertainty and ways to use observations to constrain it. Equifinality in the aerosol model means that tuning of a small number of model processes to achieve model-observation agreement could give a misleading impression of model robustness.

  13. Jet energy scale uncertainty correlations between ATLAS and CMS at 8 TeV

    CERN Document Server

    CMS and ATLAS Collaborations

    2015-01-01

    An evaluation of the correlations between ATLAS and CMS jet energy scale uncertainties is presented for $\\sqrt{s}=8$ TeV $pp$ collisions recorded in 2012. Uncertainties within each experiment are grouped based on the general type of systematic effect they are intended to cover and the means by which they are derived. Inter-experimental correlation value ranges are established for each corresponding group of uncertainty components. This correlation range is intended to cover the possible correlation values when performing combinations between the two experiments, where the most conservative value obtained from scanning over the correlation range should be used for the final combined measurement. The procedure described here is primarily aimed at single-observable analyses, and has limitations when applied to multi-observable measurements.

  14. Decision-making under great uncertainty

    International Nuclear Information System (INIS)

    Hansson, S.O.

    1992-01-01

    Five types of decision-uncertainty are distinguished: uncertainty of consequences, of values, of demarcation, of reliance, and of co-ordination. Strategies are proposed for each type of uncertainty. The general conclusion is that it is meaningful for decision theory to treat cases with greater uncertainty than the textbook case of 'decision-making under uncertainty'. (au)

  15. Total glucosides of peony attenuates experimental autoimmune encephalomyelitis in C57BL/6 mice.

    Science.gov (United States)

    Huang, Qiling; Ma, Xiaomeng; Zhu, Dong Liang; Chen, Li; Jiang, Ying; Zhou, Linli; Cen, Lei; Pi, Rongbiao; Chen, Xiaohong

    2015-07-15

    Total glucosides of peony (TGP), an active compound extracted from the roots of Paeonia lactiflora Pall, has wide pharmacological effects on the nervous system. Here we examined the effects of TGP on experimental autoimmune encephalomyelitis (EAE), an established model of multiple sclerosis (MS). The results showed that TGP can reduce the severity and progression of EAE in C57BL/6 mice. In addition, TGP also down-regulated the Th1/Th17 inflammatory response and prevented the reduced expression of brain-derived neurotrophic factor and 2',3'-cyclic nucleotide 3'-phosphodiesterase in EAE. These findings suggest that TGP could be a potential therapeutic agent for MS. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Probabilistic Physics of Failure-based framework for fatigue life prediction of aircraft gas turbine discs under uncertainty

    International Nuclear Information System (INIS)

    Zhu, Shun-Peng; Huang, Hong-Zhong; Peng, Weiwen; Wang, Hai-Kun; Mahadevan, Sankaran

    2016-01-01

    A probabilistic Physics of Failure-based framework for fatigue life prediction of aircraft gas turbine discs operating under uncertainty is developed. The framework incorporates the overall uncertainties appearing in a structural integrity assessment. A comprehensive uncertainty quantification (UQ) procedure is presented to quantify multiple types of uncertainty using multiplicative and additive UQ methods. In addition, the factors that contribute the most to the resulting output uncertainty are investigated and identified for uncertainty reduction in decision-making. A high prediction accuracy of the proposed framework is validated through a comparison of model predictions to the experimental results of GH4133 superalloy and full-scale tests of aero engine high-pressure turbine discs. - Highlights: • A probabilistic PoF-based framework for fatigue life prediction is proposed. • A comprehensive procedure for quantifying multiple types of uncertainty is presented. • The factors that contribute most to the resulting output uncertainty are identified. • The proposed framework demonstrates high prediction accuracy by full-scale tests.

  17. Deterministic sensitivity and uncertainty methodology for best estimate system codes applied in nuclear technology

    International Nuclear Information System (INIS)

    Petruzzi, A.; D'Auria, F.; Cacuci, D.G.

    2009-01-01

    Nuclear Power Plant (NPP) technology has been developed based on the traditional defense in depth philosophy supported by deterministic and overly conservative methods for safety analysis. In the 1970s [1], conservative hypotheses were introduced for safety analyses to address existing uncertainties. Since then, intensive thermal-hydraulic experimental research has resulted in a considerable increase in knowledge and consequently in the development of best-estimate codes able to provide more realistic information about the physical behaviour and to identify the most relevant safety issues allowing the evaluation of the existing actual margins between the results of the calculations and the acceptance criteria. However, the best-estimate calculation results from complex thermal-hydraulic system codes (like Relap5, Cathare, Athlet, Trace, etc.) are affected by unavoidable approximations that are unpredictable without the use of computational tools that account for the various sources of uncertainty. Therefore the use of best-estimate codes (BE) within the reactor technology, either for design or safety purposes, implies understanding and accepting the limitations and the deficiencies of those codes. Taking into consideration the above framework, a comprehensive approach for utilizing quantified uncertainties arising from Integral Test Facilities (ITFs, [2]) and Separate Effect Test Facilities (SETFs, [3]) in the process of calibrating complex computer models for the application to NPP transient scenarios has been developed. The methodology proposed is capable of accommodating multiple SETFs and ITFs to learn as much as possible about uncertain parameters, allowing for the improvement of the computer model predictions based on the available experimental evidence. The proposed methodology constitutes a major step forward with respect to the generally used expert judgment and statistical methods as it permits a) to establish the uncertainties of any parameter

  18. DS02 uncertainty analysis

    International Nuclear Information System (INIS)

    Kaul, Dean C.; Egbert, Stephen D.; Woolson, William A.

    2005-01-01

    In order to avoid the pitfalls that so discredited DS86 and its uncertainty estimates, and to provide DS02 uncertainties that are both defensible and credible, this report not only presents the ensemble uncertainties assembled from uncertainties in individual computational elements and radiation dose components but also describes how these relate to comparisons between observed and computed quantities at critical intervals in the computational process. These comparisons include those between observed and calculated radiation free-field components, where observations include thermal- and fast-neutron activation and gamma-ray thermoluminescence, which are relevant to the estimated systematic uncertainty for DS02. The comparisons also include those between calculated and observed survivor shielding, where the observations consist of biodosimetric measurements for individual survivors, which are relevant to the estimated random uncertainty for DS02. (J.P.N.)

  19. Photovoltaic Calibrations at the National Renewable Energy Laboratory and Uncertainty Analysis Following the ISO 17025 Guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Emery, Keith [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-09-01

    The measurement of photovoltaic (PV) performance with respect to reference conditions requires measuring current versus voltage for a given tabular reference spectrum, junction temperature, and total irradiance. This report presents the procedures implemented by the PV Cell and Module Performance Characterization Group at the National Renewable Energy Laboratory (NREL) to achieve the lowest practical uncertainty. A rigorous uncertainty analysis of these procedures is presented, which follows the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement. This uncertainty analysis is required for the team’s laboratory accreditation under ISO standard 17025, “General Requirements for the Competence of Testing and Calibration Laboratories.” The report also discusses additional areas where the uncertainty can be reduced.

  20. Uncertainty Assessment of Space-Borne Passive Soil Moisture Retrievals

    Science.gov (United States)

    Quets, Jan; De Lannoy, Gabrielle; Reichle, Rolf; Cosh, Michael; van der Schalie, Robin; Wigneron, Jean-Pierre

    2017-01-01

    The uncertainty associated with passive soil moisture retrieval is hard to quantify and is known to be underlain by various, diverse, and complex causes. Factors affecting space-borne retrieved soil moisture estimation include: (i) the optimization or inversion method applied to the radiative transfer model (RTM), such as the Single Channel Algorithm (SCA) or the Land Parameter Retrieval Model (LPRM), (ii) the selection of the observed brightness temperatures (Tbs), e.g. polarization and incidence angle, (iii) the definition of the cost function and the impact of prior information in it, and (iv) the RTM parameterization (e.g. parameterizations officially used by the SMOS L2 and SMAP L2 retrieval products, ECMWF-based SMOS assimilation product, SMAP L4 assimilation product, and perturbations from those configurations). This study aims at disentangling the relative importance of the above-mentioned sources of uncertainty, by carrying out soil moisture retrieval experiments, using SMOS Tb observations in different settings, of which some are mentioned above. The ensemble uncertainties are evaluated at 11 reference CalVal sites, over a time period of more than 5 years. These experimental retrievals were inter-compared, and further confronted with in situ soil moisture measurements and operational SMOS L2 retrievals, using commonly used skill metrics to quantify the temporal uncertainty in the retrievals.

  1. Application of a Novel Dose-Uncertainty Model for Dose-Uncertainty Analysis in Prostate Intensity-Modulated Radiotherapy

    International Nuclear Information System (INIS)

    Jin Hosang; Palta, Jatinder R.; Kim, You-Hyun; Kim, Siyong

    2010-01-01

    Purpose: To analyze dose uncertainty using a previously published dose-uncertainty model, and to assess potential dosimetric risks existing in prostate intensity-modulated radiotherapy (IMRT). Methods and Materials: The dose-uncertainty model provides a three-dimensional (3D) dose-uncertainty distribution at a given confidence level (CL). For 8 retrospectively selected patients, dose-uncertainty maps were constructed using the dose-uncertainty model at the 95% CL. In addition to uncertainties inherent to the radiation treatment planning system, four scenarios of spatial errors were considered: machine only (S1), S1 + intrafraction, S1 + interfraction, and S1 + both intrafraction and interfraction errors. To evaluate the potential risks of the IMRT plans, three dose-uncertainty-based plan evaluation tools were introduced: confidence-weighted dose-volume histogram, confidence-weighted dose distribution, and dose-uncertainty-volume histogram. Results: Dose uncertainty caused by interfraction setup error was more significant than that caused by intrafraction motion error. The maximum dose uncertainty (95% confidence) of the clinical target volume (CTV) was smaller than 5% of the prescribed dose in all but two cases (13.9% and 10.2%). The dose uncertainty for 95% of the CTV volume ranged from 1.3% to 2.9% of the prescribed dose. Conclusions: The dose uncertainty in prostate IMRT could be evaluated using the dose-uncertainty model. Prostate IMRT plans satisfying the same plan objectives could generate a significantly different dose uncertainty because of a complex interplay of many uncertainty sources. The uncertainty-based plan evaluation contributes to generating reliable and error-resistant treatment plans.
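
    The exact definitions of the confidence-weighted tools are given in the paper; as a rough, assumed illustration of the general idea, the sketch below forms a conservative "confidence-weighted" dose by subtracting the 95%-confidence uncertainty from the planned dose and then bins a cumulative dose-volume histogram over a target mask. All arrays and the construction itself are hypothetical stand-ins, not the authors' formulation.

```python
import numpy as np

def cumulative_dvh(dose, mask, bins=200):
    """Cumulative dose-volume histogram for the voxels selected by mask."""
    d = dose[mask]
    edges = np.linspace(0.0, d.max(), bins)
    frac = np.array([(d >= e).mean() * 100.0 for e in edges])  # % volume >= dose
    return edges, frac

def confidence_weighted_dose(dose, sigma, z=1.96):
    """Conservative dose estimate: planned dose minus the 95%-confidence
    uncertainty (one plausible reading of a 'confidence-weighted' distribution)."""
    return np.clip(dose - z * sigma, 0.0, None)

# Hypothetical 3-D dose and uncertainty arrays plus a CTV mask
rng = np.random.default_rng(3)
dose = rng.normal(76.0, 1.0, size=(40, 40, 40))        # Gy
sigma = np.abs(rng.normal(1.0, 0.3, size=dose.shape))  # Gy, 1-sigma
ctv = np.zeros(dose.shape, dtype=bool)
ctv[15:25, 15:25, 15:25] = True

edges, frac = cumulative_dvh(confidence_weighted_dose(dose, sigma), ctv)
```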

  2. Uncertainty Quantification of Multi-Phase Closures

    Energy Technology Data Exchange (ETDEWEB)

    Nadiga, Balasubramanya T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Baglietto, Emilio [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2017-10-27

    In the ensemble-averaged dispersed phase formulation used for CFD of multiphase flows in nuclear reactor thermohydraulics, closures of interphase transfer of mass, momentum, and energy constitute, by far, the biggest source of error and uncertainty. Reliable estimators of this source of error and uncertainty are currently non-existent. Here, we report on how modern Validation and Uncertainty Quantification (VUQ) techniques can be leveraged to not only quantify such errors and uncertainties, but also to uncover (unintended) interactions between closures of different phenomena. As such this approach serves as a valuable aid in the research and development of multiphase closures. The joint modeling of lift, drag, wall lubrication, and turbulent dispersion (forces that lead to transfer of momentum between the liquid and gas phases) is examined in the framework of validation of the adiabatic but turbulent experiments of Liu and Bankoff, 1993. An extensive calibration study is undertaken with a popular combination of closure relations and the popular k-ϵ turbulence model in a Bayesian framework. When a wide range of superficial liquid and gas velocities and void fractions is considered, it is found that this set of closures can be validated against the experimental data only by allowing large variations in the coefficients associated with the closures. We argue that such an extent of variation is a measure of uncertainty induced by the chosen set of closures. We also find that while mean fluid velocity and void fraction profiles are properly fit, fluctuating fluid velocity may or may not be properly fit. This aspect needs to be investigated further. The popular set of closures considered contains ad-hoc components and is undesirable from a predictive modeling point of view. Consequently, we next consider improvements that are being developed by the MIT group under CASL and which remove the ad-hoc elements. We use non-intrusive methodologies for sensitivity analysis and calibration (using
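
    A stripped-down version of such a Bayesian calibration is sketched below with a random-walk Metropolis sampler for a single closure coefficient. The forward model, data, and noise level are placeholders (here a toy linear surrogate), so this illustrates only the general mechanics, not the actual CFD calibration.

```python
import numpy as np

def calibrate_coefficient(forward_model, data, noise_sd, prior_sd=1.0,
                          n_iter=20000, step=0.05, seed=0):
    """Random-walk Metropolis calibration of one closure coefficient.

    forward_model : c -> predicted observables (placeholder for a CFD surrogate)
    data          : measured observables (e.g. a void-fraction profile)
    """
    rng = np.random.default_rng(seed)

    def log_post(c):
        resid = data - forward_model(c)
        return -0.5 * np.sum((resid / noise_sd) ** 2) - 0.5 * (c / prior_sd) ** 2

    chain = np.empty(n_iter)
    c, lp = 0.0, log_post(0.0)
    for i in range(n_iter):
        c_new = c + step * rng.standard_normal()
        lp_new = log_post(c_new)
        if np.log(rng.random()) < lp_new - lp:   # Metropolis accept/reject
            c, lp = c_new, lp_new
        chain[i] = c
    return chain   # posterior samples; their spread measures closure uncertainty

# Toy example: linear surrogate with synthetic data (hypothetical)
x = np.linspace(0.0, 1.0, 20)
obs = 0.7 * x + 0.02 * np.random.default_rng(1).standard_normal(20)
samples = calibrate_coefficient(lambda c: c * x, obs, noise_sd=0.02)
print(samples.mean(), samples.std())
```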

  3. Sources of uncertainty in future changes in local precipitation

    Energy Technology Data Exchange (ETDEWEB)

    Rowell, David P. [Met Office Hadley Centre, Exeter (United Kingdom)

    2012-10-15

    This study considers the large uncertainty in projected changes in local precipitation. It aims to map, and begin to understand, the relative roles of uncertain modelling and natural variability, using 20-year mean data from four perturbed physics or multi-model ensembles. The largest - 280-member - ensemble illustrates a rich pattern in the varying contribution of modelling uncertainty, with similar features found using a CMIP3 ensemble (despite its limited sample size, which restricts its value in this context). The contribution of modelling uncertainty to the total uncertainty in local precipitation change is found to be highest in the deep tropics, particularly over South America, Africa, the east and central Pacific, and the Atlantic. In the moist maritime tropics, the highly uncertain modelling of sea-surface temperature changes translates into large modelling uncertainty in local rainfall changes. Over tropical land and summer mid-latitude continents (and to a lesser extent, the tropical oceans), uncertain modelling of atmospheric processes, land surface processes and the terrestrial carbon cycle all appear to play an additional substantial role in driving the uncertainty of local rainfall changes. In polar regions, inter-model variability of anomalous sea ice drives an uncertain precipitation response, particularly in winter. In all these regions, there is therefore the potential to reduce the uncertainty of local precipitation changes through targeted model improvements and observational constraints. In contrast, over much of the arid subtropical and mid-latitude oceans, over Australia, and over the Sahara in winter, internal atmospheric variability dominates the uncertainty in projected precipitation changes. Here, model improvements and observational constraints will have little impact on the uncertainty of time means shorter than at least 20 years. Last, a supplementary application of the metric developed here is that it can be interpreted as a measure
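
    The split between modelling uncertainty and internal (natural) variability can be sketched with a simple analysis-of-variance decomposition of an ensemble. The ensemble sizes and numbers below are synthetic and illustrative only; they do not reproduce the perturbed-physics or CMIP3 data used in the study.

        import numpy as np

        rng = np.random.default_rng(2)
        n_models, n_realizations = 28, 10

        # Synthetic 20-year-mean local precipitation changes (%): model signal + internal variability.
        model_signal = rng.normal(5.0, 4.0, size=n_models)                  # across-model spread
        dp = model_signal[:, None] + rng.normal(0.0, 3.0, size=(n_models, n_realizations))

        # One-way ANOVA-style decomposition of the total variance.
        var_internal = dp.var(axis=1, ddof=1).mean()        # within-model (natural variability)
        var_model = dp.mean(axis=1).var(ddof=1) - var_internal / n_realizations
        var_total = var_model + var_internal

        print(f"fraction of uncertainty from modelling:        {var_model / var_total:.2f}")
        print(f"fraction from internal (natural) variability:  {var_internal / var_total:.2f}")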

  4. Uncertainty propagation analysis of an N2O emission model at the plot and landscape scale

    NARCIS (Netherlands)

    Nol, L.; Heuvelink, G.B.M.; Veldkamp, A.; Vries, de W.; Kros, J.

    2010-01-01

    Nitrous oxide (N2O) emission from agricultural land is an important component of the total annual greenhouse gas (GHG) budget. In addition, uncertainties associated with agricultural N2O emissions are large. The goals of this work were (i) to quantify the uncertainties of modelled N2O emissions

  5. Invited Review Article: Measurement uncertainty of linear phase-stepping algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hack, Erwin [EMPA, Laboratory Electronics/Metrology/Reliability, Ueberlandstrasse 129, CH-8600 Duebendorf (Switzerland); Burke, Jan [Australian Centre for Precision Optics, CSIRO (Commonwealth Scientific and Industrial Research Organisation) Materials Science and Engineering, P.O. Box 218, Lindfield, NSW 2070 (Australia)

    2011-06-15

    Phase retrieval techniques are widely used in optics, imaging and electronics. Originating in signal theory, they were introduced to interferometry around 1970. Over the years, many robust phase-stepping techniques have been developed that minimize specific experimental influence quantities such as phase step errors or higher harmonic components of the signal. However, optimizing a technique for a specific influence quantity can compromise its performance with regard to others. We present a consistent quantitative analysis of phase measurement uncertainty for the generalized linear phase-stepping algorithm with nominally equal phase-stepping angles, thereby reviewing and generalizing several results that have been reported in the literature. All influence quantities are treated on an equal footing, and correlations between them are described in a consistent way. For the special case of classical N-bucket algorithms, we present analytical formulae that describe the combined variance as a function of the phase angle values. For the general Arctan algorithms, we derive expressions for the measurement uncertainty averaged over the full 2π-range of phase angles. We also give an upper bound for the measurement uncertainty, which can be expressed as being proportional to an algorithm-specific factor. Tabular compilations help the reader to quickly assess the uncertainties that are involved with his or her technique.
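
    A minimal numerical companion to the N-bucket case: the sketch below implements the classical N-bucket phase estimator and checks its uncertainty under additive Gaussian intensity noise by Monte Carlo. The signal parameters are hypothetical, and the quoted analytic value is the familiar first-order approximation sigma*sqrt(2/N)/B rather than the paper's general formulae.

        import numpy as np

        rng = np.random.default_rng(3)

        def n_bucket_phase(intensities):
            """Classical N-bucket estimate with phase steps delta_k = 2*pi*k/N."""
            n = intensities.shape[-1]
            steps = 2.0 * np.pi * np.arange(n) / n
            num = -(intensities * np.sin(steps)).sum(axis=-1)
            den = (intensities * np.cos(steps)).sum(axis=-1)
            return np.arctan2(num, den)

        # Hypothetical interferogram: I_k = A + B*cos(phi + delta_k), 4-bucket algorithm.
        n_steps, bias, modulation, true_phi = 4, 1.0, 0.6, 0.7
        steps = 2.0 * np.pi * np.arange(n_steps) / n_steps
        ideal = bias + modulation * np.cos(true_phi + steps)

        # Monte Carlo estimate of the phase uncertainty from additive Gaussian intensity noise.
        noise_sigma = 0.01
        noisy = ideal + rng.normal(0.0, noise_sigma, size=(50_000, n_steps))
        phases = n_bucket_phase(noisy)
        analytic = noise_sigma * np.sqrt(2.0 / n_steps) / modulation
        print(f"simulated phase uncertainty {phases.std():.5f} rad, first-order analytic {analytic:.5f} rad")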

  6. Decision-Making under Criteria Uncertainty

    Science.gov (United States)

    Kureychik, V. M.; Safronenkova, I. B.

    2018-05-01

    Uncertainty is an essential part of a decision-making procedure. The paper deals with the problem of decision-making under criteria uncertainty. In this context, decision-making under uncertainty, and the types and conditions of uncertainty, were examined. The decision-making problem under uncertainty was formalized. A modification of a mathematical decision support method under uncertainty via ontologies was proposed. A critical distinction of the developed method is its use of ontologies as base elements. The goal of this work is the development of a decision-making method under criteria uncertainty with the use of ontologies in the area of multilayer board design. This method is oriented towards improving the technical-economic characteristics of the examined domain.

  7. Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model

    KAUST Repository

    Wang, Shitao

    2016-05-27

    Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel height uncertainties are then mainly due to uncertainties in the 95th percentile of the droplet size and in the entrainment parameters.
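
    The construction of a polynomial-chaos surrogate and its use for an analysis of variance can be sketched with a toy two-input stand-in for the plume model. The function, input ranges and polynomial degree below are assumptions, not the actual model or its calibrated ranges; the point is only how a tensor-product Legendre basis is fitted by least squares and how variance contributions are read off the coefficients.

        import numpy as np
        from numpy.polynomial import legendre

        rng = np.random.default_rng(4)

        def plume_model(x):
            """Hypothetical stand-in for the integral plume model: trap height vs. two inputs."""
            flow_rate, entrainment = x[..., 0], x[..., 1]
            return 300.0 + 80.0 * flow_rate + 25.0 * entrainment + 15.0 * flow_rate * entrainment

        # Uniform uncertain inputs scaled to [-1, 1] (Legendre chaos).
        n_train = 400
        xi = rng.uniform(-1, 1, size=(n_train, 2))
        y = plume_model(xi)

        # Tensor-product Legendre basis up to degree 2 in each input.
        degrees = [(i, j) for i in range(3) for j in range(3)]
        def basis(points):
            cols = []
            for i, j in degrees:
                ci = np.zeros(3); ci[i] = 1.0
                cj = np.zeros(3); cj[j] = 1.0
                cols.append(legendre.legval(points[:, 0], ci) * legendre.legval(points[:, 1], cj))
            return np.column_stack(cols)

        coeffs, *_ = np.linalg.lstsq(basis(xi), y, rcond=None)

        # Variance decomposition from the PCE coefficients (E[P_n^2] = 1/(2n+1) for uniform inputs).
        norms = np.array([1.0 / ((2 * i + 1) * (2 * j + 1)) for i, j in degrees])
        var_terms = coeffs**2 * norms
        total_var = var_terms[1:].sum()                     # exclude the mean term (0, 0)
        s_flow = sum(v for (i, j), v in zip(degrees, var_terms) if i > 0 and j == 0) / total_var
        s_entr = sum(v for (i, j), v in zip(degrees, var_terms) if j > 0 and i == 0) / total_var
        print(f"first-order Sobol indices: flow rate {s_flow:.2f}, entrainment {s_entr:.2f}")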

  8. Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model

    KAUST Repository

    Wang, Shitao; Iskandarani, Mohamed; Srinivasan, Ashwanth; Thacker, W. Carlisle; Winokur, Justin; Knio, Omar

    2016-01-01

    Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel height uncertainties are then mainly due to uncertainties in the 95th percentile of the droplet size and in the entrainment parameters.

  9. Physical Uncertainty Bounds (PUB)

    Energy Technology Data Exchange (ETDEWEB)

    Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  10. Probabilistic analysis of manufacturing uncertainties for an automotive turbocharger centrifugal compressor using numerical and experimental methods

    NARCIS (Netherlands)

    Javed, A.; Kamphues, E.; Hartuc, T.; Pecnik, R.; Van Buijtenen, J.P.

    2015-01-01

    The compressor impellers for mass-produced turbochargers are generally die-casted and machined to their final configuration. Manufacturing uncertainties are inherently introduced as stochastic dimensional deviations in the impeller geometry. These deviations eventually propagate into the compressor

  11. Statistical uncertainties of nondestructive assay for spent nuclear fuel by using nuclear resonance fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Shizuma, Toshiyuki, E-mail: shizuma.toshiyuki@jaea.go.jp [Quantum Beam Science Directorate, Japan Atomic Energy Agency, Tokai, Ibaraki 319-1195 (Japan); Hayakawa, Takehito; Angell, Christopher T.; Hajima, Ryoichi [Quantum Beam Science Directorate, Japan Atomic Energy Agency, Tokai, Ibaraki 319-1195 (Japan); Minato, Futoshi; Suyama, Kenya [Nuclear Science and Engineering Directorate, Japan Atomic Energy Agency, Tokai, Ibaraki 319-1195 (Japan); Seya, Michio [Integrated Support Center for Nuclear Nonproliferation and Nuclear Security, Japan Atomic Energy Agency, Tokai, Ibaraki 319-1198 (Japan); Johnson, Micah S. [Lawrence Livermore National Laboratory, 7000 East Ave. Livermore, CA 94550 (United States); Department of Physics and Astronomy, San Jose State University, One Washington Square, San Jose, CA 9519 (United States); McNabb, Dennis P. [Lawrence Livermore National Laboratory, 7000 East Ave. Livermore, CA 94550 (United States)

    2014-02-11

    We estimated statistical uncertainties of a nondestructive assay system using nuclear resonance fluorescence (NRF) for spent nuclear fuel, including low concentrations of actinide nuclei, with an intense, mono-energetic photon beam. Background counts from radioactive materials inside the spent fuel were calculated with the ORIGEN2.2-UPJ burn-up computer code. The coherent scattering contribution associated with Rayleigh, nuclear Thomson, and Delbrück scattering was also considered. The energy of the coherent scattering overlaps with that of NRF transitions to the ground state. Here, we propose to measure NRF transitions to the first excited state to avoid the coherent scattering contribution. Assuming that the total NRF cross-sections are in the range of 3–100 eV b at excitation energies of 2.25, 3.5, and 5 MeV, statistical uncertainties of the NRF measurement were estimated. We concluded that it is possible to assay 1% actinide content in the spent fuel with 2.2–3.2% statistical precision during a 4000 s measurement time for a total integrated cross-section of 30 eV b at excitation energies of 3.5–5 MeV by using a photon beam with an intensity of 10^6 photons/s/eV. We also examined both the experimental and theoretical NRF cross-sections for actinide nuclei. The calculation based on the quasi-particle random phase approximation suggests the existence of strong magnetic dipole resonances at excitation energies ranging from 2 to 6 MeV, with scattering cross-sections of tens of eV b around 5 MeV in 238U.

  12. Uncertainty Propagation in OMFIT

    Science.gov (United States)

    Smith, Sterling; Meneghini, Orso; Sung, Choongki

    2017-10-01

    A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
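
    A minimal sketch of the covariant-uncertainty propagation described above, using only a simplified modified-tanh profile and synthetic data (the actual OMFIT/ONETWO/TGLF chain is far richer): fit the profile with scipy, turn the fitted parameters and their covariance matrix into correlated quantities with the python uncertainties package, and propagate them to the profile value and gradient at one radius.

        import numpy as np
        from scipy.optimize import curve_fit
        from uncertainties import correlated_values, umath

        rng = np.random.default_rng(5)

        def mtanh(r, height, r0, width, offset):
            """Simplified modified-tanh pedestal profile (hypothetical parameterization)."""
            return offset + 0.5 * height * (1.0 - np.tanh((r - r0) / width))

        # Synthetic profile data with random measurement errors.
        r = np.linspace(0.85, 1.0, 40)
        y_obs = mtanh(r, 1.0, 0.96, 0.02, 0.1) + rng.normal(0, 0.03, size=r.size)

        popt, pcov = curve_fit(mtanh, r, y_obs, p0=[1.0, 0.95, 0.03, 0.1])

        # Covariant parameter uncertainties, propagated through the profile and its gradient.
        height, r0, width, offset = correlated_values(popt, pcov)
        r_eval = 0.96
        t = umath.tanh((r_eval - r0) / width)
        profile = offset + 0.5 * height * (1.0 - t)
        gradient = -0.5 * height / width * (1.0 - t ** 2)
        print("profile at r=0.96: ", profile)     # nominal value +/- propagated uncertainty
        print("gradient at r=0.96:", gradient)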

  13. Effect of precipitation spatial distribution uncertainty on the uncertainty bounds of a snowmelt runoff model output

    Science.gov (United States)

    Jacquin, A. P.

    2012-04-01

    This study analyses the effect of precipitation spatial distribution uncertainty on the uncertainty bounds of a snowmelt runoff model's discharge estimates. Prediction uncertainty bounds are derived using the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The model analysed is a conceptual watershed model operating at a monthly time step. The model divides the catchment into five elevation zones, where the fifth zone corresponds to the catchment glaciers. Precipitation amounts at each elevation zone i are estimated as the product between observed precipitation (at a single station within the catchment) and a precipitation factor FPi. Thus, these factors provide a simplified representation of the spatial variation of precipitation, specifically the shape of the functional relationship between precipitation and height. In the absence of information about appropriate values of the precipitation factors FPi, these are estimated through standard calibration procedures. The catchment case study is Aconcagua River at Chacabuquito, located in the Andean region of Central Chile. Monte Carlo samples of the model output are obtained by randomly varying the model parameters within their feasible ranges. In the first experiment, the precipitation factors FPi are considered unknown and thus included in the sampling process. The total number of unknown parameters in this case is 16. In the second experiment, precipitation factors FPi are estimated a priori, by means of a long term water balance between observed discharge at the catchment outlet, evapotranspiration estimates and observed precipitation. In this case, the number of unknown parameters reduces to 11. The feasible ranges assigned to the precipitation factors in the first experiment are slightly wider than the range of fixed precipitation factors used in the second experiment. The mean squared error of the Box-Cox transformed discharge during the calibration period is used for the evaluation of the
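
    A bare-bones GLUE sketch, with a hypothetical toy runoff model, synthetic forcing, a Nash-Sutcliffe likelihood measure and an assumed behavioural threshold (the actual study uses a five-zone monthly model and a Box-Cox transformed error measure): sample the feasible parameter ranges by Monte Carlo, retain behavioural simulations, and build likelihood-weighted prediction uncertainty bounds.

        import numpy as np

        rng = np.random.default_rng(6)

        def snowmelt_model(params, forcing):
            """Toy monthly runoff model: degree-day melt plus a precipitation factor (hypothetical)."""
            ddf, fp = params
            temp, precip = forcing
            return ddf * np.clip(temp, 0, None) + fp * precip

        # Synthetic forcing and "observed" monthly discharge.
        temp = rng.uniform(-5, 10, size=120)
        precip = rng.gamma(2.0, 30.0, size=120)
        q_obs = snowmelt_model((2.0, 0.6), (temp, precip)) + rng.normal(0, 8.0, size=120)

        # GLUE: Monte Carlo sampling of feasible parameter ranges, likelihood-weighted bounds.
        n = 20_000
        params = np.column_stack([rng.uniform(0.5, 5.0, n),    # degree-day factor
                                  rng.uniform(0.2, 1.5, n)])   # precipitation factor FP
        sims = np.array([snowmelt_model(p, (temp, precip)) for p in params])
        nse = 1 - ((sims - q_obs) ** 2).sum(axis=1) / ((q_obs - q_obs.mean()) ** 2).sum()

        behavioural = nse > 0.6                                # behavioural threshold (assumed)
        weights = nse[behavioural] / nse[behavioural].sum()

        # 5-95% prediction uncertainty bounds for the first month, as an illustration.
        s = sims[behavioural][:, 0]
        order = np.argsort(s)
        cdf = np.cumsum(weights[order])
        lower, upper = s[order][np.searchsorted(cdf, [0.05, 0.95])]
        print(f"month 1: observed {q_obs[0]:.1f}, GLUE 5-95% bounds [{lower:.1f}, {upper:.1f}]")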

  14. Long term high resolution rainfall runoff observations for improved water balance uncertainty and database QA-QC in the Walnut Gulch Experimental Watershed.

    Science.gov (United States)

    Bitew, M. M.; Goodrich, D. C.; Demaria, E.; Heilman, P.; Kautz, M. A.

    2017-12-01

    Walnut Gulch is a semi-arid experimental watershed and Long Term Agro-ecosystem Research (LTAR) site managed by the USDA-ARS Southwest Watershed Research Center, for which high-resolution long-term hydro-climatic data are available across its 150 km2 drainage area. In this study, we present the analysis of 50 years of continuous hourly rainfall data to evaluate runoff controls and generation processes, with the aim of improving the QA-QC plans of Walnut Gulch and creating a high-quality data set that is critical for reducing water balance uncertainties. Multiple linear regression models were developed to relate rainfall properties, runoff characteristics and watershed properties. The rainfall properties were summarized as event-based total depth, maximum intensity, duration, the location of the storm center with respect to the outlet, and storm size normalized to watershed area. We evaluated the interactions between rainfall and runoff in terms of antecedent moisture condition (AMC), antecedent runoff condition (ARC), and runoff depth and duration for each rainfall event. We summarized the watershed properties, such as contributing area, slope, shape, channel length, stream density, channel flow area, and percent of the area in retention stock ponds, for each of the nested catchments in Walnut Gulch. The evaluation of the model using basic and categorical statistics showed good predictive skill throughout the watersheds. The model produced correlation coefficients ranging from 0.4 to 0.94, Nash efficiency coefficients up to 0.77, and Kling-Gupta coefficients ranging from 0.4 to 0.98. The model predicted 92% of all runoff-generating events and 98% of non-runoff events across all sub-watersheds in Walnut Gulch. The regression model also indicated good potential to complement the QA-QC procedures in place for Walnut Gulch dataset publications developed over the years since the 1960s, through identification of inconsistencies in rainfall and runoff relations.

  15. Methodologies of Uncertainty Propagation Calculation

    International Nuclear Information System (INIS)

    Chojnacki, Eric

    2002-01-01

    After recalling the theoretical principle and the practical difficulties of the methodologies of uncertainty propagation calculation, the author discussed how to propagate input uncertainties. He said there were two kinds of input uncertainty: variability (uncertainty due to heterogeneity) and lack of knowledge (uncertainty due to ignorance). It was therefore necessary to use two different propagation methods. He demonstrated this in a simple example which he generalised, treating the variability uncertainty by probability theory and the lack-of-knowledge uncertainty by fuzzy theory. He cautioned, however, against the systematic use of probability theory, which may lead to unjustifiably and illegitimately precise answers. Mr Chojnacki's conclusions were that the importance of distinguishing variability and lack of knowledge increased as the problem became more and more complex in terms of the number of parameters or time steps, and that it was necessary to develop uncertainty propagation methodologies combining probability theory and fuzzy theory

  16. Uncertainty analysis of 137Cs and 90Sr activity in borehole water from a waste disposal site

    International Nuclear Information System (INIS)

    Dafauti, Sunita; Pulhani, Vandana; Datta, D.; Hegde, A.G.

    2005-01-01

    Uncertainty quantification (UQ) is the quantitative characterization and use of uncertainty in experimental applications. There are two distinct types of uncertainty: variability, which can be quantified in principle using classical probability theory, and lack of knowledge, which requires more than classical probability theory for its quantification. Fuzzy set theory was applied to quantify the second type of uncertainty associated with the measurement of activity due to 137 Cs and 90 Sr present in bore-well water samples from a waste disposal site. The upper and lower limits of concentration were computed, and it may be concluded from the analysis that the alpha-cut technique of fuzzy set theory is a good non-precise estimator of these types of bounds. (author)
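
    The alpha-cut propagation idea can be shown with triangular fuzzy numbers. The activity values below are purely illustrative, not the reported measurements: at each membership level alpha, the interval bounds of the inputs are combined to give upper and lower limits on the total activity.

        def alpha_cut(triangular, alpha):
            """Interval [lo, hi] of a triangular fuzzy number (a, b, c) at membership level alpha."""
            a, b, c = triangular
            return a + alpha * (b - a), c - alpha * (c - b)

        def total_activity_bounds(alpha, cs137, sr90):
            """Alpha-cut propagation of a monotone function: total activity = Cs-137 + Sr-90."""
            lo1, hi1 = alpha_cut(cs137, alpha)
            lo2, hi2 = alpha_cut(sr90, alpha)
            return lo1 + lo2, hi1 + hi2

        # Hypothetical fuzzy activities (Bq/L): (lower bound, most likely value, upper bound).
        cs137 = (0.8, 1.2, 1.7)
        sr90 = (0.3, 0.5, 0.9)

        for alpha in (0.0, 0.5, 1.0):
            lo, hi = total_activity_bounds(alpha, cs137, sr90)
            print(f"alpha = {alpha:.1f}: total activity in [{lo:.2f}, {hi:.2f}] Bq/L")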

  17. Stimulus uncertainty enhances long-term potentiation-like plasticity in human motor cortex.

    Science.gov (United States)

    Sale, Martin V; Nydam, Abbey S; Mattingley, Jason B

    2017-03-01

    Plasticity can be induced in human cortex using paired associative stimulation (PAS), which repeatedly and predictably pairs a peripheral electrical stimulus with transcranial magnetic stimulation (TMS) to the contralateral motor region. Many studies have reported small or inconsistent effects of PAS. Given that uncertain stimuli can promote learning, the predictable nature of the stimulation in conventional PAS paradigms might serve to attenuate plasticity induction. Here, we introduced stimulus uncertainty into the PAS paradigm to investigate if it can boost plasticity induction. Across two experimental sessions, participants (n = 28) received a modified PAS paradigm consisting of a random combination of 90 paired stimuli and 90 unpaired (TMS-only) stimuli. Prior to each of these stimuli, participants also received an auditory cue which either reliably predicted whether the upcoming stimulus was paired or unpaired (no uncertainty condition) or did not predict the upcoming stimulus (maximum uncertainty condition). Motor evoked potentials (MEPs) evoked from abductor pollicis brevis (APB) muscle quantified cortical excitability before and after PAS. MEP amplitude increased significantly 15 min following PAS in the maximum uncertainty condition. There was no reliable change in MEP amplitude in the no uncertainty condition, nor between post-PAS MEP amplitudes across the two conditions. These results suggest that stimulus uncertainty may provide a novel means to enhance plasticity induction with the PAS paradigm in human motor cortex. To provide further support to the notion that stimulus uncertainty and prediction error promote plasticity, future studies should further explore the time course of these changes, and investigate what aspects of stimulus uncertainty are critical in boosting plasticity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Uncertainty in estimating and mitigating industrial related GHG emissions

    International Nuclear Information System (INIS)

    El-Fadel, M.; Zeinati, M.; Ghaddar, N.; Mezher, T.

    2001-01-01

    Global climate change has been one of the challenging environmental concerns facing policy makers in the past decade. The characterization of the wide range of greenhouse gas emissions sources and sinks as well as their behavior in the atmosphere remains an on-going activity in many countries. Lebanon, being a signatory to the Framework Convention on Climate Change, is required to submit and regularly update a national inventory of greenhouse gas emissions sources and removals. Accordingly, an inventory of greenhouse gases from various sectors was conducted following the guidelines set by the United Nations Intergovernmental Panel on Climate Change (IPCC). The inventory indicated that the industrial sector contributes about 29% to the total greenhouse gas emissions divided between industrial processes and energy requirements at 12 and 17%, respectively. This paper describes major mitigation scenarios to reduce emissions from this sector based on associated technical, economic, environmental, and social characteristics. Economic ranking of these scenarios was conducted and uncertainty in emission factors used in the estimation process was emphasized. For this purpose, theoretical and experimental emission factors were used as alternatives to default factors recommended by the IPCC and the significance of resulting deviations in emission estimation is presented. (author)

  19. The state of the art of the impact of sampling uncertainty on measurement uncertainty

    Science.gov (United States)

    Leite, V. J.; Oliveira, E. C.

    2018-03-01

    The measurement uncertainty is a parameter that characterizes the reliability of a result and can be divided into two large groups: sampling and analytical variations. Analytical uncertainty arises from a controlled process performed in the laboratory. The same does not occur with the sampling uncertainty, which, because it faces several obstacles and there is no clarity on how to perform the procedures, has been neglected, although it is admittedly indispensable to the measurement process. This paper aims at describing the state of the art of sampling uncertainty and at assessing its relevance to measurement uncertainty.
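
    One common way to make the sampling contribution explicit is a balanced duplicate design with an ANOVA split of the variance into sampling and analytical components. The data below are simulated and the design is an assumption; the sketch only shows the arithmetic of estimating the two components and combining them.

        import numpy as np

        rng = np.random.default_rng(7)

        # Hypothetical duplicate design: 8 sampling targets, 2 samples each, 2 analyses per sample (mg/kg).
        true_target = rng.normal(50, 5, size=8)
        samples = true_target[:, None] + rng.normal(0, 2.0, size=(8, 2))          # sampling error
        analyses = samples[:, :, None] + rng.normal(0, 0.8, size=(8, 2, 2))       # analytical error

        # Classical ANOVA estimates of the analytical and sampling variance components.
        var_analysis = analyses.var(axis=2, ddof=1).mean()
        var_between_samples = analyses.mean(axis=2).var(axis=1, ddof=1).mean()
        var_sampling = max(var_between_samples - var_analysis / 2, 0.0)

        u_meas = np.sqrt(var_sampling + var_analysis)       # combined measurement uncertainty
        print(f"u_sampling ~ {np.sqrt(var_sampling):.2f}, u_analytical ~ {np.sqrt(var_analysis):.2f}, "
              f"combined ~ {u_meas:.2f} mg/kg")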

  20. The Effect of Uncertainty Management Program on Quality of Life Among Vietnamese Women at 3 Weeks Postmastectomy.

    Science.gov (United States)

    Ha, Xuan Thi Nhu; Thanasilp, Sureeporn; Thato, Ratsiri

    2018-05-10

    In Vietnam, breast cancer is a top contributor to cancer-related deaths in women. Evidence shows that, after mastectomy, women in Vietnam have a lower quality of life than women in other countries. In addition, high uncertainty is a predictor of low quality of life postmastectomy. Therefore, if nurses can manage uncertainty, quality of life postmastectomy can improve. This study examined the effect of the Uncertainty Management Program (UMP) on quality of life at 3 weeks postmastectomy in Vietnamese women. This research was a quasi-experimental study using a "posttest only with control group" design. There were 115 subjects assigned to either the experimental group (n = 57), who participated in the UMP and routine care, or the control group (n = 58), who received only routine care. Participants were assessed 2 times postmastectomy using the modified Quality of Life Index Scale-Vietnamese version. The experimental group exhibited low uncertainty before discharge and significantly higher quality of life than the control group at 1 and 3 weeks postmastectomy, respectively (P < .05). Women's physical well-being, psychological well-being, body image concerns, and social concerns were significantly increased with the UMP. The UMP was considered a promising program that might benefit the quality of life (QoL) of women with breast cancer at 3 weeks postmastectomy. The UMP appears feasible to apply to women with breast cancer to improve their QoL postmastectomy in various settings. Nurses can flexibly instruct women in their holistic care both in the hospital and at home.

  1. Quantifying measurement uncertainty and spatial variability in the context of model evaluation

    Science.gov (United States)

    Choukulkar, A.; Brewer, A.; Pichugina, Y. L.; Bonin, T.; Banta, R. M.; Sandberg, S.; Weickmann, A. M.; Djalalova, I.; McCaffrey, K.; Bianco, L.; Wilczak, J. M.; Newman, J. F.; Draxl, C.; Lundquist, J. K.; Wharton, S.; Olson, J.; Kenyon, J.; Marquis, M.

    2017-12-01

    In an effort to improve wind forecasts for the wind energy sector, the Department of Energy and NOAA funded the second Wind Forecast Improvement Project (WFIP2). As part of the WFIP2 field campaign, a large suite of in-situ and remote sensing instrumentation was deployed to the Columbia River Gorge in Oregon and Washington from October 2015 to March 2017. The array of instrumentation deployed included 915-MHz wind profiling radars, sodars, wind-profiling lidars, and scanning lidars. The role of these instruments was to provide wind measurements at high spatial and temporal resolution for model evaluation and improvement of model physics. To properly determine model errors, the uncertainties in instrument-model comparisons need to be quantified accurately. These uncertainties arise from several factors such as measurement uncertainty, spatial variability, and interpolation of model output to instrument locations, to name a few. In this presentation, we will introduce a formalism to quantify measurement uncertainty and spatial variability. The accuracy of this formalism will be tested using existing datasets such as the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign. Finally, the uncertainties in wind measurement and the spatial variability estimates from the WFIP2 field campaign will be discussed to understand the challenges involved in model evaluation.

  2. New challenges on uncertainty propagation assessment of flood risk analysis

    Science.gov (United States)

    Martins, Luciano; Aroca-Jiménez, Estefanía; Bodoque, José M.; Díez-Herrero, Andrés

    2016-04-01

    Natural hazards, such as floods, cause considerable damage to human life and to material and functional assets every year around the world. Risk assessment procedures have an associated set of uncertainties, mainly of two types: natural, derived from the stochastic character inherent in flood process dynamics; and epistemic, associated with lack of knowledge or with inadequate procedures employed in the study of these processes. There is abundant scientific and technical literature on uncertainty estimation in each step of flood risk analysis (e.g. rainfall estimates, hydraulic modelling variables), but very little experience with the propagation of uncertainties along the whole flood risk assessment. Therefore, epistemic uncertainties are the main goal of this work; in particular, understanding the extent of the propagation of uncertainties throughout the process, starting with inundability studies and ending with risk analysis, and how much a proper flood risk analysis may vary as a result. Methodologies such as Polynomial Chaos Theory (PCT), the Method of Moments or Monte Carlo are used to evaluate different sources of error, such as data records (precipitation gauges, flow gauges...), hydrologic and hydraulic modelling (inundation estimation), and socio-demographic data (damage estimation), in order to evaluate the uncertainty propagation (UP) considered in design flood risk estimation, both in numerical and cartographic expression. In order to consider the total uncertainty and understand which factors contribute most to the final uncertainty, we used the method of Polynomial Chaos Theory (PCT). It represents an interesting way to handle the inclusion of uncertainty in the modelling and simulation process. PCT allows for the development of a probabilistic model of the system in a deterministic setting. This is done by using random variables and polynomials to handle the effects of uncertainty. Method application results have a better robustness than traditional analysis

  3. Sensitivity and uncertainty analyses of the HCLL mock-up experiment

    International Nuclear Information System (INIS)

    Leichtle, D.; Fischer, U.; Kodeli, I.; Perel, R.L.; Klix, A.; Batistoni, P.; Villari, R.

    2010-01-01

    Within the European Fusion Technology Programme, dedicated computational methods, tools and data have been developed and validated for sensitivity and uncertainty analyses of fusion neutronics experiments. The present paper is devoted to this kind of analysis of the recent neutronics experiment on a mock-up of the Helium-Cooled Lithium Lead Test Blanket Module for ITER at the Frascati neutron generator. The analyses comprise both probabilistic and deterministic methodologies for the assessment of uncertainties of nuclear responses due to nuclear data uncertainties and their sensitivities to the involved reaction cross-section data. We have used the MCNP and MCSEN codes in the Monte Carlo approach and DORT and SUSD3D in the deterministic approach for transport and sensitivity calculations, respectively. In both cases, the JEFF-3.1 and FENDL-2.1 libraries for the transport data and mainly the ENDF/B-VI.8 and SCALE6.0 libraries for the relevant covariance data have been used. With a few exceptions, the two different methodological approaches were shown to provide consistent results. A total nuclear data related uncertainty in the range of 1-2% (1σ confidence level) was assessed for the tritium production in the HCLL mock-up experiment.
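
    The deterministic route from sensitivities and covariance data to a response uncertainty is the familiar "sandwich rule". The sketch below uses invented relative sensitivities and an invented relative covariance matrix (not the actual JEFF/FENDL/SCALE data) purely to show the arithmetic that produces a percent-level uncertainty on a response such as tritium production.

        import numpy as np

        # Hypothetical relative sensitivities of tritium production to three cross-sections
        # (dR/R per dsigma/sigma), e.g. a breeder reaction, a multiplier reaction, and scattering.
        s = np.array([0.85, 0.10, 0.30])

        # Hypothetical relative covariance matrix of the three cross-sections (fractional 1-sigma).
        std = np.array([0.015, 0.05, 0.04])
        corr = np.array([[1.0, 0.2, 0.0],
                         [0.2, 1.0, 0.1],
                         [0.0, 0.1, 1.0]])
        cov = np.outer(std, std) * corr

        # "Sandwich rule": relative variance of the response = s^T C s.
        rel_var = s @ cov @ s
        print(f"nuclear-data-induced uncertainty on the response: {100 * np.sqrt(rel_var):.2f}% (1 sigma)")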

  4. Uncertainty quantification in capacitive RF MEMS switches

    Science.gov (United States)

    Pax, Benjamin J.

    Development of radio frequency micro electrical-mechanical systems (RF MEMS) has led to novel approaches to implement electrical circuitry. The introduction of capacitive MEMS switches, in particular, has shown promise in low-loss, low-power devices. However, the promise of MEMS switches has not yet been completely realized. RF-MEMS switches are known to fail after only a few months of operation, and nominally similar designs show wide variability in lifetime. Modeling switch operation using nominal or as-designed parameters cannot predict the statistical spread in the number of cycles to failure, and probabilistic methods are necessary. A Bayesian framework for calibration, validation and prediction offers an integrated approach to quantifying the uncertainty in predictions of MEMS switch performance. The objective of this thesis is to use the Bayesian framework to predict the creep-related deflection of the PRISM RF-MEMS switch over several thousand hours of operation. The PRISM switch used in this thesis is the focus of research at Purdue's PRISM center, and is a capacitive contacting RF-MEMS switch. It employs a fixed-fixed nickel membrane which is electrostatically actuated by applying voltage between the membrane and a pull-down electrode. Creep plays a central role in the reliability of this switch. The focus of this thesis is on the creep model, which is calibrated against experimental data measured for a frog-leg varactor fabricated and characterized at Purdue University. Creep plasticity is modeled using plate element theory with electrostatic forces being generated using either parallel plate approximations where appropriate, or solving for the full 3D potential field. For the latter, structure-electrostatics interaction is determined through immersed boundary method. A probabilistic framework using generalized polynomial chaos (gPC) is used to create surrogate models to mitigate the costly full physics simulations, and Bayesian calibration and forward

  5. Uncertainties in hydrogen combustion

    International Nuclear Information System (INIS)

    Stamps, D.W.; Wong, C.C.; Nelson, L.S.

    1988-01-01

    Three important areas of hydrogen combustion with uncertainties are identified: high-temperature combustion, flame acceleration and deflagration-to-detonation transition, and aerosol resuspension during hydrogen combustion. The uncertainties associated with high-temperature combustion may affect at least three different accident scenarios: the in-cavity oxidation of combustible gases produced by core-concrete interactions, the direct containment heating hydrogen problem, and the possibility of local detonations. How these uncertainties may affect the sequence of various accident scenarios is discussed and recommendations are made to reduce these uncertainties. 40 references

  6. Uncertainty in artificial intelligence

    CERN Document Server

    Kanal, LN

    1986-01-01

    How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy. Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.

  7. Statistically based uncertainty analysis for ranking of component importance in the thermal-hydraulic safety analysis of the Advanced Neutron Source Reactor

    International Nuclear Information System (INIS)

    Wilson, G.E.

    1992-01-01

    The Analytic Hierarchy Process (AHP) has been used to help determine the importance of components and phenomena in thermal-hydraulic safety analyses of nuclear reactors. The AHP results are based, in part, on expert opinion. Therefore, it is prudent to evaluate the uncertainty of the AHP ranks of importance. Prior applications have addressed uncertainty with experimental data comparisons and bounding sensitivity calculations. These methods work well when a sufficient experimental data base exists to justify the comparisons. However, in the case of limited or no experimental data, the size of the uncertainty is normally made conservatively large. Accordingly, the author has taken another approach, that of performing a statistically based uncertainty analysis. The new work is based on prior evaluations of the importance of components and phenomena in the thermal-hydraulic safety analysis of the Advanced Neutron Source Reactor (ANSR), a new facility now in the design phase. The uncertainty during large-break loss of coolant and decay heat removal scenarios is estimated by assigning a probability distribution function (pdf) to the potential error in the initial expert estimates of pair-wise importance between the components. Using a Monte Carlo sampling technique, the error pdfs are propagated through the AHP software solutions to determine a pdf of uncertainty in the system-wide importance of each component. To enhance the generality of the results, a study of one other problem having a different number of elements is reported, as are the effects of a larger assumed pdf error in the expert ranks. Validation of the Monte Carlo sample size and repeatability are also documented
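
    The statistical approach described, assigning an error distribution to the expert pairwise judgements and propagating it by Monte Carlo to the AHP importance weights, can be sketched as follows. The 3x3 comparison matrix and the lognormal judgement error are assumptions, and the principal-eigenvector weighting stands in for the AHP software used in the study.

        import numpy as np

        rng = np.random.default_rng(8)

        def ahp_weights(matrix):
            """Priority weights from the principal eigenvector of a pairwise-comparison matrix."""
            vals, vecs = np.linalg.eig(matrix)
            w = np.abs(vecs[:, np.argmax(vals.real)].real)
            return w / w.sum()

        # Hypothetical expert pairwise importances of three components (Saaty 1-9 scale).
        A = np.array([[1.0, 3.0, 5.0],
                      [1 / 3.0, 1.0, 2.0],
                      [1 / 5.0, 1 / 2.0, 1.0]])

        # Monte Carlo: perturb the upper-triangle judgements, keep reciprocal symmetry,
        # and propagate to a distribution of importance weights.
        weights = []
        for _ in range(5_000):
            P = A.copy()
            for i in range(3):
                for j in range(i + 1, 3):
                    P[i, j] = A[i, j] * rng.lognormal(0.0, 0.2)   # assumed judgement error
                    P[j, i] = 1.0 / P[i, j]
            weights.append(ahp_weights(P))

        weights = np.array(weights)
        for k, (m, s) in enumerate(zip(weights.mean(axis=0), weights.std(axis=0))):
            print(f"component {k + 1}: importance {m:.3f} +/- {s:.3f}")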

  8. Propagating Water Quality Analysis Uncertainty Into Resource Management Decisions Through Probabilistic Modeling

    Science.gov (United States)

    Gronewold, A. D.; Wolpert, R. L.; Reckhow, K. H.

    2007-12-01

    Most probable number (MPN) and colony-forming-unit (CFU) are two estimates of fecal coliform bacteria concentration commonly used as measures of water quality in United States shellfish harvesting waters. The MPN is the maximum likelihood estimate (or MLE) of the true fecal coliform concentration based on counts of non-sterile tubes in serial dilution of a sample aliquot, indicating bacterial metabolic activity. The CFU is the MLE of the true fecal coliform concentration based on the number of bacteria colonies emerging on a growth plate after inoculation from a sample aliquot. Each estimating procedure has intrinsic variability and is subject to additional uncertainty arising from minor variations in experimental protocol. Several versions of each procedure (using different sized aliquots or different numbers of tubes, for example) are in common use, each with its own levels of probabilistic and experimental error and uncertainty. It has been observed empirically that the MPN procedure is more variable than the CFU procedure, and that MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the observed variability in, and discrepancy between, MPN and CFU measurements. We then explore how this variability and uncertainty might propagate into shellfish harvesting area management decisions through a two-phased modeling strategy. First, we apply our probabilistic model in a simulation-based analysis of future water quality standard violation frequencies under alternative land use scenarios, such as those evaluated under guidelines of the total maximum daily load (TMDL) program. Second, we apply our model to water quality data from shellfish harvesting areas which at present are closed (either conditionally or permanently) to shellfishing, to determine if alternative laboratory analysis procedures might have led to different
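
    The statement that the MPN is the maximum likelihood estimate of the true concentration can be made concrete with a small sketch: each tube is positive with probability 1 - exp(-c*v), and the concentration maximizing the binomial likelihood over the dilution series is the MPN. The tube counts and inoculum volumes below are hypothetical.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Hypothetical 3-dilution, 5-tube MPN setup: inoculum volumes (mL) and positive-tube counts.
        volumes = np.array([10.0, 1.0, 0.1])
        n_tubes = np.array([5, 5, 5])
        positives = np.array([5, 3, 1])

        def neg_log_likelihood(log_conc):
            """Binomial likelihood with P(tube positive) = 1 - exp(-conc * volume)."""
            conc = np.exp(log_conc)
            p = np.clip(1.0 - np.exp(-conc * volumes), 1e-12, 1 - 1e-12)
            return -np.sum(positives * np.log(p) + (n_tubes - positives) * np.log(1.0 - p))

        res = minimize_scalar(neg_log_likelihood, bounds=(np.log(1e-3), np.log(1e3)), method="bounded")
        mpn = np.exp(res.x)
        print(f"MPN (maximum likelihood estimate): {mpn:.2f} organisms per mL")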

  9. Criteria of the validation of experimental and evaluated covariance data

    International Nuclear Information System (INIS)

    Badikov, S.

    2008-01-01

    The criteria for the validation of experimental and evaluated covariance data are reviewed. In particular, the following are described: a) the criterion of positive definiteness for covariance matrices, b) the relationship between the 'integral' experimental and estimated uncertainties, c) the validity of the statistical invariants, and d) the restrictions imposed on correlations between experimental errors. The application of these criteria in nuclear data evaluation was considered and four particular points were examined. First, preserving the positive definiteness of covariance matrices under arbitrary transformations of a random vector was considered, and the properties of covariance matrices in operations widely used in neutron and reactor physics (splitting and collapsing energy groups, averaging physical values over energy groups, estimating parameters on the basis of measurements by means of the generalized least squares method) were studied. Secondly, an algorithm for the comparison of experimental and estimated 'integral' uncertainties was developed; the square root of the determinant of a covariance matrix is recommended for use in nuclear data evaluation as a measure of 'integral' uncertainty for vectors of experimental and estimated values. Thirdly, a set of statistical invariants (values which are preserved in statistical processing) was presented. And fourthly, an inequality is given that signals correlations between experimental errors that lead to unphysical values. An application is given concerning the cross-section of the (n,t) reaction on Li 6 with a neutron incident energy comprised between 1 and 100 keV

  10. Uncertainty in projected climate change arising from uncertain fossil-fuel emission factors

    Science.gov (United States)

    Quilcaille, Y.; Gasser, T.; Ciais, P.; Lecocq, F.; Janssens-Maenhout, G.; Mohr, S.

    2018-04-01

    Emission inventories are widely used by the climate community, but their uncertainties are rarely accounted for. In this study, we evaluate the uncertainty in projected climate change induced by uncertainties in fossil-fuel emissions, accounting for non-CO2 species co-emitted with the combustion of fossil fuels and their use in industrial processes. Using consistent historical reconstructions and three contrasted future projections of fossil-fuel extraction from Mohr et al., we calculate CO2 emissions and their uncertainties stemming from estimates of fuel carbon content, net calorific value and oxidation fraction. Our historical reconstructions of fossil-fuel CO2 emissions are consistent with other inventories in terms of average and range. The uncertainties sum up to a ±15% relative uncertainty in cumulative CO2 emissions by 2300. Uncertainties in the emissions of non-CO2 species associated with the use of fossil fuels are estimated using co-emission ratios varying with time. Using these inputs, we use the compact Earth system model OSCAR v2.2 and a Monte Carlo setup, in order to attribute the uncertainty in projected global surface temperature change (ΔT) to three sources of uncertainty, namely the Earth system's response, fossil-fuel CO2 emissions and non-CO2 co-emissions. Under the three future fuel extraction scenarios, we simulate the median ΔT to be 1.9, 2.7 or 4.0 °C in 2300, with an associated 90% confidence interval of about 65%, 52% and 42%. We show that virtually all of the total uncertainty is attributable to the uncertainty in the future Earth system's response to the anthropogenic perturbation. We conclude that the uncertainty in emission estimates can be neglected for global temperature projections in the face of the large uncertainty in the Earth system response to the forcing of emissions. We show that this result does not hold for all variables of the climate system, such as the atmospheric partial pressure of CO2 and the

  11. An analysis of combined standard uncertainty for radiochemical measurements of environmental samples

    International Nuclear Information System (INIS)

    Berne, A.

    1996-01-01

    It is anticipated that future data acquisitions intended for use in radiological risk assessments will require the incorporation of uncertainty analysis. Often, only one aliquot of the sample is taken and a single determination is made. Under these circumstances, the total uncertainty is calculated using the 'propagation of errors' approach. However, there is no agreement in the radioanalytical community as to the exact equations to use. The Quality Assurance/Metrology Division of the Environmental Measurements Laboratory has developed a systematic process to compute uncertainties in constituent components of the analytical procedure, as well as the combined standard uncertainty (CSU). The equations for computation are presented here, with examples of their use. They have also been incorporated into a code for use in the spreadsheet application QuattroPro™. Using the spreadsheet with appropriate inputs permits an analysis of the variations in the CSU as a function of several different variables. The relative importance of the 'counting uncertainty' can also be ascertained
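
    A spreadsheet-style illustration of a combined standard uncertainty computed by "propagation of errors" for a single-aliquot determination: the measurement equation, input values and standard uncertainties below are invented, and the partial derivatives are taken numerically, so the sketch shows only the arithmetic, not the laboratory's actual equations.

        import numpy as np

        def activity(gross, background, efficiency, chem_yield, live_time, mass):
            """Hypothetical radiochemical result: specific activity in Bq/kg."""
            net_rate = (gross - background) / live_time
            return net_rate / (efficiency * chem_yield * mass)

        # Single-aliquot input estimates and their standard uncertainties (illustrative values).
        x = dict(gross=5400.0, background=600.0, efficiency=0.32, chem_yield=0.85,
                 live_time=60_000.0, mass=0.5)
        u = dict(gross=np.sqrt(5400.0), background=np.sqrt(600.0), efficiency=0.01,
                 chem_yield=0.03, live_time=1.0, mass=0.001)

        # Combined standard uncertainty: u_c^2 = sum_i (dA/dx_i)^2 u_i^2, uncorrelated inputs assumed.
        a0 = activity(**x)
        var = 0.0
        for name, ui in u.items():
            xp = dict(x)
            xp[name] += 1e-6 * max(abs(x[name]), 1.0)       # small numerical step
            dadx = (activity(**xp) - a0) / (xp[name] - x[name])
            var += (dadx * ui) ** 2

        print(f"activity = {a0:.4f} Bq/kg, combined standard uncertainty = {np.sqrt(var):.4f} Bq/kg")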

  12. NIS method for uncertainty estimation of airborne sound insulation measurement in field

    Directory of Open Access Journals (Sweden)

    El-Basheer Tarek M.

    2017-01-01

    In structures, airborne sound insulation is used to characterize the acoustic behaviour of barriers between rooms. However, the assessment of the sound insulation index is sometimes difficult or even questionable, both in field and laboratory measurements, notwithstanding the unified measurement methodologies specified in the ISO 140 series of standards. There are issues with the reproducibility and repeatability of the measurement results. Some difficulties may be caused by non-diffuse acoustic fields, non-uniform reverberation time, or errors in the reverberation time measurements. Some minor issues are additionally posed by flanking transmission. In this paper, the uncertainties of the above-mentioned measurement components and their impact on the combined uncertainty in 1/3-octave frequency bands are investigated. The total measurement uncertainty model combines several different partial uncertainties, which are evaluated by the method of type A or type B. In addition, the determination of the sound reduction index according to ISO 140-4 has been performed.

  13. A real-time uncertainty-knowledge and training database

    International Nuclear Information System (INIS)

    Joergensen, H.E.; Santabarbara, J.M.; Mikkelsen, T.

    1993-01-01

    The paper describes an experimentally obtained database for training of uncertainties and data interpretation in connection with local scale accidental atmospheric dispersion scenarios. Based on remote measurement techniques using lidars, sequential 'snapshots', or movies, of the fluctuating concentration profiles during several full scale diffusion experiments have been obtained. The aim has been to establish data sets suitable for comparison and training with the real-time atmospheric dispersion models in decision support systems, such as the RODOS system under development within the CEC. (author)

  14. A real-time uncertainty-knowledge and training database

    DEFF Research Database (Denmark)

    Ejsing Jørgensen, Hans; Santabarbara, J.M.; Mikkelsen, T.

    1993-01-01

    The paper describes an experimentally obtained database for training of uncertainties and data interpretation in connection with local scale accidental atmospheric dispersion scenarios. Based on remote measurement techniques using lidars, sequential 'snapshots', or movies, of the fluctuating concentration profiles during several full scale diffusion experiments have been obtained. The aim has been to establish data sets suitable for comparison and training with the real-time atmospheric dispersion models in decision support systems, such as the RODOS system under development within the CEC.

  15. The effect of environmental uncertainty and business strategy on management accounting information systems with total quality management as an intervening variable

    Directory of Open Access Journals (Sweden)

    uum helmina chaerunisak

    2017-10-01

    This study aims to test the influence of environmental uncertainty and business strategy on the management accounting information system, with total quality management (TQM) as an intervening variable. The research comprises two exogenous variables, one endogenous variable, and one intervening variable. The exogenous variables are environmental uncertainty and business strategy, measured by indicator items in the questionnaire. The endogenous variable is the management accounting information system. The intervening variable is TQM. This study used purposive sampling; a sample of 60 respondents was drawn from motor vehicle trade service companies in DIY. Research data were analysed using the Structural Equation Model - Partial Least Square (PLS-SEM) approach with WARP-PLS. The analysis indicates that, for the first hypothesis, environmental uncertainty affects TQM. For the second hypothesis, business strategy does not affect TQM. For the third hypothesis, TQM influences the management accounting information system. For the fourth hypothesis, environmental uncertainty influences the management accounting information system. For the fifth hypothesis, business strategy does not affect the management accounting information system. For the sixth hypothesis, environmental uncertainty has a significant indirect effect on the management accounting information system, partially mediated by Total Quality Management (TQM). For the seventh hypothesis, business strategy does not significantly influence the management accounting information system, and Total Quality Management (TQM) does not act as a mediator. Keywords: environmental uncertainty, business strategy, Total Quality Management, management accounting information system

  16. Ecosystem Services Mapping Uncertainty Assessment: A Case Study in the Fitzroy Basin Mining Region

    Directory of Open Access Journals (Sweden)

    Zhenyu Wang

    2018-01-01

    Ecosystem services mapping is becoming increasingly popular through the use of various readily available mapping tools; however, uncertainties in assessment outputs are commonly ignored. Uncertainties from different sources have the potential to lower the accuracy of mapping outputs and reduce their reliability for decision-making. Using a case study in an Australian mining region, this paper assessed the impact of uncertainties on the modelling of the hydrological ecosystem service, water provision. Three types of uncertainty were modelled using multiple uncertainty scenarios: (1) spatial data sources; (2) modelling scales (temporal and spatial); and (3) parameterization and model selection. We found that the mapping scales can induce significant changes to the spatial pattern of outputs and annual totals of water provision. In addition, differences in parameterization using differing sources from the literature also led to obvious differences in base flow. However, the impacts of the uncertainties associated with differences in spatial data sources were not as great. The results of this study demonstrate the importance of uncertainty assessment and highlight that any conclusions drawn from ecosystem services mapping, such as the impacts of mining, are likely to also be a property of the uncertainty in ecosystem services mapping methods.

  17. Dynamic interactions between hydrogeological and exposure parameters in daily dose prediction under uncertainty and temporal variability

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Vikas, E-mail: vikas.kumar@urv.cat [Department of Chemical Engineering, Rovira i Virgili University, Tarragona 43007 (Spain); Barros, Felipe P.J. de [Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles 90089, CA (United States); Schuhmacher, Marta [Department of Chemical Engineering, Rovira i Virgili University, Tarragona 43007 (Spain); Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier [Hydrogeology Group, Department of Geotechnical Engineering and Geosciences, University Politècnica de Catalunya-BarcelonaTech, Barcelona 08034 (Spain)

    2013-12-15

    Highlights: • Dynamic parametric interaction in daily dose prediction under uncertainty. • Importance of temporal dynamics associated with the dose. • Different dose experienced by different population cohorts as a function of time. • Relevance of uncertainty reduction in the input parameters shows temporal dynamism. -- Abstract: We study the time-dependent interaction between hydrogeological and exposure parameters in daily dose predictions due to exposure of humans to groundwater contamination. Dose predictions are treated stochastically to account for an incomplete hydrogeological and geochemical field characterization, and an incomplete knowledge of the physiological response. We used a nested Monte Carlo framework to account for uncertainty and variability arising from both hydrogeological and exposure variables. Our interest is in the temporal dynamics of the total dose and their effects on parametric uncertainty reduction. We illustrate the approach with an HCH (lindane) pollution problem at the Ebro River, Spain. The temporal distribution of lindane in the river water can have a strong impact on the evaluation of risk. The total dose displays a non-linear effect on different population cohorts, indicating the need to account for population variability. We then expand the concept of Comparative Information Yield Curves developed earlier (see de Barros et al. [29]) to evaluate parametric uncertainty reduction under temporally variable exposure dose. Results show that the importance of parametric uncertainty reduction varies according to the temporal dynamics of the lindane plume. The approach could be used for any chemical to aid decision makers to better allocate resources towards reducing uncertainty.
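
    The nested Monte Carlo framework, with an outer loop over uncertain (epistemic) quantities and an inner loop over population variability, can be sketched as follows. The dose equation, distributions and cohort summary below are illustrative assumptions, not the Ebro River case or the authors' exposure model.

        import numpy as np

        rng = np.random.default_rng(9)

        # Nested Monte Carlo: the outer loop samples uncertain (epistemic) quantities such as the
        # water concentration; the inner loop samples exposure variability across a population cohort.
        n_outer, n_inner = 200, 1_000
        cohort_d95 = []
        for _ in range(n_outer):
            conc = rng.lognormal(mean=np.log(2.0), sigma=0.5)                   # ug/L, uncertain
            intake = rng.lognormal(mean=np.log(1.5), sigma=0.4, size=n_inner)   # L/day, variable
            body_weight = np.clip(rng.normal(70.0, 12.0, size=n_inner), 40, 120)  # kg, variable
            daily_dose = conc * intake / body_weight                            # ug/kg/day
            cohort_d95.append(np.percentile(daily_dose, 95))                    # variability summary

        cohort_d95 = np.array(cohort_d95)
        print(f"95th-percentile daily dose: median {np.median(cohort_d95):.3f} ug/kg/day, "
              f"90% uncertainty interval [{np.percentile(cohort_d95, 5):.3f}, "
              f"{np.percentile(cohort_d95, 95):.3f}]")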

  18. Expanding Uncertainty Principle to Certainty-Uncertainty Principles with Neutrosophy and Quad-stage Method

    Directory of Open Access Journals (Sweden)

    Fu Yuhua

    2015-03-01

    The most famous contribution of Heisenberg is the uncertainty principle. But the original uncertainty principle is improper. Considering all the possible situations (including the case that people can create laws) and applying Neutrosophy and the Quad-stage Method, this paper presents "certainty-uncertainty principles" in general form and in variable-dimension fractal form. According to the classification of Neutrosophy, "certainty-uncertainty principles" can be divided into three principles under different conditions: the "certainty principle", namely that a particle's position and momentum can be known simultaneously; the "uncertainty principle", namely that a particle's position and momentum cannot be known simultaneously; and the neutral (fuzzy) "indeterminacy principle", namely that whether or not a particle's position and momentum can be known simultaneously is undetermined. The special cases of "certainty-uncertainty principles" include the original uncertainty principle and the Ozawa inequality. In addition, in accordance with the original uncertainty principle, discussing a high-speed particle's speed and track with Newtonian mechanics is unreasonable; but according to "certainty-uncertainty principles", Newtonian mechanics can be used to discuss the problem of gravitational deflection of a photon orbit around the Sun (it gives the same deflection angle as general relativity). Finally, because in physics principles, laws and the like that take no account of the principle (law) of conservation of energy may be invalid, "certainty-uncertainty principles" should be restricted (or constrained) by the principle (law) of conservation of energy, so that they satisfy the principle (law) of conservation of energy.

  19. Uncertainty enabled Sensor Observation Services

    Science.gov (United States)

    Cornford, Dan; Williams, Matthew; Bastin, Lucy

    2010-05-01

    Almost all observations of reality are contaminated with errors, which introduce uncertainties into the actual observation result. Such uncertainty is often held to be a data quality issue, and quantification of this uncertainty is essential for the principled exploitation of the observations. Many existing systems treat data quality in a relatively ad-hoc manner, however if the observation uncertainty is a reliable estimate of the error on the observation with respect to reality then knowledge of this uncertainty enables optimal exploitation of the observations in further processes, or decision making. We would argue that the most natural formalism for expressing uncertainty is Bayesian probability theory. In this work we show how the Open Geospatial Consortium Sensor Observation Service can be implemented to enable the support of explicit uncertainty about observations. We show how the UncertML candidate standard is used to provide a rich and flexible representation of uncertainty in this context. We illustrate this on a data set of user contributed weather data where the INTAMAP interpolation Web Processing Service is used to help estimate the uncertainty on the observations of unknown quality, using observations with known uncertainty properties. We then go on to discuss the implications of uncertainty for a range of existing Open Geospatial Consortium standards including SWE common and Observations and Measurements. We discuss the difficult decisions in the design of the UncertML schema and its relation and usage within existing standards and show various options. We conclude with some indications of the likely future directions for UncertML in the context of Open Geospatial Consortium services.

  20. Monte Carlo simulation of the nTOF Total Absorption Calorimeter

    International Nuclear Information System (INIS)

    Guerrero, C.; Cano-Ott, D.; Mendoza, E.; Taín, J.L.; Algora, A.; Berthoumieux, E.; Colonna, N.; Domingo-Pardo, C.; González-Romero, E.; Heil, M.; Jordán, D.; Käppeler, F.; Lampoudis, C.; Martínez, T.; Massimi, C.; Plag, R.

    2012-01-01

    The n_TOF Total Absorption Calorimeter (TAC) is a 4π BaF2 segmented detector used at CERN for measuring neutron capture cross-sections of importance for the design of advanced nuclear reactors. This work presents the simulation code that has been developed in GEANT4 for the accurate determination of the detection efficiency of the TAC for neutron capture events. The code allows the efficiency of the TAC to be calculated for every neutron capture state, as a function of energy, crystal multiplicity, and counting rate. The code includes all instrumental effects such as the single crystal detection threshold and energy resolution, the finite size of the coincidence time window, and signal pile-up. The results from the simulation have been validated with experimental data for a large set of electromagnetic de-excitation patterns: β-decay of well known calibration sources, neutron capture reactions in light nuclei with well known level schemes like natTi, reference samples used in (n,γ) measurements like 197Au and experimental data from an actinide sample like 240Pu. The systematic uncertainty in the determination of the detection efficiency has been estimated for all the cases. As a representative example, the accuracy reached for the case of 197Au(n,γ) ranges between 0.5% and 2%, depending on the experimental and analysis conditions. Such a value matches the high accuracy required for the nuclear cross-section data needed in advanced reactor design.

  1. Assessing scenario and parametric uncertainties in risk analysis: a model uncertainty audit

    International Nuclear Information System (INIS)

    Tarantola, S.; Saltelli, A.; Draper, D.

    1999-01-01

    In the present study a model audit is carried out on a computational model used for predicting maximum radiological doses to humans in the field of nuclear waste disposal. Global uncertainty and sensitivity analyses are employed to assess output uncertainty and to quantify the contribution of parametric and scenario uncertainties to the model output. These tools are of fundamental importance for risk analysis and decision-making purposes.

  2. Type-A Worst-Case Uncertainty for Gaussian noise instruments

    International Nuclear Information System (INIS)

    Arpaia, P.; Baccigalupi, C.; Martino, M.

    2015-01-01

    An analytical type-A approach is proposed for predicting the Worst-Case Uncertainty of a measurement system. In a set of independent observations of the same measurand, modelled as independent- and identically-distributed random variables, the upcoming extreme values (e.g. peaks) can be forecast by only characterizing the measurement system noise level, assumed to be white and Gaussian. Simulation and experimental results are presented to validate the model for a case study on the worst-case repeatability of a pulsed power supply for the klystron modulators of the Compact LInear Collider at CERN. The experimental validation highlights satisfying results for an acquisition system repeatable in the order of ±25 ppm over a bandwidth of 5 MHz.
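
    A minimal numerical sketch of the underlying idea (illustrative values only, not the paper's model of the CLIC acquisition system): for white Gaussian noise of known standard deviation, the worst case over N repeated observations follows the distribution of the maximum of N i.i.d. normals, so a coverage bound can be written in closed form and checked by simulation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

sigma = 25e-6      # assumed noise standard deviation (e.g. 25 ppm repeatability)
N = 10_000         # observations in one measurement session
p = 0.95           # required coverage for the worst case

# P(max of N <= z*sigma) = Phi(z)**N = p  =>  z = Phi^-1(p**(1/N))
z = norm.ppf(p ** (1.0 / N))
worst_case = z * sigma

# Monte Carlo check: empirical 95th percentile of the per-session maxima
maxima = np.array([rng.normal(0.0, sigma, N).max() for _ in range(2000)])

print(f"analytical 95% worst-case bound: {worst_case * 1e6:.1f} ppm")
print(f"simulated 95th percentile:       {np.percentile(maxima, 95) * 1e6:.1f} ppm")
```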

  3. Type-A Worst-Case Uncertainty for Gaussian noise instruments

    CERN Document Server

    AUTHOR|(CDS)2087245; Arpaia, Pasquale; Martino, Michele

    2015-01-01

    An analytical type-A approach is proposed for predicting the Worst-Case Uncertainty of a measurement system. In a set of independent observations of the same measurand, modelled as independent- and identically-distributed random variables, the upcoming extreme values (e.g. peaks) can be forecast by only characterizing the measurement system noise level, assumed to be white and Gaussian. Simulation and experimental results are presented to validate the model for a case study on the worst-case repeatability of a pulsed power supply for the klystron modulators of the Compact LInear Collider at CERN. The experimental validation highlights satisfying results for an acquisition system repeatable in the order of ±25 ppm over a bandwidth of 5 MHz.

  4. Uncertainty Communication. Issues and good practice

    International Nuclear Information System (INIS)

    Kloprogge, P.; Van der Sluijs, J.; Wardekker, A.

    2007-12-01

    In 2003 the Netherlands Environmental Assessment Agency (MNP) published the RIVM/MNP Guidance for Uncertainty Assessment and Communication. The Guidance assists in dealing with uncertainty in environmental assessments. Dealing with uncertainty is essential because assessment results regarding complex environmental issues are of limited value if the uncertainties have not been taken into account adequately. A careful analysis of uncertainties in an environmental assessment is required, but even more important is the effective communication of these uncertainties in the presentation of assessment results. The Guidance yields rich and differentiated insights into uncertainty, but the relevance of this uncertainty information may vary across audiences and uses of assessment results. Therefore, the reporting of uncertainties is one of the six key issues that are addressed in the Guidance. In practice, users of the Guidance felt a need for more practical assistance in the reporting of uncertainty information. This report explores the issue of uncertainty communication in more detail, and contains more detailed guidance on the communication of uncertainty. In order to make this a 'stand-alone' document, several questions that are mentioned in the detailed Guidance have been repeated here. This document thus has some overlap with the detailed Guidance. Part 1 gives a general introduction to the issue of communicating uncertainty information. It offers guidelines for fine-tuning the communication to the intended audiences and context of a report, discusses how readers of a report tend to handle uncertainty information, and ends with a list of criteria that uncertainty communication needs to meet to increase its effectiveness. Part 2 helps writers to analyze the context in which communication takes place, and helps to map the audiences and their information needs. It further helps to reflect upon anticipated uses and possible impacts of the uncertainty information on the

  5. Measurement of the total cross section from elastic scattering in pp collisions at s=8 TeV with the ATLAS detector

    Directory of Open Access Journals (Sweden)

    M. Aaboud

    2016-10-01

    Full Text Available A measurement of the total pp cross section at the LHC at √s = 8 TeV is presented. An integrated luminosity of 500 μb−1 was accumulated in a special run with high-β⋆ beam optics to measure the differential elastic cross section as a function of the Mandelstam momentum transfer variable t. The measurement is performed with the ALFA sub-detector of ATLAS. Using a fit to the differential elastic cross section in the −t range from 0.014 GeV2 to 0.1 GeV2 to extrapolate t→0, the total cross section, σtot(pp→X), is measured via the optical theorem to be σtot(pp→X) = 96.07 ± 0.18 (stat.) ± 0.85 (exp.) ± 0.31 (extr.) mb, where the first error is statistical, the second accounts for all experimental systematic uncertainties and the last is related to uncertainties in the extrapolation t→0. In addition, the slope of the exponential function describing the elastic cross section at small t is determined to be B = 19.74 ± 0.05 (stat.) ± 0.23 (syst.) GeV−2.
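
    As a small illustration of how the three quoted uncertainty components can be combined into a single figure, assuming they are independent (the combination in quadrature is a standard choice, not necessarily the collaboration's preferred presentation):

```python
import math

sigma_tot = 96.07                                          # mb, central value quoted above
components = {"stat": 0.18, "exp": 0.85, "extr": 0.31}     # mb, quoted uncertainty components

# Independent components add in quadrature
total_unc = math.sqrt(sum(u**2 for u in components.values()))
print(f"sigma_tot = {sigma_tot:.2f} +/- {total_unc:.2f} mb (components added in quadrature)")
```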

  6. Reliability analysis under epistemic uncertainty

    International Nuclear Information System (INIS)

    Nannapaneni, Saideep; Mahadevan, Sankaran

    2016-01-01

    This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
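
    A much-simplified illustration of the separation between aleatory and epistemic uncertainty (a toy limit state and assumed distributions, not the authors' auxiliary-variable single-loop formulation): the epistemic uncertainty about a distribution parameter is sampled in an outer loop, the aleatory variability in an inner loop, giving a distribution of failure probabilities rather than a single value.

```python
import numpy as np

rng = np.random.default_rng(1)

n_epistemic = 200      # samples of the uncertain distribution parameter
n_aleatory = 20_000    # samples of the random variables per parameter realization

# Epistemic: the mean of the load S is only known to lie in an interval (assumed)
mu_S_samples = rng.uniform(90.0, 110.0, n_epistemic)

pf = np.empty(n_epistemic)
for i, mu_S in enumerate(mu_S_samples):
    R = rng.normal(150.0, 10.0, n_aleatory)   # resistance (aleatory)
    S = rng.normal(mu_S, 15.0, n_aleatory)    # load (aleatory, with epistemic mean)
    pf[i] = np.mean(R - S < 0.0)              # failure when the limit state g = R - S < 0

print(f"failure probability: mean {pf.mean():.2e}, "
      f"90% interval [{np.percentile(pf, 5):.2e}, {np.percentile(pf, 95):.2e}]")
```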

  7. Uncertainty quantification applied to the radiological characterization of radioactive waste.

    Science.gov (United States)

    Zaffora, B; Magistris, M; Saporta, G; Chevalier, J-P

    2017-09-01

    This paper describes the process adopted at the European Organization for Nuclear Research (CERN) to quantify uncertainties affecting the characterization of very-low-level radioactive waste. Radioactive waste is a by-product of the operation of high-energy particle accelerators. Radioactive waste must be characterized to ensure its safe disposal in final repositories. Characterizing radioactive waste means establishing the list of radionuclides together with their activities. The estimated activity levels are compared to the limits given by the national authority of the waste disposal. The quantification of the uncertainty affecting the concentration of the radionuclides is therefore essential to estimate the acceptability of the waste in the final repository but also to control the sorting, volume reduction and packaging phases of the characterization process. The characterization method consists of estimating the activity of produced radionuclides either by experimental methods or statistical approaches. The uncertainties are estimated using classical statistical methods and uncertainty propagation. A mixed multivariate random vector is built to generate random input parameters for the activity calculations. The random vector is a robust tool to account for the unknown radiological history of legacy waste. This analytical technique is also particularly useful to generate random chemical compositions of materials when the trace element concentrations are not available or cannot be measured. The methodology was validated using a waste population of legacy copper activated at CERN. The methodology introduced here represents a first approach for the uncertainty quantification (UQ) of the characterization process of waste produced at particle accelerators. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Evaluating prediction uncertainty

    International Nuclear Information System (INIS)

    McKay, M.D.

    1995-03-01

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented
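
    A rough sketch of the underlying idea (a simple binned estimator on a toy model rather than the report's replicated Latin hypercube construction): the importance of an input is measured by the fraction of the prediction variance explained by the conditional expectation of the output given that input.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x1, x2, x3):
    # toy model; x1 dominates, x3 is nearly irrelevant
    return 5.0 * x1 + np.sin(3.0 * x2) + 0.1 * x3

n = 100_000
X = rng.uniform(0.0, 1.0, size=(n, 3))
Y = model(X[:, 0], X[:, 1], X[:, 2])

def variance_ratio(xi, y, bins=50):
    """Var(E[Y | Xi]) / Var(Y), estimated by binning Xi into equal-probability bins."""
    edges = np.quantile(xi, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(xi, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

for i in range(3):
    print(f"input x{i+1}: variance ratio ~ {variance_ratio(X[:, i], Y):.3f}")
```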

  9. Experimental Test Plan DOE Tidal and River Reference Turbines

    Energy Technology Data Exchange (ETDEWEB)

    Neary, Vincent S [ORNL]; Hill, Craig [St. Anthony Falls Laboratory, 2 Third Avenue SE, Minneapolis, MN 55414]; Chamorro, Leonardo [St. Anthony Falls Laboratory, 2 Third Avenue SE, Minneapolis, MN 55414]; Gunawan, Budi [ORNL]

    2012-09-01

    Our aim is to provide details of the experimental test plan for scaled model studies in St. Anthony Falls Laboratory (SAFL) Main Channel at the University of Minnesota, including a review of study objectives, descriptions of the turbine models, the experimental set-up, instrumentation details, instrument measurement uncertainty, anticipated experimental test cases, post-processing methods, and data archiving for model developers.

  10. Uncertainty and sensitivity analysis of the nuclear fuel thermal behavior

    Energy Technology Data Exchange (ETDEWEB)

    Boulore, A., E-mail: antoine.boulore@cea.fr [Commissariat a l' Energie Atomique (CEA), DEN, Fuel Research Department, 13108 Saint-Paul-lez-Durance (France); Struzik, C. [Commissariat a l' Energie Atomique (CEA), DEN, Fuel Research Department, 13108 Saint-Paul-lez-Durance (France); Gaudier, F. [Commissariat a l' Energie Atomique (CEA), DEN, Systems and Structure Modeling Department, 91191 Gif-sur-Yvette (France)

    2012-12-15

    Highlights: ► A complete quantitative method for uncertainty propagation and sensitivity analysis is applied. ► The thermal conductivity of UO2 is modeled as a random variable. ► The first source of uncertainty is the linear heat rate. ► The second source of uncertainty is the thermal conductivity of the fuel. - Abstract: In the global framework of nuclear fuel behavior simulation, the response of the models describing the physical phenomena occurring during the irradiation in reactor is mainly conditioned by the confidence in the calculated temperature of the fuel. Amongst all parameters influencing the temperature calculation in our fuel rod simulation code (METEOR V2), several sources of uncertainty have been identified as being the most sensitive: thermal conductivity of UO2, radial distribution of power in the fuel pellet, local linear heat rate in the fuel rod, geometry of the pellet and thermal transfer in the gap. Expert judgment and inverse methods have been used to model the uncertainty of these parameters using theoretical distributions and correlation matrices. Propagation of these uncertainties in the METEOR V2 code using the URANIE framework and a Monte-Carlo technique has been performed in different experimental irradiations of UO2 fuel. At every time step of the simulated experiments, we get a temperature statistical distribution which results from the initial distributions of the uncertain parameters. We then can estimate confidence intervals of the calculated temperature. In order to quantify the sensitivity of the calculated temperature to each of the uncertain input parameters and data, we have also performed a sensitivity analysis using the Sobol' indices at first order.
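
    A minimal sketch of the propagation step on a toy analytic pellet model with assumed input distributions (not the METEOR V2 / URANIE setup): uncertain thermal conductivity and linear heat rate are sampled and pushed through the centreline-temperature relation T_c = T_s + q' / (4*pi*k), giving a temperature distribution and a confidence interval.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

T_surface = 700.0                              # K, assumed pellet surface temperature
k = rng.normal(3.0, 0.3, n)                    # W/m/K, uncertain fuel thermal conductivity
q_lin = rng.normal(25_000.0, 1_500.0, n)       # W/m, uncertain local linear heat rate

# Centreline temperature of a cylindrical pellet with uniform volumetric heating
T_center = T_surface + q_lin / (4.0 * np.pi * k)

lo, hi = np.percentile(T_center, [2.5, 97.5])
print(f"centreline temperature: mean {T_center.mean():.0f} K, 95% CI [{lo:.0f}, {hi:.0f}] K")
```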

  11. Oil price uncertainty in Canada

    Energy Technology Data Exchange (ETDEWEB)

    Elder, John [Department of Finance and Real Estate, 1272 Campus Delivery, Colorado State University, Fort Collins, CO 80523 (United States); Serletis, Apostolos [Department of Economics, University of Calgary, Calgary, Alberta (Canada)

    2009-11-15

    Bernanke [Bernanke, Ben S. Irreversibility, uncertainty, and cyclical investment. Quarterly Journal of Economics 98 (1983), 85-106.] shows how uncertainty about energy prices may induce optimizing firms to postpone investment decisions, thereby leading to a decline in aggregate output. Elder and Serletis [Elder, John and Serletis, Apostolos. Oil price uncertainty.] find empirical evidence that uncertainty about oil prices has tended to depress investment in the United States. In this paper we assess the robustness of these results by investigating the effects of oil price uncertainty in Canada. Our results are remarkably similar to existing results for the United States, providing additional evidence that uncertainty about oil prices may provide another explanation for why the sharp oil price declines of 1985 failed to produce rapid output growth. Impulse-response analysis suggests that uncertainty about oil prices may tend to reinforce the negative response of output to positive oil shocks. (author)

  12. Efficient uncertainty quantification of a fully nonlinear and dispersive water wave model with random inputs

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Eskilsson, Claes

    2016-01-01

    A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a formulation of a fully nonlinear and dispersive potential flow water wave model with random inputs for the probabilistic description...... at different points in the parameter space, allowing for the reuse of existing simulation software. The choice of the applied methods is driven by the number of uncertain input parameters and by the fact that finding the solution of the considered model is computationally intensive. We revisit experimental...... benchmarks often used for validation of deterministic water wave models. Based on numerical experiments and assumed uncertainties in boundary data, our analysis reveals that some of the known discrepancies from deterministic simulation in comparison with experimental measurements could be partially explained...

  13. Experimental approach for the uncertainty assessment of 3D complex geometry dimensional measurements using computed tomography at the mm and sub-mm scales

    DEFF Research Database (Denmark)

    Jiménez, Roberto; Torralba, Marta; Yagüe-Fabra, José A.

    2017-01-01

    The dimensional verification of miniaturized components with 3D complex geometries is particularly challenging. Computed Tomography (CT) can represent a suitable alternative solution to micro metrology tools based on optical and tactile techniques. However, the establishment of CT systems' traceability when measuring 3D complex geometries is still an open issue. In this work, an alternative method for the measurement uncertainty assessment of 3D complex geometries by using CT is presented. The method is based on the micro-CT system Maximum Permissible Error (MPE) estimation, determined experimentally by using several calibrated reference artefacts. The main advantage of the presented method is that a previous calibration of the component by a more accurate Coordinate Measuring System (CMS) is not needed. In fact, such CMS would still hold all the typical limitations of optical and tactile

  14. Effect of activation cross-section uncertainties on the radiological assessment of the MFE/DEMO first wall

    Energy Technology Data Exchange (ETDEWEB)

    Cabellos, O. [Instituto de Fusion Nuclear, Universidad Politecnica de Madrid, Madrid (Spain)]. E-mail: cabellos@din.upm.es; Reyes, S. [Lawrence Livermore National Laboratory, Livermore, CA (United States); Sanz, J. [Instituto de Fusion Nuclear, Universidad Politecnica de Madrid, Madrid (Spain); University Nacional Educacion a Distancia, Dep. Ingenieria Energetica, Juan del Rosal 12, 28040 Madrid (Spain); Rodriguez, A. [University Nacional Educacion a Distancia, Dep. Ingenieria Energetica, Juan del Rosal 12, 28040 Madrid (Spain); Youssef, M. [University of California, Los Angeles, CA (United States); Sawan, M. [University of Wisconsin, Madison, WI (United States)

    2006-02-15

    A Monte Carlo procedure has been applied in this work in order to address the impact of activation cross-section (XS) uncertainties on contact dose rate and decay heat calculations for the outboard first wall (FW) of a magnetic fusion energy (MFE) demonstration (DEMO) reactor. The XSs inducing the major uncertainty in the prediction of activation related quantities have been identified. Results have shown that for times corresponding to maintenance activities the effect of the uncertainties is insignificant, since the dominant XSs involved in these calculations are based on accurate experimental data evaluations. However, for times corresponding to waste management/recycling activities, the errors induced by the XS uncertainties, which in this case are evaluated using systematic models, must be considered. It has been found that two particular isotopes, 60Co and 94Nb, are key contributors to the global DEMO FW activation uncertainty results. In these cases, the benefit from further improvements in the accuracy of the critical reaction XSs is discussed.

  15. Effect of activation cross-section uncertainties on the radiological assessment of the MFE/DEMO first wall

    International Nuclear Information System (INIS)

    Cabellos, O.; Reyes, S.; Sanz, J.; Rodriguez, A.; Youssef, M.; Sawan, M.

    2006-01-01

    A Monte Carlo procedure has been applied in this work in order to address the impact of activation cross-section (XS) uncertainties on contact dose rate and decay heat calculations for the outboard first wall (FW) of a magnetic fusion energy (MFE) demonstration (DEMO) reactor. The XSs inducing the major uncertainty in the prediction of activation related quantities have been identified. Results have shown that for times corresponding to maintenance activities the effect of the uncertainties is insignificant, since the dominant XSs involved in these calculations are based on accurate experimental data evaluations. However, for times corresponding to waste management/recycling activities, the errors induced by the XS uncertainties, which in this case are evaluated using systematic models, must be considered. It has been found that two particular isotopes, 60Co and 94Nb, are key contributors to the global DEMO FW activation uncertainty results. In these cases, the benefit from further improvements in the accuracy of the critical reaction XSs is discussed.

  16. Additivity of entropic uncertainty relations

    Directory of Open Access Journals (Sweden)

    René Schwonnek

    2018-03-01

    Full Text Available We consider the uncertainty between two pairs of local projective measurements performed on a multipartite system. We show that the optimal bound in any linear uncertainty relation, formulated in terms of the Shannon entropy, is additive. This directly implies, against naive intuition, that the minimal entropic uncertainty can always be realized by fully separable states. Hence, in contradiction to proposals by other authors, no entanglement witness can be constructed solely by comparing the attainable uncertainties of entangled and separable states. However, our result gives rise to a huge simplification for computing global uncertainty bounds as they now can be deduced from local ones. Furthermore, we provide the natural generalization of the Maassen and Uffink inequality for linear uncertainty relations with arbitrary positive coefficients.
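
    As a small illustration of the Maassen-Uffink bound referred to above: for two projective measurements with eigenbases {|a_i>} and {|b_j>}, H(A) + H(B) >= -2*log2(c) with c = max_ij |<a_i|b_j>|. The sketch below evaluates the bound for the Pauli X and Z measurements on a single qubit (a standard textbook case, chosen here only as an example).

```python
import numpy as np

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

_, eig_Z = np.linalg.eigh(Z)   # columns are the measurement eigenvectors
_, eig_X = np.linalg.eigh(X)

overlaps = np.abs(eig_Z.conj().T @ eig_X)   # |<a_i|b_j>| for all eigenvector pairs
c = overlaps.max()
bound = -2.0 * np.log2(c)

print(f"max overlap c = {c:.4f}, entropic bound H(A) + H(B) >= {bound:.4f} bits")
```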

  17. Interpretation of the peak areas in gamma-ray spectra that have a large relative uncertainty

    International Nuclear Information System (INIS)

    Korun, M.; Maver Modec, P.; Vodenik, B.

    2012-01-01

    Empirical evidence is provided that the areas of peaks having a relative uncertainty in excess of 30% are overestimated. This systematic influence is of a statistical nature and originates in the way the peak-analyzing routine recognizes small peaks. It is not easy to detect this influence since it is smaller than the peak-area uncertainty. However, the systematic influence can be revealed in repeated measurements under the same experimental conditions, e.g., in background measurements. To evaluate the systematic influence, background measurements were analyzed with the peak-analyzing procedure described by Korun et al. (2008). The magnitude of the influence depends on the relative uncertainty of the peak area and may amount, under the conditions used in the peak analysis, to a factor of 5 at relative uncertainties exceeding 60%. From the measurements, the probability for type-II errors, as a function of the relative uncertainty of the peak area, was extracted. This probability is near zero below an uncertainty of 30% and rises to 90% at uncertainties exceeding 50%. - Highlights: ► A systematic influence affecting small peak areas in gamma-ray spectra is described. ► The influence originates in the peak-locating procedure, which uses a pre-determined sensitivity. ► The pre-determined sensitivity causes peak areas with large uncertainties to be overestimated. ► The influence depends on the relative uncertainty of the number of counts in the peak. ► Corrections exceeding a factor of 3 are attained at peak-area uncertainties exceeding 60%.

  18. A review of reactor physics uncertainties and validation requirements for the modular high-temperature gas-cooled reactor

    International Nuclear Information System (INIS)

    Baxter, A.M.; Lane, R.K.; Hettergott, E.; Lefler, W.

    1991-01-01

    The important safety-related physics parameters for the low-enriched Modular High-Temperature Gas-Cooled Reactor (MHTGR), such as control rod worth, shutdown margins, temperature coefficients, and reactivity worths, are considered, and estimates are presented of the uncertainties in the calculated values of these parameters. The basis for the uncertainty estimate of several of the important calculated parameters is reviewed, including the available experimental data used in obtaining these estimates. Based on this review, the additional experimental data needed to complete the validation of the methods used to calculate these parameters are presented. The role of benchmark calculations in validating MHTGR reactor physics data is also considered. (author). 10 refs, 5 figs, 3 tabs

  19. Qualification and application of nuclear reactor accident analysis code with the capability of internal assessment of uncertainty

    International Nuclear Information System (INIS)

    Borges, Ronaldo Celem

    2001-10-01

    This thesis presents an independent qualification of the CIAU code ('Code with the capability of Internal Assessment of Uncertainty'), which is part of the process of internal uncertainty evaluation with a thermal-hydraulic system code on a realistic basis. This is done by combining the uncertainty methodology UMAE ('Uncertainty Methodology based on Accuracy Extrapolation') with the RELAP5/Mod3.2 code. This makes it possible to associate uncertainty band estimates with the results obtained by the realistic calculation of the code, meeting the licensing requirements of safety analysis. The independent qualification is supported by simulations with RELAP5/Mod3.2 related to accident condition tests of the LOBI experimental facility and to an event which occurred in the Angra 1 nuclear power plant, by comparison with measured results and by establishing uncertainty bands on the calculated time trends of safety parameters. These bands have indeed enveloped the measured trends. The results of this independent qualification of CIAU have made it possible to ascertain the adequate application of a systematic realistic code procedure to analyse accidents with uncertainties incorporated in the results, although there is an evident need to extend the uncertainty database. It has been verified that use of the code with this internal assessment of uncertainty is feasible in the design and licensing stages of an NPP. (author)

  20. Uncertainties in bone (knee region) in vivo monitoring

    International Nuclear Information System (INIS)

    Venturini, Luzia; Sordi, Gian-Maria A.A.; Vanin, Vito R.

    2008-01-01

    Full text: The bones in the knee region are among the possible choices to estimate radionuclide deposits in the skeleton. Finding the optimum measurement conditions requires the determination of the uncertainties and their relationship with the detector arrangement in the available space, variations in bone anatomy, and non-uniformity of the radionuclide deposit. In this work, geometric models for the bones in the knee region and Monte Carlo simulation of the measurement efficiency were used to estimate uncertainties in the in vivo monitoring in the 46-186 keV gamma-ray energy range. The bone models are based on geometrical figures such as ellipsoids and cylinders and have already been published elsewhere. Their parameters are diameters, axis orientations, lengths, and relative positions determined from a survey on real pieces. A 1.70 m tall person was used as a reference; bone model parameters for 1.50 m and 1.90 m tall persons were deduced from the previously published data, to evaluate the uncertainties related to bone size. The simulated experimental arrangement consisted of four HPGe detectors measuring radiation from the knees in the bed geometry; uncertainties from the radionuclide deposit distribution, compact bone density and bone size were also included. The detectors were placed at 22 cm height from the bed and it was assumed that the part of the bones seen by the detectors consists of the first 25 cm from the patella, both in the feet and hip directions. The cover tissue was not taken as an uncertainty source, but its effect on the final detection efficiency was taken into account. The calculations consider the main interaction types between radiation and the detector crystal, and the radiation attenuation in the bones and the layers of materials between bones and detectors. It was found that the uncertainties depend strongly on the hypotheses made. For example, for 46 keV gamma-rays, a 1.70 m tall person with normal bone density and radionuclide deposit in the

  1. Uncertainty characterization of HOAPS 3.3 latent heat-flux-related parameters

    Science.gov (United States)

    Liman, Julian; Schröder, Marc; Fennig, Karsten; Andersson, Axel; Hollmann, Rainer

    2018-03-01

    instantaneous point of view, random retrieval uncertainties are specifically large over the subtropics with a global average of 37 W m-2. In a climatological sense, their magnitudes become negligible, as do respective sampling uncertainties. Regional and seasonal analyses suggest that largest total LHF uncertainties are seen over the Gulf Stream and the Indian monsoon region during boreal winter. In light of the uncertainty measures, the observed continuous global mean LHF increase up to 2009 needs to be treated with caution. The demonstrated approach can easily be transferred to other satellite retrievals, which increases the significance of the present work.

  2. Measurement uncertainty and probability

    CERN Document Server

    Willink, Robin

    2013-01-01

    A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.

  3. Effects of correlated parameters and uncertainty in electronic-structure-based chemical kinetic modelling

    Science.gov (United States)

    Sutton, Jonathan E.; Guo, Wei; Katsoulakis, Markos A.; Vlachos, Dionisios G.

    2016-04-01

    Kinetic models based on first principles are becoming commonplace in heterogeneous catalysis because of their ability to interpret experimental data, identify the rate-controlling step, guide experiments and predict novel materials. To overcome the tremendous computational cost of estimating parameters of complex networks on metal catalysts, approximate quantum mechanical calculations are employed that render models potentially inaccurate. Here, by introducing correlative global sensitivity analysis and uncertainty quantification, we show that neglecting correlations in the energies of species and reactions can lead to an incorrect identification of influential parameters and key reaction intermediates and reactions. We rationalize why models often underpredict reaction rates and show that, despite the uncertainty being large, the method can, in conjunction with experimental data, identify influential missing reaction pathways and provide insights into the catalyst active site and the kinetic reliability of a model. The method is demonstrated in ethanol steam reforming for hydrogen production for fuel cells.

  4. User's guide for ALEX: uncertainty propagation from raw data to final results for ORELA transmission measurements

    International Nuclear Information System (INIS)

    Larson, N.M.

    1984-02-01

    This report describes a computer code (ALEX) developed to assist in AnaLysis of EXperimental data at the Oak Ridge Electron Linear Accelerator (ORELA). Reduction of data from raw numbers (counts per channel) to physically meaningful quantities (such as cross sections) is in itself a complicated procedure; propagation of experimental uncertainties through that reduction procedure has in the past been viewed as even more difficult - if not impossible. The purpose of the code ALEX is to correctly propagate all experimental uncertainties through the entire reduction procedure, yielding the complete covariance matrix for the reduced data, while requiring little additional input from the experimentalist beyond that which is required for the data reduction itself. This report describes ALEX in detail, with special attention given to the case of transmission measurements (the code itself is applicable, with few changes, to any type of data). Application to the natural iron measurements of D.C. Larson et al. is described in some detail

  5. Model uncertainty and probability

    International Nuclear Information System (INIS)

    Parry, G.W.

    1994-01-01

    This paper discusses the issue of model uncertainty. The use of probability as a measure of an analyst's uncertainty as well as a means of describing random processes has caused some confusion, even though the two uses are representing different types of uncertainty with respect to modeling a system. The importance of maintaining the distinction between the two types is illustrated with a simple example

  6. Uncertainty in perception and the Hierarchical Gaussian Filter

    Directory of Open Access Journals (Sweden)

    Christoph Daniel Mathys

    2014-11-01

    Full Text Available In its full sense, perception rests on an agent’s model of how its sensory input comes about and the inferences it draws based on this model. These inferences are necessarily uncertain. Here, we illustrate how the hierarchical Gaussian filter (HGF) offers a principled and generic way to deal with the several forms that uncertainty in perception takes. The HGF is a recent derivation of one-step update equations from Bayesian principles that rests on a hierarchical generative model of the environment and its (in)stability. It is computationally highly efficient, allows for online estimates of hidden states, and has found numerous applications to experimental data from human subjects. In this paper, we generalize previous descriptions of the HGF and its account of perceptual uncertainty. First, we explicitly formulate the extension of the HGF’s hierarchy to any number of levels; second, we discuss how various forms of uncertainty are accommodated by the minimization of variational free energy as encoded in the update equations; third, we combine the HGF with decision models and demonstrate the inversion of this combination; finally, we report a simulation study that compared four optimization methods for inverting the HGF/decision model combination at different noise levels. These four methods (Nelder-Mead simplex algorithm, Gaussian process-based global optimization, variational Bayes and Markov chain Monte Carlo sampling) all performed well even under considerable noise, with variational Bayes offering the best combination of efficiency and informativeness of inference. Our results demonstrate that the HGF provides a principled, flexible, and efficient - but at the same time intuitive - framework for the resolution of perceptual uncertainty in behaving agents.

  7. An approach of sensitivity and uncertainty analyses methods installation in a safety calculation

    International Nuclear Information System (INIS)

    Pepin, G.; Sallaberry, C.

    2003-01-01

    Simulation of migration in deep geological formations requires solving convection-diffusion equations in porous media, coupled with the computation of hydrogeologic flow. Different time scales (simulations covering 1 million years), spatial scales and contrasts of properties in the calculation domain are taken into account. This document deals more particularly with uncertainties in the input data of the model. These uncertainties are taken into account in the overall analysis through uncertainty and sensitivity analysis. ANDRA (the French national agency for the management of radioactive waste) carries out studies on the treatment of input data uncertainties and their propagation in the safety models, in order to quantify the influence of these uncertainties on the selected safety indicators. The approach taken by ANDRA initially consists of two studies undertaken in parallel: the first is an international review of the choices made by ANDRA's foreign counterparts for their uncertainty and sensitivity analyses; the second is a review of the various methods that can be used for sensitivity and uncertainty analysis in the context of ANDRA's safety calculations. These studies are then supplemented by a comparison of the principal methods on a test case that gathers all the specific constraints (physical, numerical and data-processing) of the problem studied by ANDRA

  8. A new uncertainty importance measure

    International Nuclear Information System (INIS)

    Borgonovo, E.

    2007-01-01

    Uncertainty in parameters is present in many risk assessment problems and leads to uncertainty in model predictions. In this work, we introduce a global sensitivity indicator which looks at the influence of input uncertainty on the entire output distribution without reference to a specific moment of the output (moment independence) and which can be defined also in the presence of correlations among the parameters. We discuss its mathematical properties and highlight the differences between the present indicator, variance-based uncertainty importance measures and a moment independent sensitivity indicator previously introduced in the literature. Numerical results are discussed with application to the probabilistic risk assessment model on which Iman [A matrix-based approach to uncertainty and sensitivity analysis for fault trees. Risk Anal 1987;7(1):22-33] first introduced uncertainty importance measures
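
    A rough numerical sketch of a density-based, moment-independent importance measure of this kind (an assumed estimator on a toy model, not the paper's formulation or case study): delta_i = 0.5 * E_Xi[ integral |f_Y(y) - f_{Y|Xi}(y)| dy ], estimated by binning Xi and comparing kernel density estimates of the conditional and unconditional output.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

n = 20_000
X = rng.normal(0.0, 1.0, size=(n, 2))
Y = X[:, 0] + 0.2 * X[:, 1] ** 2        # toy model: x1 drives the output, x2 barely matters

y_grid = np.linspace(Y.min(), Y.max(), 400)
dy = y_grid[1] - y_grid[0]
f_Y = gaussian_kde(Y)(y_grid)           # unconditional output density

def delta_index(xi, y, n_bins=10):
    """0.5 * E_Xi[ integral |f_Y - f_{Y|Xi}| dy ], estimated by binning Xi."""
    edges = np.quantile(xi, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(xi, edges[1:-1]), 0, n_bins - 1)
    shifts = []
    for b in range(n_bins):
        f_cond = gaussian_kde(y[idx == b])(y_grid)   # conditional output density
        shifts.append(np.sum(np.abs(f_Y - f_cond)) * dy)
    return 0.5 * float(np.mean(shifts))

for i in range(2):
    print(f"delta(x{i+1}) ~ {delta_index(X[:, i], Y):.3f}")
```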

  9. Phenomenological uncertainty analysis of early containment failure at severe accident of nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Su Won

    2011-02-15

    Severe accidents involve inherently large uncertainties because of the wide range of possible conditions, and experiments, validation and practical application are extremely difficult because of the high temperatures and pressures involved. Although domestic and international research has been carried out, the references used in Korean nuclear plants were foreign data from the 1980s, and safety analyses such as the probabilistic safety assessment have not applied the newest methodology. Moreover, in level 2 PSA the probability of containment failure is identified from containment pressures given as point values resulting from thermal-hydraulic analysis. In this paper, uncertainty analysis methods for the severe accident phenomena influencing early containment failure were developed, an uncertainty analysis applicable to Korean nuclear plants was performed using the MELCOR code, and the distribution of containment pressure is presented as the result of the uncertainty analysis. Early containment failure was selected among the various containment failure modes because it is an important factor in the Large Early Release Frequency (LERF), which is used as a representative criterion for decision-making in nuclear power plants. The important phenomena of early containment failure during a severe accident were identified on the basis of previous research, and a seven-step methodology to evaluate the uncertainty was developed. A MELCOR input for severe accident analysis reflecting natural circulation flow was developed, and a station blackout scenario, the representative initiating event for early containment failure, was selected. By reviewing the internal models and correlations of MELCOR relevant to the important phenomena of early containment failure, the factors that could affect the uncertainty were found, and the major factors were identified through sensitivity analysis. In order to determine the total number of MELCOR calculations which can

  10. Antineutrinos from Earth: A reference model and its uncertainties

    International Nuclear Information System (INIS)

    Mantovani, Fabio; Carmignani, Luigi; Fiorentini, Gianni; Lissia, Marcello

    2004-01-01

    We predict geoneutrino fluxes in a reference model based on a detailed description of Earth's crust and mantle and using the best available information on the abundances of uranium, thorium, and potassium inside Earth's layers. We estimate the uncertainties of fluxes corresponding to the uncertainties of the element abundances. In addition to distance integrated fluxes, we also provide the differential fluxes as a function of distance from several sites of experimental interest. Event yields at several locations are estimated and their dependence on the neutrino oscillation parameters is discussed. At Kamioka we predict N(U+Th) = 35±6 events for 10³² proton yr and 100% efficiency assuming sin²(2θ) = 0.863 and δm² = 7.3×10⁻⁵ eV². The maximal prediction is 55 events, obtained in a model with fully radiogenic production of the terrestrial heat flow

  11. Assessment and propagation of mechanical property uncertainties in fatigue life prediction of composite laminates

    DEFF Research Database (Denmark)

    Castro, Oscar; Branner, Kim; Dimitrov, Nikolay Krasimirov

    2018-01-01

    A probabilistic model for estimating the fatigue life of laminated composite materials considering the uncertainty in their mechanical properties is developed. The uncertainty in the material properties is determined from fatigue coupon tests. Based on this uncertainty, probabilistic constant life diagrams are developed which can efficiently estimate probabilistic ε-N curves at any load level and stress ratio. The probabilistic ε-N curve information is used in a reliability analysis for the fatigue limit state, proposed for estimating the probability of failure of composite laminates under variable amplitude loading cycles. Fatigue life predictions of unidirectional and multi-directional glass/epoxy laminates are carried out to validate the proposed model against experimental data. The probabilistic fatigue behavior of laminates is analyzed under constant amplitude loading conditions as well as under ......

  12. Measurement of nuclear activity with Ge detectors and its uncertainty

    International Nuclear Information System (INIS)

    Cortes P, C.A.

    1999-01-01

    The objective of this work is to analyse the influence quantities that affect the activity measurement of isolated gamma-emitting radioactive sources prepared by means of the gravimetric method, and to determine the uncertainty of such a measurement when it is carried out with a gamma spectrometry system with a germanium detector. The work is developed in five chapters. In the first, named Basic principles, a brief description is given of the meaning of the word measurement and its implications, and the concepts used in this work are presented. In the second chapter, the gravimetric method used for the preparation of the isolated gamma-emitting radioactive sources is described, the problem of determining the main influence quantities that affect the measurement of their activity is addressed, and the respective correction factors and their uncertainties are derived. The third chapter describes the gamma spectrometry system with germanium detector used in this work for the measurement of the activity of isolated sources, as well as its performance and the experimental arrangement employed. In the fourth chapter the three previous items are applied in order to determine the uncertainty that would be obtained in the measurement of an isolated radioactive source prepared with the gravimetric method under the least favourable experimental conditions predicted from the results obtained in chapter two. The conclusions are presented in the fifth chapter and are applied to establish the optimum conditions for the measurement of the activity of an isolated gamma-emitting radioactive source with a spectrometer with germanium detector. (Author)
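
    An illustrative sketch (assumed numbers, the standard full-energy-peak relation rather than anything specific to this thesis) of the basic activity evaluation behind such a measurement and of how the relative uncertainties of the independent factors combine in quadrature:

```python
import math

N_net = 12_500          # net counts in the full-energy peak (assumed)
u_N = math.sqrt(N_net)  # counting statistics (Poisson)

eps, u_eps = 0.020, 0.0006        # full-energy-peak efficiency and its uncertainty (assumed)
P_gamma, u_P = 0.851, 0.002       # gamma emission probability and its uncertainty (assumed)
t_live = 3600.0                   # s, live time (its uncertainty neglected here)

# Activity from the full-energy-peak relation A = N_net / (eps * P_gamma * t_live)
A = N_net / (eps * P_gamma * t_live)   # Bq

rel_u = math.sqrt((u_N / N_net) ** 2 + (u_eps / eps) ** 2 + (u_P / P_gamma) ** 2)
print(f"activity = {A:.1f} Bq +/- {A * rel_u:.1f} Bq ({100 * rel_u:.1f}%)")
```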

  13. Measurement and uncertainties of energy loss in silicon over a wide Z₁ range using time of flight detector telescopes

    CERN Document Server

    Whitlow, H J; Elliman, R G; Weijers, T D M; Zhang Yan Wen; O'connor, D J

    2002-01-01

    The energy loss of projectiles with Z₁ in the range 3-26 has been experimentally measured in the 0.1-0.7 MeV per nucleon energy range in the same Si stopping foil of 105.5 μg cm⁻² thickness using a time of flight-energy (ToF-E) elastic recoil detection analysis (ERDA) setup. A detailed study of the experimental uncertainties for the ToF-E and ToF-ToF-E configurations has been made. For ERDA configurations where the energy calibration is taken against the edge positions, small uncertainties in the angle at which recoils are detected can introduce significant absolute uncertainty. The relative uncertainty contribution is dominated by the energy calibration of the Si E detector for the ToF-E configuration and the position of the second ToF detector in ToF-ToF-E measurements. The much smaller calibration uncertainty for the ToF-ToF-E configuration implies this technique is superior to ToF-E measurements with Si E detectors. At low energies the effect of charge changing in the time detector foils can become...

  14. Neutron total cross section measurements of gold and tantalum at the nELBE photoneutron source

    CERN Document Server

    Hannaske, Roland; Beyer, Roland; Junghans, Arnd; Bemmerer, Daniel; Birgersson, Evert; Ferrari, Anna; Grosse, Eckart; Kempe, Mathias; Kögler, Toni; Marta, Michele; Massarczyk, Ralph; Matic, Andrija; Schramm, Georg; Schwengner, Ronald; Wagner, Andreas

    2014-01-01

    Neutron total cross sections of 197Au and natTa have been measured at the nELBE photoneutron source in the energy range from 0.1 - 10 MeV with a statistical uncertainty of up to 2 % and a total systematic uncertainty of 1 %. This facility is optimized for the fast neutron energy range and combines an excellent time structure of the neutron pulses (electron bunch width 5 ps) with a short flight path of 7 m. Because of the low instantaneous neutron flux, transmission measurements of neutron total cross sections are possible that exhibit very different beam and background conditions from those found at other neutron sources.
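
    A minimal sketch of how a total cross section and its statistical uncertainty follow from a transmission measurement (illustrative numbers only, not the nELBE data): the transmission T = exp(-n*sigma_tot) for areal density n gives sigma_tot = -ln(T)/n, and the uncertainty of T is propagated through the logarithm.

```python
import math

n_areal = 0.03          # atoms/barn, assumed sample areal density
T = 0.79                # measured transmission (sample-in / sample-out count ratio, assumed)
u_T = 0.002             # its statistical uncertainty (assumed)

sigma_tot = -math.log(T) / n_areal     # barns
u_sigma = u_T / (T * n_areal)          # |d sigma/dT| * u_T

print(f"sigma_tot = {sigma_tot:.2f} +/- {u_sigma:.2f} b "
      f"({100 * u_sigma / sigma_tot:.1f}% relative uncertainty)")
```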

  15. Uncertainties of Predictions from Parton Distribution Functions 1, the Lagrange Multiplier Method

    CERN Document Server

    Stump, D R; Brock, R; Casey, D; Huston, J; Kalk, J; Lai, H L; Tung, W K

    2002-01-01

    We apply the Lagrange Multiplier method to study the uncertainties of physical predictions due to the uncertainties of parton distribution functions (PDFs), using the cross section for W production at a hadron collider as an archetypal example. An effective chi-squared function based on the CTEQ global QCD analysis is used to generate a series of PDFs, each of which represents the best fit to the global data for some specified value of the cross section. By analyzing the likelihood of these "alternative hypotheses", using available information on errors from the individual experiments, we estimate that the fractional uncertainty of the cross section due to current experimental input to the PDF analysis is approximately 4% at the Tevatron, and 10% at the LHC. We give sets of PDFs corresponding to these up and down variations of the cross section. We also present similar results on Z production at the colliders. Our method can be applied to any combination of physical variables in precision QCD phenomenology, an...
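
    A toy sketch of the Lagrange multiplier scan described above (a two-parameter straight-line fit with pseudo-data, not the CTEQ global analysis): for a range of multipliers lambda, chi2(a) + lambda*X(a) is minimized, and the resulting (X, delta_chi2) pairs trace out how much chi2 must rise to move the prediction X away from its best-fit value.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Toy "global data": straight-line pseudo-data with Gaussian noise
x = np.linspace(0.0, 1.0, 20)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, x.size)

def chi2(a):
    return np.sum((y - (a[0] + a[1] * x)) ** 2 / 0.1**2)

def X(a):
    return a[0] + a[1] * 0.5      # the "physical prediction": model value at x = 0.5

best = minimize(chi2, [0.0, 0.0]).x
chi2_min, X_best = chi2(best), X(best)

for lam in (-200.0, -50.0, 0.0, 50.0, 200.0):
    a = minimize(lambda p: chi2(p) + lam * X(p), best).x
    print(f"lambda={lam:+7.1f}  X={X(a):.4f}  delta_chi2={chi2(a) - chi2_min:6.2f}")
```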

  16. Uncertainty evaluation in correlated quantities: application to elemental analysis of atmospheric aerosols

    International Nuclear Information System (INIS)

    Espinosa, A.; Miranda, J.; Pineda, J. C.

    2010-01-01

    One of the aspects that are frequently overlooked in the evaluation of uncertainty in experimental data is the possibility that the involved quantities are correlated among themselves, due to different causes. An example is the elemental analysis of atmospheric aerosols using techniques like X-ray Fluorescence (XRF) or Particle Induced X-ray Emission (PIXE). In these cases, the measured elemental concentrations are highly correlated, and are then used to obtain information about other variables, such as the contribution from emitting sources related to soil, sulfate, non-soil potassium or organic matter. This work describes, as an example, the method required to evaluate the uncertainty in variables determined from correlated quantities for a set of atmospheric aerosol samples collected in the Metropolitan Area of the Mexico Valley and analyzed with PIXE. The work is based on the recommendations of the Guide for the Evaluation of Uncertainty published by the International Organization for Standardization. (Author)
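
    A compact sketch of the covariance propagation involved (the concentrations, covariances and crustal weights below are illustrative assumptions, not the paper's data): for a linear combination f = c^T x the variance is c^T V c, where V is the covariance matrix of the measured concentrations; ignoring the off-diagonal terms can noticeably misstate the uncertainty.

```python
import numpy as np

x = np.array([12.0, 3.5, 1.2])           # measured concentrations (e.g. Si, Ca, Fe), assumed
V = np.array([[1.00, 0.40, 0.12],        # covariance matrix including correlations, assumed
              [0.40, 0.25, 0.08],
              [0.12, 0.08, 0.04]])

# Soil contribution reconstructed as a weighted sum of crustal elements (assumed weights)
c = np.array([2.49, 1.63, 2.42])

soil = c @ x
u_soil = np.sqrt(c @ V @ c)              # full propagation with correlations
u_naive = np.sqrt(c**2 @ np.diag(V))     # what one would get ignoring the correlations

print(f"soil = {soil:.1f} +/- {u_soil:.1f} (with correlations), "
      f"+/- {u_naive:.1f} (ignoring them)")
```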

  17. Explicit tracking of uncertainty increases the power of quantitative rule-of-thumb reasoning in cell biology.

    Science.gov (United States)

    Johnston, Iain G; Rickett, Benjamin C; Jones, Nick S

    2014-12-02

    Back-of-the-envelope or rule-of-thumb calculations involving rough estimates of quantities play a central scientific role in developing intuition about the structure and behavior of physical systems, for example in so-called Fermi problems in the physical sciences. Such calculations can be used to powerfully and quantitatively reason about biological systems, particularly at the interface between physics and biology. However, substantial uncertainties are often associated with values in cell biology, and performing calculations without taking this uncertainty into account may limit the extent to which results can be interpreted for a given problem. We present a means to facilitate such calculations where uncertainties are explicitly tracked through the line of reasoning, and introduce a probabilistic calculator called CALADIS, a free web tool, designed to perform this tracking. This approach allows users to perform more statistically robust calculations in cell biology despite having uncertain values, and to identify which quantities need to be measured more precisely to make confident statements, facilitating efficient experimental design. We illustrate the use of our tool for tracking uncertainty in several example biological calculations, showing that the results yield powerful and interpretable statistics on the quantities of interest. We also demonstrate that the outcomes of calculations may differ from point estimates when uncertainty is accurately tracked. An integral link between CALADIS and the BioNumbers repository of biological quantities further facilitates the straightforward location, selection, and use of a wealth of experimental data in cell biological calculations. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
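
    A minimal sketch of the underlying idea using plain Monte Carlo sampling (not the CALADIS tool itself; the question and all values are assumed for illustration): each uncertain quantity in a back-of-the-envelope calculation is represented by a distribution, and the uncertainty is tracked by sampling through the whole expression rather than multiplying point estimates.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Example rule-of-thumb question: how many ribosomes fit in a cell volume? (all values assumed)
cell_volume = rng.lognormal(mean=np.log(60.0), sigma=0.3, size=n)      # micrometer^3
ribosome_diameter = rng.normal(25e-3, 2e-3, size=n)                    # micrometer
packing_fraction = rng.uniform(0.05, 0.20, size=n)                     # fraction of volume used

ribosome_volume = (4.0 / 3.0) * np.pi * (ribosome_diameter / 2.0) ** 3
count = packing_fraction * cell_volume / ribosome_volume

lo, med, hi = np.percentile(count, [2.5, 50.0, 97.5])
print(f"ribosome count ~ {med:.2e} (95% interval {lo:.2e} - {hi:.2e})")
```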

  18. Bayesian uncertainty analysis with applications to turbulence modeling

    International Nuclear Information System (INIS)

    Cheung, Sai Hung; Oliver, Todd A.; Prudencio, Ernesto E.; Prudhomme, Serge; Moser, Robert D.

    2011-01-01

    In this paper, we apply Bayesian uncertainty quantification techniques to the processes of calibrating complex mathematical models and predicting quantities of interest (QoI's) with such models. These techniques also enable the systematic comparison of competing model classes. The processes of calibration and comparison constitute the building blocks of a larger validation process, the goal of which is to accept or reject a given mathematical model for the prediction of a particular QoI for a particular scenario. In this work, we take the first step in this process by applying the methodology to the analysis of the Spalart-Allmaras turbulence model in the context of incompressible, boundary layer flows. Three competing model classes based on the Spalart-Allmaras model are formulated, calibrated against experimental data, and used to issue a prediction with quantified uncertainty. The model classes are compared in terms of their posterior probabilities and their prediction of QoI's. The model posterior probability represents the relative plausibility of a model class given the data. Thus, it incorporates the model's ability to fit experimental observations. Alternatively, comparing models using the predicted QoI connects the process to the needs of decision makers that use the results of the model. We show that by using both the model plausibility and predicted QoI, one has the opportunity to reject some model classes after calibration, before subjecting the remaining classes to additional validation challenges.

  19. Uncertainty calculations made easier

    International Nuclear Information System (INIS)

    Hogenbirk, A.

    1994-07-01

    The results are presented of a neutron cross section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus design, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy integrated flux at the beginning of the cryostat in the no-gap-geometry, compared to an uncertainty of only 5% in the gap-geometry. Therefore, it is essential to take into account the exact geometry in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL)

  20. Unexpected uncertainty, volatility and decision-making

    Directory of Open Access Journals (Sweden)

    Amy Rachel Bland

    2012-06-01

    Full Text Available The study of uncertainty in decision making is receiving greater attention in the fields of cognitive and computational neuroscience. Several lines of evidence are beginning to elucidate different variants of uncertainty. Particularly, risk, ambiguity and expected and unexpected forms of uncertainty are well articulated in the literature. In this article we review both empirical and theoretical evidence arguing for the potential distinction between three forms of uncertainty; expected uncertainty, unexpected uncertainty and volatility. Particular attention will be devoted to exploring the distinction between unexpected uncertainty and volatility which has been less appreciated in the literature. This includes evidence from computational modelling, neuromodulation, neuroimaging and electrophysiological studies. We further address the possible differentiation of cognitive control mechanisms used to deal with these forms of uncertainty. Particularly we explore a role for conflict monitoring and the temporal integration of information into working memory. Finally, we explore whether the Dual Modes of Control theory provides a theoretical framework for understanding the distinction between unexpected uncertainty and volatility.