WorldWideScience

Sample records for experimental criticality benchmarks

  1. International handbook of evaluated criticality safety benchmark experiments

    International Nuclear Information System (INIS)

    2010-01-01

The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency (OECD-NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span over 55,000 pages and contain 516 evaluations with benchmark specifications for 4,405 critical, near-critical, or subcritical configurations, 24 criticality alarm placement / shielding configurations with multiple dose points for each, and 200 configurations that have been categorized as fundamental physics measurements relevant to criticality safety applications. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these evaluations; however, benchmark specifications are not derived for such experiments (in some cases models are provided in an appendix). Approximately 770 experimental configurations are categorized as unacceptable for use as criticality safety benchmark experiments. Additional evaluations are in progress and will be added to this document periodically.

  2. The International Criticality Safety Benchmark Evaluation Project

    International Nuclear Information System (INIS)

    Briggs, B. J.; Dean, V. F.; Pesic, M. P.

    2001-01-01

In order to properly manage the risk of a nuclear criticality accident, it is important to establish the conditions for which such an accident becomes possible for any activity involving fissile material. Only when this information is known is it possible to establish the likelihood of actually achieving such conditions. It is therefore important that criticality safety analysts have confidence in the accuracy of their calculations. Confidence in analytical results can only be gained through comparison of those results with experimental data. The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the US Department of Energy. The project was managed through the Idaho National Engineering and Environmental Laboratory (INEEL), but involved nationally known criticality safety experts from Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Savannah River Technology Center, Oak Ridge National Laboratory and the Y-12 Plant, Hanford, Argonne National Laboratory, and the Rocky Flats Plant. An International Criticality Safety Data Exchange component was added to the project during 1994, and the project became what is currently known as the International Criticality Safety Benchmark Evaluation Project (ICSBEP). Representatives from the United Kingdom, France, Japan, the Russian Federation, Hungary, Kazakhstan, Korea, Slovenia, Yugoslavia, Spain, and Israel are now participating in the project. In December of 1994, the ICSBEP became an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency's (OECD-NEA) Nuclear Science Committee. The United States currently remains the lead country, providing most of the administrative support. The purpose of the ICSBEP is to: (1) identify and evaluate a comprehensive set of critical benchmark data; (2) verify the data, to the extent possible, by reviewing original and subsequently revised documentation, and by talking with the

  3. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

Data from critical experiments have been collected for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exist. The primary objective of this collection is to present representative experimental data, defined in a concise, standardized format, that can easily be translated into computer code input.

  4. Criticality safety benchmarking of PASC-3 and ECNJEF1.1

    International Nuclear Information System (INIS)

    Li, J.

    1992-09-01

To validate the code system PASC-3 and the multigroup cross section library ECNJEF1.1 for various applications, many benchmarks are required. This report presents the results of criticality safety benchmarking for five calculational and four experimental benchmarks. These benchmarks are related to transport packages for fissile materials such as spent fuel. The fissile nuclides in these benchmarks are 235U and 239Pu. The PASC-3 modules used for the calculations are BONAMI, NITAWL and KENO.5A. The final results for the experimental benchmarks agree well with experimental data. For the calculational benchmarks, the results presented here are in reasonable agreement with the results from other investigations. (author). 8 refs.; 20 figs.; 5 tabs

  5. Benchmarking of HEU metal annuli critical assemblies with internally reflected graphite cylinder

    Directory of Open Access Journals (Sweden)

    Xiaobo Liu

    2017-01-01

Three experimental configurations of critical assemblies, performed in 1963 at the Oak Ridge Critical Experiment Facility and assembled using three different-diameter HEU metal annuli (15-9 inches, 15-7 inches and 13-7 inches) with an internally reflected graphite cylinder, are evaluated and benchmarked. The experimental uncertainties, which are 0.00057, 0.00058 and 0.00057 respectively, and the biases of the benchmark models, which are −0.00286, −0.00242 and −0.00168 respectively, were determined, and experimental benchmark keff results were obtained for both detailed and simplified models. The calculated results for both detailed and simplified models using MCNP6-1.0 and ENDF/B-VII.1 agree well with the benchmark experimental results, with differences of less than 0.2%. The benchmarking results were accepted for inclusion in the ICSBEP Handbook.
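The bias and uncertainty figures quoted above can be turned into benchmark keff values and compared against a transport calculation. The following is a minimal sketch of that arithmetic in Python; the helper names and the sample calculated keff of 0.9985 are illustrative assumptions, not part of the evaluation.

```python
# Illustrative sketch only: deriving benchmark k_eff values for a critical
# experiment (experimental k_eff = 1.0) from the evaluated biases, then
# checking whether a calculated k_eff agrees within 0.2%.

def benchmark_keff(experimental_keff: float, bias: float) -> float:
    """Benchmark-model k_eff = experimental k_eff + bias (biases are negative here)."""
    return experimental_keff + bias

def agrees_within(calculated: float, benchmark: float, tolerance: float) -> bool:
    """Relative-difference test, e.g. tolerance=0.002 for the 0.2% criterion."""
    return abs(calculated - benchmark) / benchmark < tolerance

# Biases reported for the 15-9, 15-7 and 13-7 inch annuli configurations.
biases = [-0.00286, -0.00242, -0.00168]
benchmarks = [benchmark_keff(1.0, b) for b in biases]

# A hypothetical calculated value for the first configuration.
print(benchmarks)
print(agrees_within(0.9985, benchmarks[0], 0.002))
```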

  6. ICSBEP-2007, International Criticality Safety Benchmark Experiment Handbook

    International Nuclear Information System (INIS)

    Blair Briggs, J.

    2007-01-01

unacceptable for use as criticality safety benchmark experiments are discussed in these evaluations; however, benchmark specifications are not derived for such experiments (in some cases models are provided in an appendix). Approximately 676 experimental configurations are categorized as unacceptable for use as criticality safety benchmark experiments. Additional evaluations are in progress and will be added to this document periodically. The document is organized in a manner that allows easy inclusion of additional evaluations as they become available. This handbook was prepared by a working group composed of experienced criticality safety personnel from the United States, the United Kingdom, Japan, the Russian Federation, France, Hungary, Republic of Korea, Slovenia, Serbia, Kazakhstan, Israel, Spain, Brazil, Czech Republic, Poland, India, Canada and Sweden.

  7. International Criticality Safety Benchmark Evaluation Project (ICSBEP) - ICSBEP 2015 Handbook

    International Nuclear Information System (INIS)

    Bess, John D.

    2015-01-01

evaluations; however, benchmark specifications are not derived for such experiments (in some cases models are provided in an appendix). Approximately 829 experimental configurations are categorized as unacceptable for use as criticality safety benchmark experiments. Additional evaluations are in progress and will be added to this document periodically. The document is organized in a manner that allows easy inclusion of additional evaluations as they become available. This handbook was prepared by a working group composed of experienced criticality safety personnel from the United States, the United Kingdom, Japan, the Russian Federation, France, Hungary, Republic of Korea, Slovenia, Serbia, Kazakhstan, Israel, Spain, Brazil, Czech Republic, Poland, India, Canada, P.R. China, Sweden and Argentina.

  8. International Handbook of Evaluated Criticality Safety Benchmark Experiments - ICSBEP (DVD), Version 2013

    International Nuclear Information System (INIS)

    2013-01-01

The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical experiment facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span nearly 66,000 pages and contain 558 evaluations with benchmark specifications for 4,798 critical, near-critical or subcritical configurations, 24 criticality alarm placement/shielding configurations with multiple dose points for each, and 200 configurations that have been categorised as fundamental physics measurements relevant to criticality safety applications. New to the Handbook are benchmark specifications for Critical, Bare, HEU(93.2)-Metal Sphere experiments, referred to as ORSphere, that were performed by a team of experimenters at Oak Ridge National Laboratory in the early 1970s. A photograph of this assembly is shown on the front cover.

  9. Criticality Benchmark Analysis of Water-Reflected Uranium Oxyfluoride Slabs

    International Nuclear Information System (INIS)

    Marshall, Margaret A.; Bess, John D.

    2009-01-01

A series of twelve experiments was conducted in the mid-1950s at the Oak Ridge National Laboratory Critical Experiments Facility to determine the critical conditions of a semi-infinite water-reflected slab of aqueous uranium oxyfluoride (UO2F2). A different slab thickness was used for each experiment. Results from the twelve experiments recorded in the laboratory notebook were published in Reference 1. Seven of the twelve experiments were determined to be acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. This evaluation will be available to handbook users not only for the validation of computer codes and integral cross-section data, but also for the reevaluation of experimental data used in the ANSI/ANS-8.1 standard. This evaluation is important as part of the technical basis of the subcritical slab limits in ANSI/ANS-8.1. The original publication of the experimental results was used for the determination of bias and bias uncertainties for subcritical slab limits, as documented in Hugh Clark's paper 'Subcritical Limits for Uranium-235 Systems'.

  10. Critical power prediction by CATHARE2 of the OECD/NRC BFBT benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lutsanych, Sergii, E-mail: s.lutsanych@ing.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy); Sabotinov, Luben, E-mail: luben.sabotinov@irsn.fr [Institut for Radiological Protection and Nuclear Safety (IRSN), 31 avenue de la Division Leclerc, 92262 Fontenay-aux-Roses (France); D’Auria, Francesco, E-mail: francesco.dauria@dimnp.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy)

    2015-03-15

Highlights: • We used the CATHARE code to calculate the critical power exercises of the OECD/NRC BFBT benchmark. • We considered both steady-state and transient critical power tests of the benchmark. • We used both the 1D and 3D features of the CATHARE code to simulate the experiments. • Acceptable prediction of the critical power and its location in the bundle is obtained using appropriate modelling. - Abstract: This paper presents an application of the French best-estimate thermal-hydraulic code CATHARE 2 to the critical power and departure from nucleate boiling (DNB) exercises of the international OECD/NRC BWR Fuel Bundle Test (BFBT) benchmark. The assessment activity is performed by comparing the code calculation results with the experimental data from the Japanese Nuclear Power Engineering Corporation (NUPEC) available in the framework of the benchmark. Two-phase flow calculations predicting the critical power have been carried out for both steady-state and transient cases, using one-dimensional and three-dimensional modelling. Results of the steady-state critical power test calculations have shown the ability of the CATHARE code to reasonably predict the critical power and its location, using appropriate modelling.

  11. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    International Nuclear Information System (INIS)

    Bess, John D.; Montierth, Leland; Köberl, Oliver

    2014-01-01

Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently, only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments, for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the 235U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ), except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues, within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
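The two agreement criteria used above, "within 1%" and "within n sigma", can be sketched as simple checks. This is a generic illustration, not a tool from the evaluation, and the sample keff, benchmark value, and uncertainty below are invented placeholders.

```python
# Minimal sketch of the agreement tests: compare a calculated k_eff to the
# benchmark value (a) as a relative difference and (b) in units of the
# benchmark uncertainty sigma. All numbers are illustrative.

def relative_diff(calc: float, bench: float) -> float:
    """Relative difference between calculated and benchmark k_eff."""
    return abs(calc - bench) / bench

def n_sigma(calc: float, bench: float, sigma: float) -> float:
    """Deviation expressed in multiples of the benchmark uncertainty."""
    return abs(calc - bench) / sigma

calc, bench, sigma = 1.0105, 1.0080, 0.0013
print(relative_diff(calc, bench) < 0.01)   # within 1%?
print(n_sigma(calc, bench, sigma) <= 3.0)  # within 3 sigma?
```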

  12. Introduction to 'International Handbook of Criticality Safety Benchmark Experiments'

    International Nuclear Information System (INIS)

    Komuro, Yuichi

    1998-01-01

The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency (OECD-NEA). The 'International Handbook of Criticality Safety Benchmark Experiments' was prepared, and is updated yearly, by the working group of the project. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate the calculation techniques they use. The author briefly introduces this informative handbook and encourages Japanese engineers in charge of nuclear criticality safety to use it. (author)

  13. Validation of VHTRC calculation benchmark of critical experiment using the MCB code

    Directory of Open Access Journals (Sweden)

    Stanisz Przemysław

    2016-01-01

The Very High Temperature Reactor Critical (VHTRC) calculation benchmark problem, a pin-in-block type core critical assembly, has been investigated with the Monte Carlo Burnup (MCB) code in order to validate the latest version of the Nuclear Data Library based on the ENDF format. The benchmark was executed on the basis of the VHTRC benchmark available from the International Handbook of Evaluated Reactor Physics Benchmark Experiments. This benchmark is useful for verifying the discrepancies in keff values between various libraries and experimental values, which makes it possible to improve the accuracy of neutron transport calculations and may help in designing high-performance commercial VHTRs. Almost all safety parameters depend on the accuracy of the neutron transport calculation results, which in turn depend on the accuracy of the nuclear data libraries. Thus, evaluation of the libraries' applicability to VHTR modelling is an important subject. We compared the numerical results with experimental measurements using two versions of the available nuclear data (ENDF-B-VII.1 and JEFF-3.2) prepared for the required temperatures. Calculations were performed with the MCB code, which makes it possible to obtain a very precise representation of the complex VHTR geometry, including the double heterogeneity of the fuel element. In this paper, together with the impact of nuclear data, we also discuss the impact of different lattice modelling inside the fuel pins. The keff discrepancies were successfully observed and show good agreement with each other and with the experimental data within the 1σ range of the experimental uncertainty. Because some propagated discrepancies were observed, we proposed appropriate corrections to the experimental constants, which can improve the reactivity coefficient dependency. The obtained results confirm the accuracy of the new Nuclear Data Libraries.

  14. Benchmarking criticality safety calculations with subcritical experiments

    International Nuclear Information System (INIS)

    Mihalczo, J.T.

    1984-06-01

Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations, but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases, subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained and compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where not possible, examples from critical experiments have been used, but the measurement methods could also be applied to subcritical experiments.

  15. The ORSphere Benchmark Evaluation and Its Potential Impact on Nuclear Criticality Safety

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Margaret A. Marshall; J. Blair Briggs

    2013-10-01

In the early 1970s, critical experiments using an unreflected metal sphere of highly enriched uranium (HEU) were performed with the aim of providing a “very accurate description…as an ideal benchmark for calculational methods and cross-section data files.” Two near-critical configurations of the Oak Ridge Sphere (ORSphere) were evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook). The results from those benchmark experiments were then compared with additional unmoderated and unreflected HEU metal benchmark experiment configurations currently found in the ICSBEP Handbook. For basic geometries (spheres, cylinders, and slabs) the eigenvalues calculated using MCNP5 and ENDF/B-VII.0 were within 3σ of their respective benchmark values. There is generally good agreement between calculated and benchmark values for spherical and slab geometry systems. Cylindrical geometry configurations tended to calculate low, including more complex bare HEU metal systems containing cylinders. The ORSphere experiments do not calculate within their 1σ uncertainty, and it is possible that the effect of the measured uncertainties for the GODIVA I benchmark may need to be reevaluated. There is significant scatter in the calculations for the highly correlated ORCEF cylinder experiments, which are constructed from close-fitting HEU discs and annuli. Selection of a nuclear data library can have a larger impact on calculated eigenvalue results than the variation found within calculations of a given experimental series, such as the ORCEF cylinders, using a single nuclear data set.

  16. IAEA coordinated research project (CRP) on 'Analytical and experimental benchmark analyses of accelerator driven systems'

    International Nuclear Information System (INIS)

    Abanades, Alberto; Aliberti, Gerardo; Gohar, Yousry; Talamo, Alberto; Bornos, Victor; Kiyavitskaya, Anna; Carta, Mario; Janczyszyn, Jerzy; Maiorino, Jose; Pyeon, Cheolho; Stanculescu, Alexander; Titarenko, Yury; Westmeier, Wolfram

    2008-01-01

In December 2005, the International Atomic Energy Agency (IAEA) started a Coordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems'. The overall objective of the CRP, performed within the framework of the Technical Working Group on Fast Reactors (TWGFR) of IAEA's Nuclear Energy Department, is to increase the capability of interested Member States in developing and applying advanced reactor technologies in the area of long-lived radioactive waste utilization and transmutation. The specific objective of the CRP is to improve the present understanding of the coupling of an external neutron source (e.g. spallation source) with a multiplicative sub-critical core. The participants are performing computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. The CRP aims at integrating some of the planned experimental demonstration projects of the coupling between a sub-critical core and an external neutron source (e.g. YALINA Booster in Belarus, and Kyoto University's Critical Assembly (KUCA)). The objective of these experimental programs is to validate computational methods, obtain high-energy nuclear data, characterize the performance of sub-critical assemblies driven by external sources, and develop and improve techniques for sub-criticality monitoring. The paper summarizes preliminary results obtained to date for some of the CRP benchmarks. (authors)

  17. The International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    International Nuclear Information System (INIS)

    Briggs, J.B.

    2003-01-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organisation for Economic Cooperation and Development (OECD) - Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Israel, Spain, and Brazil are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments.' The 2003 Edition of the Handbook contains benchmark model specifications for 3070 critical or subcritical configurations that are intended for validating computer codes that calculate effective neutron multiplication and for testing basic nuclear data. (author)

  18. Criticality safety benchmark experiment on 10% enriched uranyl nitrate solution using a 28-cm-thickness slab core

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro; Miyoshi, Yoshinori; Kikuchi, Tsukasa; Watanabe, Shouichi

    2002-01-01

The second series of critical experiments with 10% enriched uranyl nitrate solution using a 28-cm-thick slab core has been performed with the Static Experiment Critical Facility of the Japan Atomic Energy Research Institute. Systematic critical data were obtained by changing the uranium concentration of the fuel solution from 464 to 300 gU/l under various reflector conditions. In this paper, the thirteen critical configurations for water-reflected and unreflected cores are identified and evaluated. The effects of uncertainties in the experimental data on keff are quantified by sensitivity studies. Benchmark model specifications necessary to construct a calculational model are given. The uncertainties in keff included in the benchmark model specifications are approximately 0.1% Δkeff. The thirteen critical configurations are judged to be acceptable benchmark data. Using the benchmark model specifications, sample calculation results are provided with several sets of standard codes and cross section data. (author)
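A sensitivity study of the kind mentioned above typically propagates each experimental uncertainty into keff and combines the contributions in quadrature. The sketch below shows that first-order combination; the sensitivity coefficients and parameter uncertainties are invented placeholders, not values from this evaluation.

```python
import math

# Hedged illustration: combining independent parameter uncertainties into a
# total k_eff uncertainty via first-order sensitivities,
# sigma_keff = sqrt( sum_i (s_i * sigma_i)^2 ).
# Sensitivities s_i = dk/dx_i and uncertainties sigma_i are placeholders.

def combined_keff_uncertainty(sensitivities, uncertainties):
    """Quadrature sum of sensitivity-weighted parameter uncertainties."""
    return math.sqrt(sum((s * u) ** 2 for s, u in zip(sensitivities, uncertainties)))

# e.g. uranium concentration, solution density, reflector position (made up)
sens = [0.0005, 0.0008, 0.0002]   # dk/dx_i per unit of each parameter
sigs = [1.0, 0.5, 1.5]            # one-sigma uncertainty of each parameter

print(combined_keff_uncertainty(sens, sigs))
```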

  19. Analysis of the impact of correlated benchmark experiments on the validation of codes for criticality safety analysis

    International Nuclear Information System (INIS)

    Bock, M.; Stuke, M.; Behler, M.

    2013-01-01

The validation of a code for criticality safety analysis requires the recalculation of benchmark experiments. The selected benchmark experiments are chosen such that they have properties similar to the application case that has to be assessed. A common source of benchmark experiments is the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) compiled by the 'International Criticality Safety Benchmark Evaluation Project' (ICSBEP). In order to take full advantage of the information provided by the individual benchmark descriptions for the application case, the recommended procedure is to perform an uncertainty analysis based on the uncertainties of the experimental results included in most of the benchmark descriptions. Such analyses can be performed by means of the Monte Carlo sampling technique. The consideration of uncertainties is also being introduced in the supplementary sheet of DIN 25478 'Application of computer codes in the assessment of criticality safety'. However, for a correct treatment of uncertainties, taking into account only the individual uncertainties of the benchmark experiments is insufficient. In addition, correlations between benchmark experiments have to be handled correctly. For example, these correlations can arise when different cases of a benchmark experiment share the same components, such as fuel pins or fissile solutions. Thus, manufacturing tolerances of these components (e.g. the diameter of the fuel pellets) have to be considered in a consistent manner in all cases of the benchmark experiment. At the 2012 meeting of the Expert Group on 'Uncertainty Analysis for Criticality Safety Assessment' (UACSA) of the OECD/NEA, a benchmark proposal was outlined that aimed at determining the impact of benchmark correlations on the estimation of the computational bias of the neutron multiplication factor (keff). The analysis presented here is based on this proposal. (orig.)
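The mechanism described above, where a shared component makes two benchmark cases correlated, can be sketched with Monte Carlo sampling. This is a toy illustration under stated assumptions: the linear `keff_model` is a stand-in for a real transport calculation, and all tolerances and nominal values are invented.

```python
import numpy as np

# Hedged sketch of the Monte Carlo sampling technique: two benchmark cases
# share one manufacturing tolerance (a common pellet diameter from the same
# fuel batch) but have independent case-specific parameters. Sampling both
# and propagating them through a surrogate k_eff model yields an estimate
# of the correlation between the two benchmark k_eff values.

rng = np.random.default_rng(42)
n_samples = 10_000

def keff_model(pellet_diameter, enrichment):
    """Dummy linear surrogate for a transport code's k_eff (illustrative only)."""
    return 1.0 + 0.05 * (pellet_diameter - 8.2) + 0.01 * (enrichment - 4.7)

# Shared tolerance: both cases use fuel pellets from the same batch.
pellet = rng.normal(8.2, 0.01, n_samples)   # mm, common to both cases
enr_a = rng.normal(4.7, 0.05, n_samples)    # wt%, case A only
enr_b = rng.normal(4.7, 0.05, n_samples)    # wt%, case B only

keff_a = keff_model(pellet, enr_a)
keff_b = keff_model(pellet, enr_b)

corr = np.corrcoef(keff_a, keff_b)[0, 1]
print(f"estimated benchmark correlation: {corr:.2f}")
```

With these placeholder tolerances the shared pellet diameter contributes half of each case's keff variance, so the estimated correlation comes out near 0.5; ignoring it would understate the combined bias uncertainty.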

  20. IAEA coordinated research project (CRP) on 'Analytical and experimental benchmark analyses of accelerator driven systems'

    Energy Technology Data Exchange (ETDEWEB)

    Abanades, Alberto [Universidad Politecnica de Madrid (Spain); Aliberti, Gerardo; Gohar, Yousry; Talamo, Alberto [ANL, Argonne (United States); Bornos, Victor; Kiyavitskaya, Anna [Joint Institute of Power Eng. and Nucl. Research ' Sosny' , Minsk (Belarus); Carta, Mario [ENEA, Casaccia (Italy); Janczyszyn, Jerzy [AGH-University of Science and Technology, Krakow (Poland); Maiorino, Jose [IPEN, Sao Paulo (Brazil); Pyeon, Cheolho [Kyoto University (Japan); Stanculescu, Alexander [IAEA, Vienna (Austria); Titarenko, Yury [ITEP, Moscow (Russian Federation); Westmeier, Wolfram [Wolfram Westmeier GmbH, Ebsdorfergrund (Germany)

    2008-07-01

In December 2005, the International Atomic Energy Agency (IAEA) started a Coordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems'. The overall objective of the CRP, performed within the framework of the Technical Working Group on Fast Reactors (TWGFR) of IAEA's Nuclear Energy Department, is to increase the capability of interested Member States in developing and applying advanced reactor technologies in the area of long-lived radioactive waste utilization and transmutation. The specific objective of the CRP is to improve the present understanding of the coupling of an external neutron source (e.g. spallation source) with a multiplicative sub-critical core. The participants are performing computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. The CRP aims at integrating some of the planned experimental demonstration projects of the coupling between a sub-critical core and an external neutron source (e.g. YALINA Booster in Belarus, and Kyoto University's Critical Assembly (KUCA)). The objective of these experimental programs is to validate computational methods, obtain high-energy nuclear data, characterize the performance of sub-critical assemblies driven by external sources, and develop and improve techniques for sub-criticality monitoring. The paper summarizes preliminary results obtained to date for some of the CRP benchmarks. (authors)

  1. Processing and benchmarking of the Evaluated Nuclear Data File/B-VIII.0β4 cross-section library by analysis of a series of critical experimental benchmarks using the Monte Carlo code MCNP(X) and NJOY2016

    Directory of Open Access Journals (Sweden)

    Kabach Ouadie

    2017-12-01

To validate the new Evaluated Nuclear Data File (ENDF/B-VIII.0β4) library, 31 different critical cores were selected and used for a benchmark test of the important parameter keff. The utilized libraries were processed using the Nuclear Data Processing Code NJOY2016. The results obtained with the ENDF/B-VIII.0β4 library were compared against those calculated with the ENDF/B-VI.8, ENDF/B-VII.0, and ENDF/B-VII.1 libraries using the Monte Carlo N-Particle (MCNP(X)) code. All the MCNP(X) calculations of keff values with these four libraries were compared with the experimentally measured results available in the International Criticality Safety Benchmark Evaluation Project. The obtained results are discussed and analyzed in this paper.
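Library-to-library comparisons of this kind are often summarized as C/E (calculated-over-experimental) ratios, with deviations quoted in pcm. The sketch below shows that bookkeeping; every number in it is a made-up placeholder, not a result from the study.

```python
# Illustrative sketch only: comparing calculated k_eff values from several
# nuclear data libraries against a measured benchmark value via C/E ratios.
# Library names mirror those in the abstract; the k_eff values are invented.

def c_over_e(calculated: float, experimental: float) -> float:
    """C/E ratio; 1.0 means perfect agreement with the measurement."""
    return calculated / experimental

measured_keff = 1.0000  # a measured critical configuration
results = {
    "ENDF/B-VI.8":     0.9982,
    "ENDF/B-VII.0":    0.9995,
    "ENDF/B-VII.1":    0.9998,
    "ENDF/B-VIII.0b4": 1.0003,
}

for library, keff in results.items():
    dev_pcm = (c_over_e(keff, measured_keff) - 1.0) * 1e5  # deviation in pcm
    print(f"{library:16s} C/E = {keff:.4f}  deviation = {dev_pcm:+.0f} pcm")
```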

  2. Benchmark assemblies of the Los Alamos critical assemblies facility

    International Nuclear Information System (INIS)

    Dowdy, E.J.

    1986-01-01

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described. (author)

  3. Benchmark assemblies of the Los Alamos Critical Assemblies Facility

    International Nuclear Information System (INIS)

    Dowdy, E.J.

    1985-01-01

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described

  5. Preparation of a criticality benchmark based on experiments performed at the RA-6 reactor

    International Nuclear Information System (INIS)

    Bazzana, S.; Blaumann, H.; Marquez Damian, J.I.

    2009-01-01

    The operation and fuel management of a reactor rely on neutronic modeling to predict its behavior in operational and accidental conditions. This modeling uses computational tools and nuclear data that must be contrasted against benchmark experiments to ensure their accuracy. These benchmarks have to be simple enough to model with the desired computer code and must have quantified and bounded uncertainties. The start-up of the RA-6 reactor, the final stage of the conversion and renewal project, allowed us to obtain experimental results with fresh fuel. In this condition the material composition of the fuel elements is precisely known, which contributes to a more precise modeling of the critical condition. These experimental results are useful to evaluate the precision of the models used to design the core, based on U3Si2 fuel and cadmium wires as burnable poisons, for which no data were previously available. The analysis of this information can be used to validate models for the analysis of similar configurations, which is necessary to follow the operational history of the reactor and perform fuel management. The analysis of the results and the generation of the model were done following the methodology established by the International Criticality Safety Benchmark Evaluation Project, which gathers and analyzes experimental data for critical systems. The results were very satisfactory, yielding a multiplication factor of 1.0000 ± 0.0044 for the benchmark model and a calculated value of 0.9980 ± 0.0001 using MCNP 5 and ENDF/B-VI. The use of as-built dimensions and compositions, together with the sensitivity analysis, allowed us to review the design calculations and analyze their precision, accuracy, and error compensation.
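    The comparison of the benchmark multiplication factor (1.0000 ± 0.0044) with the calculated value (0.9980 ± 0.0001) is, in essence, a bias check against the combined uncertainty. A minimal sketch, using the values quoted above:

```python
import math

# Bias check: is the calculated keff consistent with the benchmark value
# within the combined one-sigma uncertainty? Values are those quoted above.
k_benchmark, u_benchmark = 1.0000, 0.0044  # benchmark model keff, uncertainty
k_calc, u_calc = 0.9980, 0.0001            # MCNP 5 + ENDF/B-VI result

bias = k_calc - k_benchmark
u_combined = math.sqrt(u_benchmark**2 + u_calc**2)

print(f"bias = {bias:+.4f}, combined 1-sigma = {u_combined:.4f}")
print("consistent within 1 sigma" if abs(bias) <= u_combined
      else "outside 1 sigma")
```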

  6. Critical Assessment of Metagenome Interpretation-a benchmark of metagenomics software.

    Science.gov (United States)

    Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J; Chia, Burton K H; Denis, Bertrand; Froula, Jeff L; Wang, Zhong; Egan, Robert; Don Kang, Dongwan; Cook, Jeffrey J; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael D; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z; Cuevas, Daniel A; Edwards, Robert A; Saha, Surya; Piro, Vitor C; Renard, Bernhard Y; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C; Woyke, Tanja; Vorholt, Julia A; Schulze-Lefert, Paul; Rubin, Edward M; Darling, Aaron E; Rattei, Thomas; McHardy, Alice C

    2017-11-01

    Methods for assembly, taxonomic profiling and binning are key to interpreting metagenome data, but a lack of consensus about benchmarking complicates performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on highly complex and realistic data sets, generated from ∼700 newly sequenced microorganisms and ∼600 novel viruses and plasmids and representing common experimental setups. Assembly and genome binning programs performed well for species represented by individual genomes but were substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below family level. Parameter settings markedly affected performance, underscoring their importance for program reproducibility. The CAMI results highlight current challenges but also provide a roadmap for software selection to answer specific research questions.

  7. Benchmark criticality experiments for fast fission configuration with high enriched nuclear fuel

    International Nuclear Information System (INIS)

    Sikorin, S.N.; Mandzik, S.G.; Polazau, S.A.; Hryharovich, T.K.; Damarad, Y.V.; Palahina, Y.A.

    2014-01-01

    Benchmark criticality experiments on a fast heterogeneous configuration with high enriched uranium (HEU) nuclear fuel were performed using the 'Giacint' critical assembly of the Joint Institute for Power and Nuclear Research - Sosny (JIPNR-Sosny) of the National Academy of Sciences of Belarus. The critical assembly core comprised casing-free fuel assemblies with a 34.8 mm wrench size. The fuel assemblies contain 19 fuel rods of two types. The first type is metallic uranium fuel rods with 90% enrichment in U-235; the second is uranium dioxide fuel rods with 36% enrichment in U-235. The total fuel rod length is 620 mm, and the active fuel length is 500 mm. The outer fuel rod diameter is 7 mm, the wall is 0.2 mm thick, and the fuel material diameter is 6.4 mm. The clad material is stainless steel. The side radial reflector consists of an inner layer of beryllium and an outer layer of stainless steel. The top and bottom axial reflectors are of stainless steel. The analysis of the experimental results obtained from these benchmark experiments, performed by developing detailed calculation models and carrying out simulations of the different experiments, is presented. The sensitivity of the obtained results to the material specifications and the modeling details was examined. The analyses used the MCNP and MCU computer programs. This paper presents the experimental and analytical results. (authors)

  8. MCNP benchmark analyses of critical experiments for the Space Nuclear Thermal Propulsion program

    International Nuclear Information System (INIS)

    Selcow, E.C.; Cerbone, R.J.; Ludewig, H.; Mughabghab, S.F.; Schmidt, E.; Todosow, M.; Parma, E.J.; Ball, R.M.; Hoovler, G.S.

    1993-01-01

    Benchmark analyses have been performed of Particle Bed Reactor (PBR) critical experiments (CX) using the MCNP radiation transport code. The experiments have been conducted at the Sandia National Laboratory reactor facility in support of the Space Nuclear Thermal Propulsion (SNTP) program. The test reactor is a nineteen element water moderated and reflected thermal system. A series of integral experiments have been carried out to test the capabilities of the radiation transport codes to predict the performance of PBR systems. MCNP was selected as the preferred radiation analysis tool for the benchmark experiments. Comparison between experimental and calculational results indicate close agreement. This paper describes the analyses of benchmark experiments designed to quantify the accuracy of the MCNP radiation transport code for predicting the performance characteristics of PBR reactors

  9. MCNP benchmark analyses of critical experiments for the Space Nuclear Thermal Propulsion program

    Science.gov (United States)

    Selcow, Elizabeth C.; Cerbone, Ralph J.; Ludewig, Hans; Mughabghab, Said F.; Schmidt, Eldon; Todosow, Michael; Parma, Edward J.; Ball, Russell M.; Hoovler, Gary S.

    1993-01-01

    Benchmark analyses have been performed of Particle Bed Reactor (PBR) critical experiments (CX) using the MCNP radiation transport code. The experiments have been conducted at the Sandia National Laboratory reactor facility in support of the Space Nuclear Thermal Propulsion (SNTP) program. The test reactor is a nineteen element water moderated and reflected thermal system. A series of integral experiments have been carried out to test the capabilities of the radiation transport codes to predict the performance of PBR systems. MCNP was selected as the preferred radiation analysis tool for the benchmark experiments. Comparison between experimental and calculational results indicate close agreement. This paper describes the analyses of benchmark experiments designed to quantify the accuracy of the MCNP radiation transport code for predicting the performance characteristics of PBR reactors.

  10. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    Gilliam, D.M.; Briesmeister, J.F.

    1994-01-01

    A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235U, 239Pu, 238U, and 237Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a 'light water' S(α,β) scattering kernel

  11. RECENT ADDITIONS OF CRITICALITY SAFETY RELATED INTEGRAL BENCHMARK DATA TO THE ICSBEP AND IRPHEP HANDBOOKS

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2009-09-01

    High-quality integral benchmark experiments have always been a priority for criticality safety. However, interest in integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of future criticality safety needs to support next generation reactor and advanced fuel cycle concepts. The importance of drawing upon existing benchmark data is becoming more apparent because of dwindling availability of critical facilities worldwide and the high cost of performing new experiments. Integral benchmark data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the International Handbook of Reactor Physics Benchmark Experiments are widely used. Benchmark data have been added to these two handbooks since the last Nuclear Criticality Safety Division Topical Meeting in Knoxville, Tennessee (September 2005). This paper highlights these additions.

  12. Recent additions of criticality safety related integral benchmark data to the ICSBEP and IRPHEP handbooks

    International Nuclear Information System (INIS)

    Briggs, J. B.; Scott, L.; Rugama, Y.; Sartori, E.

    2009-01-01

    High-quality integral benchmark experiments have always been a priority for criticality safety. However, interest in integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of future criticality safety needs to support next generation reactor and advanced fuel cycle concepts. The importance of drawing upon existing benchmark data is becoming more apparent because of dwindling availability of critical facilities worldwide and the high cost of performing new experiments. Integral benchmark data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the International Handbook of Reactor Physics Benchmark Experiments are widely used. Benchmark data have been added to these two handbooks since the last Nuclear Criticality Safety Division Topical Meeting in Knoxville, Tennessee (September 2005). This paper highlights these additions. (authors)

  13. Recent Additions of Criticality Safety Related Integral Benchmark Data to the ICSBEP and IRPhEP Handbooks

    International Nuclear Information System (INIS)

    Briggs, J. Blair; Scott, Lori; Rugama, Yolanda; Sartori, Enrico

    2009-01-01

    High-quality integral benchmark experiments have always been a priority for criticality safety. However, interest in integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of future criticality safety needs to support next generation reactor and advanced fuel cycle concepts. The importance of drawing upon existing benchmark data is becoming more apparent because of dwindling availability of critical facilities worldwide and the high cost of performing new experiments. Integral benchmark data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the International Handbook of Reactor Physics Benchmark Experiments are widely used. Benchmark data have been added to these two handbooks since the last Nuclear Criticality Safety Division Topical Meeting in Knoxville, Tennessee (September 2005). This paper highlights these additions.

  14. Criticality benchmark comparisons leading to cross-section upgrades

    International Nuclear Information System (INIS)

    Alesso, H.P.; Annese, C.E.; Heinrichs, D.P.; Lloyd, W.R.; Lent, E.M.

    1993-01-01

    For several years, criticality benchmark calculations have been performed with COG. COG is a point-wise Monte Carlo code developed at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The principal consideration in developing COG was that the resulting calculation would be as accurate as the point-wise cross-section data, since no physics computational approximations were used. The objective of this paper is to report on COG results for criticality benchmark experiments, in concert with MCNP comparisons, which are resulting in corrections and upgrades to the point-wise ENDL cross-section data libraries. Benchmarking discrepancies reported here indicated difficulties in the Evaluated Nuclear Data Livermore (ENDL) cross-sections for U-238 at thermal neutron energies. This led to a re-evaluation and selection of the appropriate cross-section values from the several cross-section sets available (ENDL, ENDF/B-V). Further cross-section upgrades are anticipated

  15. Benchmarking criticality analysis of TRIGA fuel storage racks.

    Science.gov (United States)

    Robinson, Matthew Loren; DeBey, Timothy M; Higginbotham, Jack F

    2017-01-01

    A criticality analysis was benchmarked to sub-criticality measurements of the hexagonal fuel storage racks at the United States Geological Survey TRIGA MARK I reactor in Denver. These racks, which hold up to 19 fuel elements each, are arranged at 0.61 m (2 ft) spacings around the outer edge of the reactor. A 3-dimensional model of the racks was created using MCNP5, and the model was verified experimentally by comparison to measured subcritical multiplication data collected in an approach-to-critical loading of two of the racks. The validated model was then used to show that in the extreme condition where the entire circumference of the pool is lined with racks loaded with used fuel, the storage array is subcritical with a k value of about 0.71, well below the regulatory limit of 0.8. A model was also constructed of the rectangular 2×10 fuel storage array used in many other TRIGA reactors to validate the technique against the original TRIGA licensing sub-critical analysis performed in 1966. The fuel used in this study was standard 20% enriched (LEU) aluminum- or stainless-steel-clad TRIGA fuel. Copyright © 2016. Published by Elsevier Ltd.
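    The approach-to-critical loading mentioned above rests on the inverse-multiplication (1/M) technique: as fuel is added, detector counts rise and 1/M falls, and extrapolating 1/M to zero estimates the critical loading. A minimal sketch with made-up illustrative count data (not measurements from this study):

```python
# Inverse-multiplication (1/M) extrapolation sketch. The loadings and
# count rates below are illustrative placeholders, not measured data.
baseline = 100.0                          # source-only count rate C0
loadings = [4, 8, 12, 16]                 # fuel elements loaded per step
counts = [125.0, 167.0, 250.0, 500.0]     # detector count rate per step

inv_m = [baseline / c for c in counts]    # 1/M = C0 / C

# Linear extrapolation of the last two points to 1/M = 0
x1, x2 = loadings[-2], loadings[-1]
y1, y2 = inv_m[-2], inv_m[-1]
slope = (y2 - y1) / (x2 - x1)
critical_estimate = x2 - y2 / slope
print(f"estimated critical loading ~ {critical_estimate:.1f} elements")
```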

  16. RELAP5/MOD2 benchmarking study: Critical heat flux under low-flow conditions

    International Nuclear Information System (INIS)

    Ruggles, E.; Williams, P.T.

    1990-01-01

    Experimental studies by Mishima and Ishii performed at Argonne National Laboratory, and subsequent experimental studies by Mishima and Nishihara, have investigated the critical heat flux (CHF) in low-pressure, low-mass-flux situations where low-quality burnout may occur. These flow situations are relevant to long-term decay heat removal after a loss of forced flow. The transition from burnout at high quality to burnout at low quality causes very low burnout heat flux values. Mishima and Ishii postulated a model for low-quality burnout based on the flow regime transition from churn-turbulent to annular flow. This model was validated by both flow visualization and burnout measurements. Griffith et al. also studied CHF in low-mass-flux, low-pressure situations and correlated data for upflows, counter-current flows, and downflows with the local fluid conditions. A RELAP5/MOD2 CHF benchmarking study was carried out to investigate the performance of the code under low-flow conditions. Data from the experimental study by Mishima and Ishii were the basis for the benchmark comparisons

  17. Criticality experiments to provide benchmark data on neutron flux traps

    International Nuclear Information System (INIS)

    Bierman, S.R.

    1988-06-01

    The experimental measurements covered by this report were designed to provide benchmark-type data on water-moderated LWR-type fuel arrays containing neutron flux traps. The experiments were performed at the US Department of Energy Hanford Critical Mass Laboratory, operated by Pacific Northwest Laboratory. The experimental assemblies consisted of 2 × 2 arrays of 4.31 wt% 235U-enriched UO2 fuel rods, uniformly arranged in water on a 1.891 cm square center-to-center spacing. Neutron flux traps were created between the fuel units using metal plates containing varying amounts of boron. Measurements were made to determine the effect that boron loading and the distance between the fuel and the flux trap had on the amount of fuel required for criticality. Also, measurements were made, using the pulsed neutron source technique, to determine the effect of boron loading on the effective neutron multiplication constant. On two assemblies, reaction rate measurements were made using solid state track recorders to determine absolute fission rates in 235U and 238U. 14 refs., 12 figs., 7 tabs

  18. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. Excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio obtained with the BUGLE-93 library for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used
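    The C/M summary statistics quoted above are simply the arithmetic mean and standard deviation of per-dosimeter calculated-to-measured ratios. A minimal sketch with illustrative placeholder ratios (not the report's data):

```python
import statistics

# Mean and sample standard deviation of per-dosimeter C/M ratios.
# The ratios below are illustrative placeholders, not the report's data.
cm_ratios = [0.91, 0.94, 0.92, 0.95, 0.93, 0.90, 0.96, 0.92]

mean_cm = statistics.mean(cm_ratios)
std_cm = statistics.stdev(cm_ratios)   # sample standard deviation
print(f"average C/M = {mean_cm:.2f} +/- {std_cm:.2f}")
```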

  19. Critical Assessment of Metagenome Interpretation – a benchmark of computational metagenomics software

    Science.gov (United States)

    Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D.; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z.; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J.; Chia, Burton K. H.; Denis, Bertrand; Froula, Jeff L.; Wang, Zhong; Egan, Robert; Kang, Dongwan Don; Cook, Jeffrey J.; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W.; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z.; Cuevas, Daniel A.; Edwards, Robert A.; Saha, Surya; Piro, Vitor C.; Renard, Bernhard Y.; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C.; Woyke, Tanja; Vorholt, Julia A.; Schulze-Lefert, Paul; Rubin, Edward M.; Darling, Aaron E.; Rattei, Thomas; McHardy, Alice C.

    2018-01-01

    In metagenome analysis, computational methods for assembly, taxonomic profiling and binning are key components facilitating downstream biological data interpretation. However, a lack of consensus about benchmarking datasets and evaluation metrics complicates proper performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on datasets of unprecedented complexity and realism. Benchmark metagenomes were generated from ~700 newly sequenced microorganisms and ~600 novel viruses and plasmids, including genomes with varying degrees of relatedness to each other and to publicly available ones and representing common experimental setups. Across all datasets, assembly and genome binning programs performed well for species represented by individual genomes, while performance was substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below the family level. Parameter settings substantially impacted performance, underscoring the importance of program reproducibility. While highlighting current challenges in computational metagenomics, the CAMI results provide a roadmap for software selection to answer specific research questions. PMID:28967888

  20. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted, and the future of the two projects is discussed.

  1. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    International Nuclear Information System (INIS)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-01-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR-06 are highlighted, and the future of the two projects is discussed

  2. An improved benchmark model for the Big Ten critical assembly - 021

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    2010-01-01

    A new benchmark specification is developed for the BIG TEN uranium critical assembly. The assembly has a fast spectrum, and its core contains approximately 10 wt.% enriched uranium. Detailed specifications for the benchmark are provided, and results from the MCNP5 Monte Carlo code using a variety of nuclear-data libraries are given for this benchmark and two others. (authors)

  3. Calculational study of benchmark critical experiments on high-enriched uranyl nitrate solution systems

    International Nuclear Information System (INIS)

    Oh, I.; Rothe, R.E.

    1978-01-01

    Criticality calculations on minimally reflected, concrete-reflected, and plastic-reflected single tanks and on arrays of cylinders reflected by concrete and plastic have been performed using the KENO-IV code with 16-group Hansen-Roach neutron cross sections. The fissile material was high-enriched (93.17% 235U) uranyl nitrate [UO2(NO3)2] solution. Calculated results are compared with those from a benchmark critical experiments program to provide the best possible verification of the calculational technique. The calculated keff's underestimate the critical condition by an average of 1.28% for the minimally reflected single tanks, 1.09% for the concrete-reflected single tanks, 0.60% for the plastic-reflected single tanks, 0.75% for the concrete-reflected arrays of cylinders, and 0.51% for the plastic-reflected arrays of cylinders. More than half of the present comparisons were within 1% of the experimental values, and the worst calculational-experimental discrepancy was 2.3% in keff for the KENO calculations

  4. Effects of neutron data libraries and criticality codes on IAEA criticality benchmark problems

    International Nuclear Information System (INIS)

    Sarker, Md.M.; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-10-01

    In order to compare the effects of neutron data libraries and criticality codes on thermal reactors (LWRs), the IAEA criticality benchmark calculations have been performed. The experiments selected in this study include TRX-1 and TRX-2, which have a simple geometric configuration. The reactor lattice calculation codes WIMS-D/4, MCNP-4, JACS (MGCL, KENO), and SRAC were used in the present calculations. The TRX cores were analyzed by WIMS-D/4 using the original WIMS library, and also by MCNP-4, JACS (MGCL, KENO), and SRAC using libraries generated from the JENDL-3 and ENDF/B-IV nuclear data files. An intercomparison of the above-mentioned code systems and cross-section libraries was performed by analyzing the LWR benchmark experiments TRX-1 and TRX-2. The TRX cores were also analyzed under supercritical and subcritical conditions, and these results were compared. In the critical condition, the results were in good agreement. For the supercritical and subcritical conditions, however, the differences among the results obtained with the different cross-section libraries become larger than for the critical condition. (author)

  5. Monte Carlo code criticality benchmark comparisons for waste packaging

    International Nuclear Information System (INIS)

    Alesso, H.P.; Annese, C.E.; Buck, R.M.; Pearson, J.S.; Lloyd, W.R.

    1992-07-01

    COG is a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The objective of this paper is to report on COG results for criticality benchmark experiments both on a Cray mainframe and on an HP 9000 workstation. COG has recently been ported to workstations to improve its accessibility to a wider community of users. COG has some similarities to a number of other computer codes used in the shielding and criticality community. The recently introduced high-performance reduced instruction set (RISC) UNIX workstations provide computational power that approaches that of mainframes at a fraction of the cost. A version of COG is currently being developed for the Hewlett Packard 9000/730 computer with a UNIX operating system. Subsequent porting operations will move COG to SUN, DEC, and IBM workstations. In addition, a CAD system for preparation of the geometry input for COG is being developed. In July 1977, Babcock & Wilcox Co. (B&W) was awarded a contract to conduct a series of critical experiments that simulated close-packed storage of LWR-type fuel. These experiments provided data for benchmarking and validating calculational methods used in predicting the k-effective of nuclear fuel storage in close-packed, neutron-poisoned arrays. Low-enriched UO2 fuel pins in water-moderated lattices in fuel storage represent a challenging criticality calculation for Monte Carlo codes, particularly when the fuel pins extend out of the water. COG and KENO calculational results for these criticality benchmark experiments are presented

  6. Effects of existing evaluated nuclear data files on neutronics characteristics of the BFS-62-3A critical assembly benchmark model

    International Nuclear Information System (INIS)

    Semenov, Mikhail

    2002-11-01

    This report continues the study of experiments performed on the BFS-62-3A critical assembly in Russia. The objective of the work is to determine the effect of cross-section uncertainties on reactor neutronics parameters as applied to the hybrid core of the BN-600 reactor at Beloyarskaya NPP. A two-dimensional benchmark model of BFS-62-3A was created specifically for these purposes, and the experimental values were reduced to it. The benchmark characteristics for this assembly are 1) criticality; 2) central fission rate ratios (spectral indices); and 3) fission rate distributions in the stainless steel reflector. The effects of nuclear data libraries have been studied by comparing results calculated using the available modern data libraries - ENDF/B-V, ENDF/B-VI, ENDF/B-VI-PT, JENDL-3.2 and ABBN-93. All results were computed by the Monte Carlo method with continuous-energy cross sections. The cross sections of the major isotopes were checked against a wide collection of criticality benchmarks. It was shown that ENDF/B-V data underestimate the criticality of fast reactor systems by up to 2% Δk. For the remaining libraries, the differences from one another in criticality for BFS-62-3A are around 0.6% Δk. However, taking into account the results obtained for other fast reactor benchmarks (including steel-reflected ones), it may be concluded that the difference in criticality calculation results can reach 1% Δk. This value is in good agreement with the cross-section uncertainty evaluated for the BN-600 hybrid core (±0.6% Δk). This work is related to the JNC-IPPE Collaboration on Experimental Investigation of Excess Weapons Grade Pu Disposition in the BN-600 Reactor Using the BFS-2 Facility. (author)

  7. The International Criticality Safety Benchmark Evaluation Project on the Internet

    International Nuclear Information System (INIS)

    Briggs, J.B.; Brennan, S.A.; Scott, L.

    2000-01-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in October 1992 by the US Department of Energy's (DOE's) defense programs and is documented in the transactions of numerous American Nuclear Society and International Criticality Safety Conferences. The work of the ICSBEP is documented as an Organization for Economic Cooperation and Development (OECD) handbook, the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The ICSBEP Internet site was established in 1996; its address is http://icsbep.inel.gov/icsbep. A copy of the ICSBEP home page is shown in Fig. 1. The site contains five primary links, and internal sublinks to other relevant sites are also provided. A brief description of each of the five primary links is given.

  8. Criticality safety benchmark evaluation project: Recovering the past

    Energy Technology Data Exchange (ETDEWEB)

    Trumble, E.F.

    1997-06-01

    A very brief summary of the Criticality Safety Benchmark Evaluation Project of the Westinghouse Savannah River Company is provided in this paper. The purpose of the project is to provide a source of evaluated criticality safety experiments in an easily usable format. Another project goal is to search for any experiments that may have been lost or contain discrepancies, and to determine if they can be used. Results of evaluated experiments are being published as US DOE handbooks.

  9. Criticality benchmarks for COG: A new point-wise Monte Carlo code

    International Nuclear Information System (INIS)

    Alesso, H.P.; Pearson, J.; Choi, J.S.

    1989-01-01

    COG is a new point-wise Monte Carlo code being developed and tested at LLNL for the Cray computer. It solves the Boltzmann equation for the transport of neutrons, photons, and (in future versions) charged particles. Techniques included in the code for modifying the random walk of particles make COG most suitable for solving deep-penetration (shielding) problems. However, its point-wise cross sections also make it effective for a wide variety of criticality problems. COG has some similarities to a number of other computer codes used in the shielding and criticality community. These include the Lawrence Livermore National Laboratory (LLNL) codes TART and ALICE, the Los Alamos National Laboratory code MCNP, the Oak Ridge National Laboratory codes 05R, 06R, KENO, and MORSE, the SACLAY code TRIPOLI, and the MAGI code SAM. Each code differs somewhat in its geometry input and its random-walk modification options. Validating COG consists in part of running benchmark calculations against critical experiments as well as against other codes. The objective of this paper is to present calculational results for a variety of critical benchmark experiments using COG, and to present the resulting code bias. Numerous benchmark calculations have been completed for a wide variety of critical experiments, which generally involve both simple and complex physical problems. The COG results reported in this paper have been excellent.

  10. Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation

    International Nuclear Information System (INIS)

    Bess, John D.; Briggs, J. Blair; Nigg, David W.

    2009-01-01

    One of the challenges that today's new workforce of nuclear criticality safety engineers faces is being asked to assess nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.

  11. Assessment CANDU physics codes using experimental data - part 1: criticality measurement

    International Nuclear Information System (INIS)

    Roh, Gyu Hong; Choi, Hang Bok; Jeong, Chang Joon

    2001-08-01

    In order to assess the applicability of the MCNP-4B code to heavy-water-moderated, light-water-cooled, pressure-tube-type reactors, MCNP-4B physics calculations have been carried out for the Deuterium Critical Assembly (DCA), and the results were compared with experimental data. In this study, key safety parameters such as the multiplication factor, void coefficient, local power peaking factor, and bundle power distribution in the scattered core were simulated. In order to use the cross-section data consistently for the fuels to be analyzed in the future, new MCNP libraries have been generated from ENDF/B-VI release 3. In general, the MCNP-4B calculation results show good agreement with the experimental data of the DCA core. After benchmarking MCNP-4B against the available experimental data, it will be used as the reference tool for benchmarking design and analysis codes for advanced CANDU fuels.

  12. Collection of experimental data for fusion neutronics benchmark

    International Nuclear Information System (INIS)

    Maekawa, Fujio; Yamamoto, Junji; Ichihara, Chihiro; Ueki, Kotaro; Ikeda, Yujiro.

    1994-02-01

    Over the past decade or more, many benchmark experiments for fusion neutronics have been carried out at two principal D-T neutron sources, FNS at JAERI and OKTAVIAN at Osaka University, and valuable experimental data have been accumulated. As an activity of the Fusion Reactor Physics Subcommittee of the Reactor Physics Committee, these experimental data are compiled in this report. (author)

  13. Criticality Benchmark Results Using Various MCNP Data Libraries

    International Nuclear Information System (INIS)

    Frankle, Stephanie C.

    1999-01-01

    A suite of 86 criticality benchmarks has recently been implemented in MCNP as part of the nuclear data validation effort. These benchmarks have been run using two sets of MCNP continuous-energy neutron data: ENDF/B-VI-based data through Release 2 (ENDF60) and ENDF/B-V-based data. New ENDF/B-VI evaluations were completed for a number of the important nuclides, such as the isotopes of H, Be, C, N, O, Fe, Ni, 235,238U, 237Np, and 239,240Pu. When examining the results of these calculations for the five major categories of 233U, intermediate-enriched 235U (IEU), highly enriched 235U (HEU), 239Pu, and mixed-metal assemblies, we find the following: (1) The new evaluations for 9Be, 12C, and 14N show no net effect on k-eff; (2) There is a consistent decrease in k-eff for all of the solution assemblies for ENDF/B-VI due to 1H and 16O, moving k-eff further from the benchmark value for uranium solutions and closer to the benchmark value for plutonium solutions; (3) k-eff decreased for the ENDF/B-VI Fe isotopic data, moving the calculated k-eff further from the benchmark value; (4) k-eff decreased for the ENDF/B-VI Ni isotopic data, moving the calculated k-eff closer to the benchmark value; (5) The W data remained unchanged and tended to calculate slightly higher than the benchmark values; (6) For metal uranium systems, the ENDF/B-VI data for 235U tend to decrease k-eff while the 238U data tend to increase it; the net result depends on the energy spectrum and material specifications of the particular assembly; (7) For more intermediate-energy systems, the changes in the 235,238U evaluations tend to increase k-eff. For the mixed graphite and normal-uranium-reflected assembly, a large increase in k-eff due to changes in the 238U evaluation moved the calculated k-eff much closer to the benchmark value;
    (8) There is little change in k-eff for the uranium solutions due to the new 235,238U evaluations; and (9) There is little change in k-eff
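Library-to-library comparisons of this kind reduce to tracking the bias of calculated k-eff against the benchmark values across a suite of cases. A minimal sketch of that bookkeeping, using invented numbers and hypothetical benchmark names rather than any data from the paper:

```python
# Hypothetical k-eff validation summary: compare calculated k-eff values
# against benchmark (experimental) values and report the mean bias and
# its spread. All numbers are illustrative placeholders.

benchmarks = {
    # name: (k_calc, k_benchmark)
    "HEU-MET-FAST-001": (0.9985, 1.0000),
    "PU-SOL-THERM-011": (1.0032, 1.0000),
    "IEU-COMP-THERM-002": (0.9991, 1.0000),
}

def bias_summary(results):
    """Return (mean bias, sample std dev) of k_calc - k_benchmark."""
    diffs = [calc - bench for calc, bench in results.values()]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean, var ** 0.5

mean_bias, std_bias = bias_summary(benchmarks)
print(f"mean bias = {mean_bias:+.5f}, std dev = {std_bias:.5f}")
```

A positive mean bias indicates the library/code combination tends to overpredict criticality for that category of assemblies; the spread feeds into the subcritical margin discussed elsewhere in these records.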

  14. Assessment of the available {sup 233}U cross-section evaluations in the calculation of critical benchmark experiments

    Energy Technology Data Exchange (ETDEWEB)

    Leal, L.C.; Wright, R.Q.

    1996-10-01

    In this report we investigate the adequacy of the available {sup 233}U cross-section data for calculation of experimental critical systems. The {sup 233}U evaluations provided in two evaluated nuclear data libraries, the U.S. Data Bank [ENDF/B (Evaluated Nuclear Data Files)] and the Japanese Data Bank [JENDL (Japanese Evaluated Nuclear Data Library)] are examined. Calculations were performed for six thermal and ten fast experimental critical systems using the S{sub n} transport XSDRNPM code. To verify the performance of the {sup 233}U cross-section data for nuclear criticality safety application in which the neutron energy spectrum is predominantly in the epithermal energy range, calculations of four numerical benchmark systems with energy spectra in the intermediate energy range were done. These calculations serve only as an indication of the difference in calculated results that may be expected when the two {sup 233}U cross-section evaluations are used for problems with neutron spectra in the intermediate energy range. Additionally, comparisons of experimental and calculated central fission rate ratios were also made. The study has suggested that an ad hoc {sup 233}U evaluation based on the JENDL library provides better overall results for both fast and thermal experimental critical systems.

  15. Assessment of the Available (Sup 233)U Cross Sections Evaluations in the Calculation of Critical Benchmark Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Leal, L.C.

    1993-01-01

    In this report we investigate the adequacy of the available {sup 233}U cross-section data for calculation of experimental critical systems. The {sup 233}U evaluations provided in two evaluated nuclear data libraries, the U.S. Data Bank [ENDF/B (Evaluated Nuclear Data Files)] and the Japanese Data Bank [JENDL (Japanese Evaluated Nuclear Data Library)] are examined. Calculations were performed for six thermal and ten fast experimental critical systems using the S{sub n} transport XSDRNPM code. To verify the performance of the {sup 233}U cross-section data for nuclear criticality safety application in which the neutron energy spectrum is predominantly in the epithermal energy range, calculations of four numerical benchmark systems with energy spectra in the intermediate energy range were done. These calculations serve only as an indication of the difference in calculated results that may be expected when the two {sup 233}U cross-section evaluations are used for problems with neutron spectra in the intermediate energy range. Additionally, comparisons of experimental and calculated central fission rate ratios were also made. The study has suggested that an ad hoc {sup 233}U evaluation based on the JENDL library provides better overall results for both fast and thermal experimental critical systems.

  16. Growth and Expansion of the International Criticality Safety Benchmark Evaluation Project and the Newly Organized International Reactor Physics Experiment Evaluation Project

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2007-05-01

    Agency (NEA) Nuclear Science Committee (NSC). The project was endorsed as an official activity of the NSC in June of 2003. The IRPhEP is patterned after its predecessor, the ICSBEP, but focuses on other integral measurements in addition to the critical configuration, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous types of measurements. The two projects are closely coordinated to avoid duplication of effort and to leverage limited resources toward a common goal. The purpose of the IRPhEP is to provide an extensively peer-reviewed set of reactor physics related integral benchmark data that can be used by reactor designers and safety analysts to validate the analytical tools used to design next-generation reactors and establish the safety basis for operation of these reactors. While coordination and administration of the IRPhEP take place at an international level, each participating country is responsible for the administration, technical direction, and priorities of the project within its borders. The work of the IRPhEP is documented in an OECD NEA Handbook entitled, “International Handbook of Evaluated Reactor Physics Benchmark Experiments.” The first edition of this Handbook, the 2006 Edition, spans over 2000 pages and contains data from 16 different experimental series that were

  17. A Critical Thinking Benchmark for a Department of Agricultural Education and Studies

    Science.gov (United States)

    Perry, Dustin K.; Retallick, Michael S.; Paulsen, Thomas H.

    2014-01-01

    Due to an ever changing world where technology seemingly provides endless answers, today's higher education students must master a new skill set reflecting an emphasis on critical thinking, problem solving, and communications. The purpose of this study was to establish a departmental benchmark for critical thinking abilities of students majoring…

  18. EBR-II Reactor Physics Benchmark Evaluation Report

    Energy Technology Data Exchange (ETDEWEB)

    Pope, Chad L. [Idaho State Univ., Pocatello, ID (United States); Lum, Edward S [Idaho State Univ., Pocatello, ID (United States); Stewart, Ryan [Idaho State Univ., Pocatello, ID (United States); Byambadorj, Bilguun [Idaho State Univ., Pocatello, ID (United States); Beaulieu, Quinton [Idaho State Univ., Pocatello, ID (United States)

    2017-12-28

    This report provides a reactor physics benchmark evaluation with associated uncertainty quantification for the critical configuration of the April 1986 Experimental Breeder Reactor II Run 138B core configuration.

  19. Computer simulation of Masurca critical and subcritical experiments. Muse-4 benchmark. Final report

    International Nuclear Information System (INIS)

    2006-01-01

    The efficient and safe management of spent fuel produced during the operation of commercial nuclear power plants is an important issue. In this context, partitioning and transmutation (P and T) of minor actinides and long-lived fission products can play an important role, significantly reducing the burden on geological repositories of nuclear waste and allowing their more effective use. Various systems, including existing reactors, fast reactors and advanced systems, have been considered to optimise the transmutation scheme. Recently, many countries have shown interest in accelerator-driven systems (ADS) due to their potential for transmutation of minor actinides. Much R and D work is still required in order to demonstrate their desired capability as a whole system, and the current analysis methods and nuclear data for minor actinide burners are not as well established as those for conventionally-fuelled systems. Recognizing a need for code and data validation in this area, the Nuclear Science Committee of the OECD/NEA has organised various theoretical benchmarks on ADS burners. Many improvements and clarifications concerning nuclear data and calculation methods have been achieved. However, some significant discrepancies for important parameters are not fully understood and still require clarification. Therefore, this international benchmark based on MASURCA experiments, which were carried out under the auspices of the EC 5th Framework Programme, was launched in December 2001 in co-operation with the CEA (France) and CIEMAT (Spain). The benchmark model was oriented to compare simulation predictions based on available codes and nuclear data libraries with experimental data related to TRU transmutation, criticality constants and time evolution of the neutronic flux following source variation, within liquid metal fast subcritical systems. A total of 16 different institutions participated in this first experiment-based benchmark, providing 34 solutions.
The large number

  20. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
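The code-verification practice recommended above rests on comparing a computed solution against a manufactured (or classical analytical) solution and checking that the discretization error shrinks at the scheme's formal order as the grid is refined. A minimal sketch of that check; the grid spacings and error norms below are illustrative placeholders, not values from the paper:

```python
# Observed order of accuracy from error norms on two grids, the standard
# acceptance test in code verification by manufactured solutions:
#   p = ln(E_coarse / E_fine) / ln(h_coarse / h_fine)
import math

def observed_order(h_coarse, err_coarse, h_fine, err_fine):
    """Observed convergence order from errors measured against an exact
    (manufactured) solution on two grid resolutions."""
    return math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)

# Hypothetical errors for a nominally 2nd-order scheme on grids h and h/2
p = observed_order(h_coarse=0.10, err_coarse=4.0e-4,
                   h_fine=0.05, err_fine=1.0e-4)
print(f"observed order = {p:.2f}")  # 2.00 here: error fell 4x as h halved
```

If the observed order matches the formal order of the scheme, the coding of the discretization is considered verified for that problem class; a mismatch signals a coding or consistency error.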

  1. 2010 Criticality Accident Alarm System Benchmark Experiments At The CEA Valduc SILENE Facility

    International Nuclear Information System (INIS)

    Miller, Thomas Martin; Dunn, Michael E.; Wagner, John C.; McMahan, Kimberly L.; Authier, Nicolas; Jacquet, Xavier; Rousseau, Guillaume; Wolff, Herve; Piot, Jerome; Savanier, Laurence; Baclet, Nathalie; Lee, Yi-kang; Masse, Veronique; Trama, Jean-Christophe; Gagnier, Emmanuel; Naury, Sylvie; Lenain, Richard; Hunter, Richard; Kim, Soon; Dulik, George Michael; Reynolds, Kevin H.

    2011-01-01

    Several experiments were performed at the CEA Valduc SILENE reactor facility that are intended to be published as evaluated benchmark experiments in the ICSBEP Handbook. These evaluated benchmarks will be useful for the verification and validation of radiation transport codes and evaluated nuclear data, particularly those used in the analysis of criticality accident alarm systems (CAASs). During these experiments, SILENE was operated in pulsed mode in order to be representative of a criticality accident, which is rare among shielding benchmarks. Measurements of the neutron flux were made with neutron activation foils, and measurements of photon doses were made with TLDs. Also unique to these experiments was the presence of several detectors used in actual CAASs, which allowed observation of their behavior during an actual critical pulse. This paper presents the preliminary measurement data currently available from these experiments, together with comparisons of preliminary computational results from Scale and TRIPOLI-4 against those data.
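Activation-foil dosimetry of the kind used here infers the neutron fluence at the foil position from the radioactivity induced by the pulse. A simplified sketch of that unfolding for a single foil and a short pulse, neglecting self-shielding and spectrum effects; all input values are hypothetical, not data from the SILENE experiments:

```python
# For a short pulse: activated atoms N_act = n_atoms * sigma * fluence,
# and the activity counted after a decay time t_d is
#   A = lambda * N_act * exp(-lambda * t_d).
# Inverting gives the fluence seen by the foil.
import math

def fluence_from_activity(activity_bq, n_atoms, sigma_cm2,
                          half_life_s, decay_time_s):
    """Neutron fluence (n/cm^2) from a foil activity measured after a pulse."""
    lam = math.log(2) / half_life_s                      # decay constant (1/s)
    n_act = activity_bq * math.exp(lam * decay_time_s) / lam  # atoms at pulse
    return n_act / (n_atoms * sigma_cm2)

# Hypothetical gold foil: 1e20 atoms, 98.7 b thermal capture cross section,
# Au-198 half-life ~2.695 d, counted 1 hour after the pulse
phi = fluence_from_activity(activity_bq=5.0e4, n_atoms=1.0e20,
                            sigma_cm2=98.7e-24,
                            half_life_s=2.695 * 86400,
                            decay_time_s=3600.0)
print(f"fluence = {phi:.3e} n/cm^2")
```

In practice, several foil materials with different reaction thresholds are combined to unfold the spectrum as well as the magnitude of the flux.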

  2. MCNP simulation of the TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Jeraj, R.; Glumac, B.; Maucec, M.

    1996-01-01

    The complete 3D MCNP model of the TRIGA Mark II reactor is presented. It enables precise calculations of some quantities of interest in a steady-state mode of operation. Calculational results are compared to experimental results gathered during the reactor reconstruction in 1992. Since the operating conditions were well defined at that time, the experimental results can be used as a benchmark. It may be noted that this benchmark is one of the very few high-enrichment benchmarks available. The experimental conditions were modeled in detail: the fuel elements and control rods were precisely modeled, as were the entire core configuration and the vicinity of the core. ENDF/B-VI and ENDF/B-V libraries were used. Partial results of the benchmark calculations are presented. Excellent agreement in core criticality, excess reactivity and control rod worths can be observed. (author)

  3. NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) Benchmark. Volume II: uncertainty and sensitivity analyses of void distribution and critical power - Specification

    International Nuclear Information System (INIS)

    Aydogan, F.; Hochreiter, L.; Ivanov, K.; Martin, M.; Utsuno, H.; Sartori, E.

    2010-01-01

    This report provides the specification for the uncertainty exercises of the international OECD/NEA, NRC and NUPEC BFBT benchmark problem, including the elemental task. The specification was prepared jointly by Pennsylvania State University (PSU), USA and the Japan Nuclear Energy Safety (JNES) Organisation, in cooperation with the OECD/NEA and the Commissariat à l'énergie atomique (CEA Saclay, France). The work is sponsored by the US NRC, METI-Japan, the OECD/NEA and the Nuclear Engineering Program (NEP) of Pennsylvania State University. This uncertainty specification covers the fourth exercise of Phase I (Exercise I-4) and the third exercise of Phase II (Exercise II-3), as well as the elemental task. The OECD/NRC BFBT benchmark provides a very good opportunity to apply uncertainty analysis (UA) and sensitivity analysis (SA) techniques and to assess the accuracy of thermal-hydraulic models for two-phase flows in rod bundles. During previous OECD benchmarks, participants usually carried out sensitivity analysis on their models with respect to the specification (initial conditions, boundary conditions, etc.) to identify the most sensitive models and/or to improve the computed results. The comprehensive BFBT experimental database (NEA, 2006) leads one step further in investigating modelling capabilities by incorporating uncertainty analysis in the benchmark. The uncertainties in input data (boundary conditions) and geometry (provided in the benchmark specification), as well as the uncertainties in code models, can be accounted for to produce results with calculational uncertainties and compare them with the measurement uncertainties. Therefore, uncertainty analysis exercises were defined for the void distribution and critical power phases of the BFBT benchmark.
    This specification is intended to provide definitions related to UA/SA methods, sensitivity/uncertainty parameters, suggested probability distribution functions (PDF) of sensitivity parameters, and selected

  4. Links among available integral benchmarks and differential date evaluations, computational biases and uncertainties, and nuclear criticality safety biases on potential MOX production throughput

    International Nuclear Information System (INIS)

    Goluoglu, S.; Hopper, C.M.

    2004-01-01

    Through the use of Oak Ridge National Laboratory's recently developed and applied sensitivity and uncertainty computational analysis techniques, this paper presents the relevance and importance of available and needed integral benchmarks and differential data evaluations for determining potential MOX production throughput in low-moderated MOX fuel blending operations. The relevance and importance of the availability of, or need for, critical experiment benchmarks and data evaluations are presented in terms of computational biases, as influenced by computational and experimental sensitivities and uncertainties, for selected MOX production powder blending processes. Recent developments for estimating the safe margins of subcriticality used in assuring nuclear criticality safety for process approval are presented. In addition, the impact of these safe margins (due to computational biases and uncertainties) on potential MOX production throughput is also presented. (author)

  5. Influence of the ab initio n–d cross sections in the critical heavy-water benchmarks

    International Nuclear Information System (INIS)

    Morillon, B.; Lazauskas, R.; Carbonell, J.

    2013-01-01

    Highlights: ► We solve the three-nucleon problem using different NN potentials (MT, AV18 and INOY) to calculate neutron–deuteron cross sections. ► These cross sections are compared with existing experimental data and with the international libraries. ► We describe the different sets of heavy-water benchmarks for which Monte Carlo simulations have been performed including our new neutron–deuteron cross sections. ► The results obtained with the ab initio INOY potential are compared with calculations based on the international library cross sections and are found to be of the same quality. - Abstract: The n–d elastic and breakup cross sections are computed by solving the three-body Faddeev equations for realistic and semi-realistic nucleon–nucleon potentials. These cross sections are inserted in the Monte Carlo simulation of the nuclear processes considered in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook). The results obtained using these ab initio n–d cross sections are compared with those provided by the most renowned international libraries.

  6. Classification of criticality calculations with correlation coefficient method and its application to OECD/NEA burnup credit benchmarks phase III-A and II-A

    International Nuclear Information System (INIS)

    Okuno, Hiroshi

    2003-01-01

    A method for classifying benchmark results of criticality calculations according to similarity is proposed in this paper. After formulation of the method, which utilizes correlation coefficients, it was applied to the burnup credit criticality benchmarks Phase III-A and II-A, conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency of the Organisation for Economic Cooperation and Development (OECD/NEA). The Phase III-A benchmark was a series of criticality calculations for irradiated boiling water reactor (BWR) fuel assemblies, whereas the Phase II-A benchmark was a suite of criticality calculations for irradiated pressurized water reactor (PWR) fuel pins. These benchmark problems and their results are summarized. The correlation coefficients were calculated, and sets of benchmark calculation results were classified according to the criterion that the values of the correlation coefficients were no less than 0.15 for the Phase III-A and 0.10 for the Phase II-A benchmarks. When a pair of benchmark calculation results belonged to the same group, one calculation result was found to be predictable from the other. An example is shown for each of the benchmarks. While the evaluated nuclear data appeared to be the main factor behind the classification, further investigation is required to identify other factors. (author)
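The classification idea described in this abstract can be sketched directly: compute pairwise correlation coefficients between participants' result vectors and group the pairs whose coefficient meets the threshold (0.15 for Phase III-A, 0.10 for Phase II-A in the paper). The participant labels and k-eff vectors below are invented for illustration:

```python
# Group benchmark result sets by pairwise Pearson correlation coefficient.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def similar_pairs(results, threshold):
    """Pairs of labels whose result vectors correlate at/above threshold."""
    labels = sorted(results)
    return [(a, b) for i, a in enumerate(labels) for b in labels[i + 1:]
            if pearson(results[a], results[b]) >= threshold]

# Hypothetical k-eff results from three participants over four cases
results = {
    "A": [1.001, 0.998, 1.003, 0.997],
    "B": [1.002, 0.999, 1.004, 0.996],
    "C": [0.997, 1.004, 0.995, 1.006],
}
print(similar_pairs(results, threshold=0.15))
```

With these made-up vectors, A and B vary together across the cases while C moves opposite to both, so only the (A, B) pair is classified as similar; within such a group one participant's results could be predicted from the other's, as the paper observes.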

  7. Validation of the Continuous-Energy Monte Carlo Criticality-Safety Analysis System MVP and JENDL-3.2 Using the Internationally Evaluated Criticality Benchmarks

    International Nuclear Information System (INIS)

    Mitake, Susumu

    2003-01-01

    Validation of the continuous-energy Monte Carlo criticality-safety analysis system, comprising the MVP code and neutron cross sections based on JENDL-3.2, was examined using benchmarks evaluated in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. Eight experiments (116 configurations) for plutonium solution and plutonium-uranium mixture systems performed at Valduc, Battelle Pacific Northwest Laboratories, and other facilities were selected and used in the studies. The averaged multiplication factors calculated with MVP and MCNP-4B using the same neutron cross-section libraries based on JENDL-3.2 were in good agreement. Based on methods provided in the Japanese nuclear criticality-safety handbook, the estimated criticality lower-limit multiplication factors, to be used as a subcriticality criterion in the criticality-safety evaluation of nuclear facilities, were obtained. The analysis proved the applicability of the MVP code to the criticality-safety analysis of nuclear fuel facilities, particularly to the analysis of systems fueled with plutonium and in homogeneous and thermal-energy conditions.
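A criticality lower-limit multiplication factor of the kind mentioned above is, in one common statistical form, a one-sided lower tolerance bound on the calculated k-eff values for benchmark criticals: the mean minus a tolerance factor times the spread. The sketch below uses that generic form with invented data; the actual procedure of the Japanese nuclear criticality-safety handbook may differ in its tolerance factor and treatment of uncertainties:

```python
# Generic lower tolerance bound on benchmark k-eff results, used as a
# subcriticality criterion: k_L = mean - K * s. The tolerance factor K
# and the k-eff values are illustrative placeholders.

def lower_limit_keff(k_values, tolerance_factor):
    """Estimated lower-limit multiplication factor from benchmark results."""
    n = len(k_values)
    mean = sum(k_values) / n
    var = sum((k - mean) ** 2 for k in k_values) / (n - 1)
    return mean - tolerance_factor * var ** 0.5

k_calc = [0.998, 1.002, 0.999, 1.001, 1.000]  # hypothetical benchmark k-effs
k_lower = lower_limit_keff(k_calc, tolerance_factor=2.0)
print(f"estimated lower-limit k-eff: {k_lower:.4f}")
```

A facility configuration whose calculated k-eff stays below this limit is then judged subcritical with the margin the benchmarks support; the tighter the code's agreement with the benchmark criticals, the closer the limit can sit to unity.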

  8. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  9. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the
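The validation metrics mentioned above can be made concrete with a simple discrepancy measure. As a hedged illustration (the function and data below are hypothetical, not taken from the paper), a common first-cut metric normalizes the simulation-experiment difference by the combined experimental and numerical uncertainty:

```python
import math

def z_score(y_sim, y_exp, sigma_exp, sigma_num=0.0):
    # Discrepancy between a simulation result and a measurement,
    # expressed in units of the combined one-sigma uncertainty
    # (experimental uncertainty plus a numerical-error estimate).
    return (y_sim - y_exp) / math.sqrt(sigma_exp ** 2 + sigma_num ** 2)

# Hypothetical data point: measured 1.0000 +/- 0.0030, computed 1.0042,
# giving a discrepancy of about 1.4 combined standard deviations.
z = z_score(1.0042, 1.0000, 0.0030)
```

A value of |z| well above 2-3 would flag a modeling deficiency, provided solution verification has already bounded the numerical error.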

  10. Studies of thermal-reactor benchmark-data interpretation: experimental corrections

    International Nuclear Information System (INIS)

    Sher, R.; Fiarman, S.

    1976-10-01

Experimental values of integral parameters of the lattices studied in this report, i.e., the MIT(D2O) and TRX benchmark lattices, have been re-examined and revised. The revisions correct several systematic errors that had previously been ignored or considered insignificant. These systematic errors are discussed in detail, and the final corrected values are presented.

  11. Performance assessment of new neutron cross section libraries using MCNP code and some critical benchmarks

    International Nuclear Information System (INIS)

    Bakkari, B El; Bardouni, T El.; Erradi, L.; Chakir, E.; Meroun, O.; Azahra, M.; Boukhal, H.; Khoukhi, T El.; Htet, A.

    2007-01-01

New releases of nuclear data files have been made available in recent years. The reference MCNP5 code (1) for Monte Carlo calculations is usually distributed with only one standard nuclear data library for neutron interactions, based on ENDF/B-VI. The main goal of this work is to process new neutron cross-section libraries in ACE continuous-energy format for the MCNP code, based on the most recent data files made available to the scientific community: ENDF/B-VII.b2, ENDF/B-VI (release 8), JEFF-3.0, JEFF-3.1, JENDL-3.3, and JEF-2.2. In our data treatment, we used the modular NJOY system (release 99.9) (2) in conjunction with its most recent updates. The performance of the processed pointwise cross-section libraries was assessed by means of criticality predictions and analysis of other integral parameters for a set of reactor benchmarks. Almost all of the analyzed benchmarks were taken from the OECD International Handbook of Evaluated Criticality Safety Benchmark Experiments (3); some revised benchmarks were taken from references (4,5). These benchmarks use Pu-239 or U-235 as the main fissile material in different forms and enrichments, and cover various geometries. Monte Carlo calculations were performed in 3D with maximum detail of the benchmark descriptions, and the S(α,β) cross-section treatment was adopted in all thermal cases. The resulting one-standard-deviation confidence interval for the eigenvalue is typically ±13 to ±20 pcm.

  12. OECD/NEA burnup credit calculational criticality benchmark Phase I-B results

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, M.D.; Parks, C.V. [Oak Ridge National Lab., TN (United States); Brady, M.C. [Sandia National Labs., Las Vegas, NV (United States)

    1996-06-01

In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is assumed for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in their ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Most deviations are less than 10%, and many are less than 5%. The exceptions are Sm-149, Sm-151, and Gd-155.
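The agreement figures quoted above are deviations of each submission from the all-participant average. As a minimal sketch (the helper name and concentration values are hypothetical, not from the report), the percent deviation of each participant's predicted concentration is:

```python
def percent_deviations(values):
    # Deviation of each participant's result from the all-participant
    # average, in percent. Illustrative helper, not from the benchmark.
    avg = sum(values) / len(values)
    return [100.0 * (v - avg) / avg for v in values]

# Hypothetical predicted concentrations of one nuclide from four
# submissions (arbitrary units); all within a few percent of the mean.
devs = percent_deviations([0.512, 0.498, 0.505, 0.509])
```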

  13. OECD/NEA burnup credit calculational criticality benchmark Phase I-B results

    International Nuclear Information System (INIS)

    DeHart, M.D.; Parks, C.V.; Brady, M.C.

    1996-06-01

In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is assumed for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in their ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Most deviations are less than 10%, and many are less than 5%. The exceptions are Sm-149, Sm-151, and Gd-155.

  14. Criticality benchmark guide for light-water-reactor fuel in transportation and storage packages

    International Nuclear Information System (INIS)

    Lichtenwalter, J.J.; Bowman, S.M.; DeHart, M.D.; Hopper, C.M.

    1997-03-01

This report is designed as a guide for performing criticality benchmark calculations for light-water-reactor (LWR) fuel applications. The guide provides documentation of 180 criticality experiments with geometries, materials, and neutron interaction characteristics representative of transportation packages containing LWR fuel or uranium oxide pellets or powder. These experiments should benefit the U.S. Nuclear Regulatory Commission (NRC) staff and licensees in validating the computational methods used for LWR fuel storage and transportation applications. The experiments are classified by key parameters such as enrichment, water/fuel volume, hydrogen-to-fissile ratio (H/X), and lattice pitch. Groups of experiments with common features such as separator plates, shielding walls, and soluble boron are also identified. In addition, a sample validation using these experiments and a statistical analysis of the results are provided. Recommendations for selecting suitable experiments and for determining the calculational bias and uncertainty are presented as part of this benchmark guide.
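The bias and uncertainty determination mentioned above can be sketched in a few lines. This is a simplified illustration under the usual assumption that each benchmark experiment's expected k-eff is 1.0; the k-eff values are hypothetical, and a real validation following the guide would also include trending analysis and statistical tolerance factors:

```python
import statistics

def bias_and_uncertainty(k_calc):
    # Bias: mean deviation of calculated k-eff from the expected
    # critical value of 1.0. Uncertainty: sample standard deviation
    # of the calculated k-eff values.
    bias = statistics.mean(k_calc) - 1.0
    sigma = statistics.stdev(k_calc)
    return bias, sigma

# Hypothetical calculated k-eff values for five critical experiments:
bias, sigma = bias_and_uncertainty([0.9982, 1.0011, 0.9995, 1.0003, 0.9989])
# An upper subcritical limit would then subtract a negative bias and a
# tolerance-factor multiple of sigma from the design limit.
```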

  15. Comparisons of the MCNP criticality benchmark suite with ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0

    International Nuclear Information System (INIS)

    Kim, Do Heon; Gil, Choong-Sup; Kim, Jung-Do; Chang, Jonghwa

    2003-01-01

A comparative study has been performed with the latest evaluated nuclear data libraries: ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0. The study was conducted through benchmark calculations for 91 criticality problems with the libraries processed for MCNP4C. The calculated results were compared with those of the ENDF60 library. The self-shielding effects of the unresolved-resonance (UR) probability tables were also estimated for each library. The χ² differences between the MCNP results and the experimental data were calculated for each library. (author)
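The χ² comparison described above can be sketched as a sum of squared calculation-experiment discrepancies weighted by the combined variances. The function and benchmark values below are hypothetical illustrations, not the paper's actual data:

```python
def chi_squared(k_calc, k_exp, sig_calc, sig_exp):
    # Chi-squared over a set of benchmarks: squared C-E discrepancy
    # divided by the combined (Monte Carlo + experimental) variance
    # for each case.
    return sum((c - e) ** 2 / (sc ** 2 + se ** 2)
               for c, e, sc, se in zip(k_calc, k_exp, sig_calc, sig_exp))

# Hypothetical three-benchmark comparison for one library:
chi2 = chi_squared([1.0012, 0.9975, 1.0030],
                   [1.0000, 1.0000, 1.0000],
                   [0.0003, 0.0003, 0.0003],
                   [0.0010, 0.0012, 0.0010])
```

A smaller χ² (relative to the number of benchmarks) indicates better overall agreement of a library with the experimental data.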

  16. Benchmark test of evaluated nuclear data files for fast reactor neutronics application

    International Nuclear Information System (INIS)

    Chiba, Go; Hazama, Taira; Iwai, Takehiko; Numata, Kazuyuki

    2007-07-01

A benchmark test of the latest evaluated nuclear data files, JENDL-3.3, JEFF-3.1, and ENDF/B-VII.0, has been carried out for fast reactor neutronics applications. For this benchmark test, experimental data obtained at fast critical assemblies and fast power reactors were utilized. In addition to comparing numerical solutions with the experimental data, we used sensitivity analyses to extract several cross sections for which differences among the three nuclear data files significantly affect the numerical solutions. This benchmark test concludes that ENDF/B-VII.0 predicts the neutronics characteristics of fast neutron systems better than the other nuclear data files. (author)

  17. Modernization at the Y-12 National Security Complex: A Case for Additional Experimental Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Thornbury, M. L. [Y-12 National Security Complex, Oak Ridge, TN (United States); Juarez, C. [Y-12 National Security Complex, Oak Ridge, TN (United States); Krass, A. W. [Y-12 National Security Complex, Oak Ridge, TN (United States)

    2017-08-14

    Efforts are underway at the Y-12 National Security Complex (Y-12) to modernize the recovery, purification, and consolidation of un-irradiated, highly enriched uranium metal. Successful integration of advanced technology such as Electrorefining (ER) eliminates many of the intermediate chemistry systems and processes that are the current and historical basis of the nuclear fuel cycle at Y-12. The cost of operations, the inventory of hazardous chemicals, and the volume of waste are significantly reduced by ER. It also introduces unique material forms and compositions related to the chemistry of chloride salts for further consideration in safety analysis and engineering. The work herein briefly describes recent investigations of nuclear criticality for 235UO2Cl2 (uranyl chloride) and 6LiCl (lithium chloride) in aqueous solution. Of particular interest is the minimum critical mass of highly enriched uranium as a function of the molar ratio of 6Li to 235U. The work herein also briefly describes recent investigations of nuclear criticality for 235U metal reflected by salt mixtures of 6LiCl or 7LiCl (lithium chloride), KCl (potassium chloride), and 235UCl3 or 238UCl3 (uranium tri-chloride). Computational methods for analysis of nuclear criticality safety and published nuclear data are employed in the absence of directly relevant experimental criticality benchmarks.

  18. MCNP calculations for criticality-safety benchmarks with ENDF/B-V and ENDF/B-VI libraries

    International Nuclear Information System (INIS)

    Iverson, J.L.; Mosteller, R.D.

    1995-01-01

The MCNP Monte Carlo code, in conjunction with its continuous-energy ENDF/B-V and ENDF/B-VI cross-section libraries, has been benchmarked against results from 27 different critical experiments. The predicted values of k-eff are in excellent agreement with the benchmarks, except for the ENDF/B-V results for solutions of plutonium nitrate and, to a lesser degree, the ENDF/B-V and ENDF/B-VI results for a bare sphere of 233U.

  19. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    International Nuclear Information System (INIS)

    Bess, John D.; Marshall, Margaret A.; Gorham, Mackenzie L.; Christensen, Joseph; Turnbull, James C.; Clark, Kim

    2011-01-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) (1) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) (2) were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  20. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins , Robert

    2010-01-01

    Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...

  1. Growth and Expansion of the International Criticality Safety Benchmark Evaluation Project and the Newly Organized International Reactor Physics Experiment Evaluation Project

    International Nuclear Information System (INIS)

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori

    2007-01-01

Since ICNC 2003, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) has continued to expand its efforts and broaden its scope. Criticality-alarm/shielding benchmarks and fundamental physics measurements relevant to criticality safety applications are not only included in the scope of the project; benchmark data for them are also included in the latest version of the handbook. A considerable number of improvements have been made to the searchable database, DICE, and the criticality-alarm/shielding benchmarks and fundamental physics measurements have been included in the database. Twelve countries participated in the ICSBEP in 2003; that number has increased to 18 with recent contributions of data and/or resources from Brazil, the Czech Republic, Poland, India, Canada, and China. South Africa, Germany, Argentina, and Australia have been invited to participate. Since ICNC 2003, the contents of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' have increased from 350 evaluations (28,000 pages) containing benchmark specifications for 3070 critical or subcritical configurations to 442 evaluations (over 38,000 pages) containing benchmark specifications for 3957 critical or subcritical configurations, 23 criticality-alarm-placement/shielding configurations with multiple dose points each, and 20 configurations categorized as fundamental physics measurements relevant to criticality safety applications in the 2006 Edition of the ICSBEP Handbook. Approximately 30 new evaluations and 250 additional configurations are expected to be added to the 2007 Edition of the Handbook. Since ICNC 2003, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. Beginning in 1999, the IRPhEP was conducted as a pilot activity by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency

  2. RB reactor as the U-D2O benchmark criticality system

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

From the rich and valuable database of 580 different reactor cores formed to date in the RB nuclear reactor, a well-recorded set has been carefully chosen and preliminarily proposed as a new uranium-heavy water benchmark criticality system for validation of reactor design computer codes and data libraries. The first results of validation of the MCNP code and the accompanying neutron cross-section libraries are presented in this paper. (author)

  3. Nuclear criticality information system

    International Nuclear Information System (INIS)

    Koponen, B.L.; Hampel, V.E.

    1981-01-01

The nuclear criticality safety program at LLNL began in the 1950s with a critical measurements program which produced benchmark data until the late 1960s. This same time period saw the rapid development of computer technology useful both for computer modeling of fissile systems and for computer-aided management and display of the computational benchmark data. Database management grew in importance as the amount of information increased and as experimental programs were terminated. Within the criticality safety program at LLNL, we began at that time to develop a computer library of benchmark data for validation of computer codes and cross sections. As part of this effort, we prepared a computer-based bibliography of criticality measurements on relatively simple systems. However, it is only now that some of these computer-based resources can be made available to the nuclear criticality safety community at large. This technology transfer is being accomplished by the DOE Technology Information System (TIS), a dedicated, advanced information system. The NCIS database is described

  4. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, A. [Los Alamos National Laboratory (LANL); Macfarlane, R E [Los Alamos National Laboratory (LANL); Mosteller, R D [Los Alamos National Laboratory (LANL); Kiedrowski, B C [Los Alamos National Laboratory (LANL); Frankle, S C [Los Alamos National Laboratory (LANL); Chadwick, M. B. [Los Alamos National Laboratory (LANL); Mcknight, R D [Argonne National Laboratory (ANL); Lell, R M [Argonne National Laboratory (ANL); Palmiotti, G [Idaho National Laboratory (INL); Hiruta, h [Idaho National Laboratory (INL); Herman, Micheal W [Brookhaven National Laboratory (BNL); Arcilla, r [Brookhaven National Laboratory (BNL); Mughabghab, S F [Brookhaven National Laboratory (BNL); Sublet, J C [Culham Science Center, Abington, UK; Trkov, A. [Jozef Stefan Institute, Slovenia; Trumbull, T H [Knolls Atomic Power Laboratory; Dunn, Michael E [ORNL

    2011-01-01

The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 having been released in 2006. This revision expands upon that library, including the addition of new evaluated files (previously 393 neutron files, now 423, including replacement of the elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous-energy cross-section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms, such as metallic, oxide, or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal-reflected, and water- or other light-element-reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium-reflected 235U and 239Pu assemblies, HEU solution systems, and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium, and tungsten, are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as 236U, 238,242Pu, and 241,243Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical

  5. Status on benchmark testing of CENDL-3

    CERN Document Server

    Liu Ping

    2002-01-01

CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has been finished and recently distributed for benchmark analysis. The processing was carried out using the NJOY nuclear data processing code system. The calculations and analysis of benchmarks were done with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A. The calculated results were compared with the experimental results and with those based on ENDF/B-VI. In most thermal and fast uranium criticality benchmarks, the k-eff values calculated with CENDL-3 were in good agreement with the experimental results. In the plutonium fast cores, the k-eff values were improved significantly with CENDL-3. This is due to the reevaluation of the fission spectrum and elastic angular distributions of 239Pu and 240Pu. CENDL-3 underestimated the k-eff values compared with other evaluated data libraries for most spherical or cylindrical assemblies of plutonium or uranium with beryllium

  6. Analysis on First Criticality Benchmark Calculation of HTR-10 Core

    International Nuclear Information System (INIS)

    Zuhair; Ferhat-Aziz; As-Natio-Lasman

    2000-01-01

HTR-10 is a graphite-moderated, helium-cooled pebble bed reactor with an average helium outlet temperature of 700°C and a thermal power of 10 MW. The first criticality benchmark problem of HTR-10 in this paper includes calculating the loading of nuclear fuel, in the form of UO2 balls with a U-235 enrichment of 17%, required for first criticality under a helium atmosphere at a core temperature of 20°C, and calculating the effective multiplication factor (k-eff) of the full core (5 m3) under a helium atmosphere at various core temperatures. The group constants of the fuel mixture, moderator, and reflector materials were generated with WIMS/D4 using a spherical model and four neutron energy groups. The critical core height of 150.1 cm obtained from CITATION in 2-D R-Z reactor geometry falls within the range of values calculated by INET (China), JAERI (Japan), BATAN (Indonesia), and OKBM (Russia). The k-eff results for the full core at various temperatures show that the HTR-10 has a negative temperature coefficient of reactivity. (author)
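The negative temperature coefficient conclusion rests on simple reactivity arithmetic: with reactivity defined as ρ = (k - 1)/k, the coefficient is the change in ρ per degree of core temperature. A minimal sketch with hypothetical k-eff values (not the benchmark's actual results):

```python
def reactivity(k):
    # Reactivity in dk/k units for a given multiplication factor.
    return (k - 1.0) / k

def temperature_coefficient(k_cold, k_hot, t_cold, t_hot):
    # Isothermal temperature coefficient of reactivity (dk/k per degree C),
    # estimated by finite difference between two core temperatures.
    return (reactivity(k_hot) - reactivity(k_cold)) / (t_hot - t_cold)

# Hypothetical full-core k-eff values at 20 C and 120 C; k-eff falling
# with temperature yields a negative coefficient, as the benchmark found.
alpha = temperature_coefficient(1.0120, 1.0050, 20.0, 120.0)
```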

  7. Benchmarking ENDF/B-VII.0

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2006-01-01

The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene, and Teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8 and JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  8. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, A.C.; Herman, M.; Kahler,A.C.; MacFarlane,R.E.; Mosteller,R.D.; Kiedrowski,B.C.; Frankle,S.C.; Chadwick,M.B.; McKnight,R.D.; Lell,R.M.; Palmiotti,G.; Hiruta,H.; Herman,M.; Arcilla,R.; Mughabghab,S.F.; Sublet,J.C.; Trkov,A.; Trumbull,T.H.; Dunn,M.

    2011-12-01

The ENDF/B-VII.1 library is the latest revision to the United States Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 having been released in 2006. This revision expands upon that library, including the addition of new evaluated files (previously 393 neutron files, now 423, including replacement of the elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., 'ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data,' Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous-energy cross-section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms, such as metallic, oxide, or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal-reflected, and water- or other light-element-reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium-reflected 235U and 239Pu assemblies, HEU solution systems, and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium, and tungsten, are greatly reduced. Improvements are also

  9. Specifications, Pre-Experimental Predictions, and Test Plate Characterization Information for the Prometheus Critical Experiments

    International Nuclear Information System (INIS)

    ML Zerkle; ME Meyers; SM Tarves; JJ Powers

    2006-01-01

    This report provides specifications, pre-experimental predictions, and test plate characterization information for a series of molybdenum (Mo), niobium (Nb), rhenium (Re), tantalum (Ta), and baseline critical experiments that were developed by the Naval Reactors Prime Contractor Team (NRPCT) for the Prometheus space reactor development project. In March 2004, the Naval Reactors program was assigned the responsibility to develop, design, deliver, and operationally support civilian space nuclear reactors for NASA's Project Prometheus. The NRPCT was formed to perform this work and consisted of engineers and scientists from the Naval Reactors (NR) Program prime contractors: Bettis Atomic Power Laboratory, Knolls Atomic Power Laboratory (KAPL), and Bechtel Plant Machinery Inc (BPMI). The NRPCT developed a series of clean benchmark critical experiments to address fundamental uncertainties in the neutron cross section data for Mo, Nb, Re, and Ta in fast, intermediate, and mixed neutron energy spectra. These experiments were to be performed by Los Alamos National Laboratory (LANL) using the Planet vertical lift critical assembly machine and were designed with a simple, geometrically clean, cylindrical configuration consisting of alternating layers of test, moderator/reflector, and fuel materials. Based on reprioritization of missions and funding within NASA, Naval Reactors and NASA discontinued their collaboration on Project Prometheus in September 2005. One critical experiment and eighteen subcritical handstacking experiments were completed prior to the termination of work in September 2005. Information on the Prometheus critical experiments and the test plates produced for these experiments are expected to be of value to future space reactor development programs and to integral experiments designed to address the fundamental neutron cross section uncertainties for these refractory metals. 
This information is being provided as part of an orderly closeout of NRPCT work on Project Prometheus.

  10. Criticality reference benchmark calculations for burnup credit using spent fuel isotopics

    International Nuclear Information System (INIS)

    Bowman, S.M.

    1991-04-01

    To date, criticality analyses performed in support of the certification of spent fuel casks in the United States do not take credit for the reactivity reduction that results from burnup. By taking credit for the fuel burnup, commonly referred to as ''burnup credit,'' the fuel loading capacity of these casks can be increased. One of the difficulties in implementing burnup credit in criticality analyses is that no critical experiments have been performed with spent fuel that can be used for computer code validation. In lieu of that, a reference problem set of fresh fuel critical experiments that model various conditions typical of light water reactor (LWR) transportation and storage casks has been identified and used in the validation of SCALE-4. This report documents the use of this same problem set to perform spent fuel criticality benchmark calculations by replacing the actual fresh fuel isotopics from the experiments with six different sets of calculated spent fuel isotopics. The SCALE-4 modules SAS2H and CSAS4 were used to perform the analyses. These calculations do not model actual critical experiments; the calculated k-effectives are not expected to equal unity and will vary depending on the initial enrichment and burnup of the calculated spent fuel isotopics. 12 refs., 11 tabs

  11. Nuclear criticality predictability

    International Nuclear Information System (INIS)

    Briggs, J.B.

    1999-01-01

    As a result of these efforts, a large portion of the tedious and redundant research and processing of critical experiment data has been eliminated. The necessary step in criticality safety analyses of validating computer codes with benchmark critical data is greatly streamlined, and valuable criticality safety experimental data are preserved. Criticality safety personnel in 31 different countries are now using the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'. Much has been accomplished by the work of the ICSBEP. However, evaluation and documentation represent only one element of a successful Nuclear Criticality Safety Predictability Program, and this element exists as a separate entity only because this work was not completed in conjunction with the experimentation process. I believe, however, that the work of the ICSBEP has also served to unify the other elements of nuclear criticality predictability. All elements are interrelated, but for a time it seemed that communication between these elements was not adequate. The ICSBEP has highlighted gaps in data, has retrieved lost data, has helped to identify errors in cross-section processing codes, and has helped bring the international criticality safety community together in a common cause as true friends and colleagues. It has been a privilege to associate with those who work so diligently to make the project a success. (J.P.N.)

  12. Experimental benchmarking of quantum control in zero-field nuclear magnetic resonance.

    Science.gov (United States)

    Jiang, Min; Wu, Teng; Blanchard, John W; Feng, Guanru; Peng, Xinhua; Budker, Dmitry

    2018-06-01

    Demonstration of coherent control and characterization of the control fidelity is important for the development of quantum architectures such as nuclear magnetic resonance (NMR). We introduce an experimental approach to realize universal quantum control, and benchmarking thereof, in zero-field NMR, an analog of conventional high-field NMR that features less-constrained spin dynamics. We design a composite pulse technique for both arbitrary one-spin rotations and a two-spin controlled-NOT (CNOT) gate in a heteronuclear two-spin system at zero field, which experimentally demonstrates universal quantum control in such a system. Moreover, using quantum information-inspired randomized benchmarking and partial quantum process tomography, we evaluate the quality of the control, achieving single-spin control for 13C with an average fidelity of 0.9960(2) and two-spin control via a CNOT gate with a fidelity of 0.9877(2). Our method can also be extended to more general multispin heteronuclear systems at zero field. The realization of universal quantum control in zero-field NMR is important for quantum state/coherence preparation and pulse sequence design, and is an essential step toward applications to materials science, chemical analysis, and fundamental physics.
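Randomized-benchmarking fidelities like those quoted above are typically extracted by fitting the decay of sequence fidelity against sequence length. A minimal sketch of that extraction, assuming a single-spin depolarizing model with the decay asymptote fixed at 1/d; the decay constant and synthetic data here are illustrative, not taken from the paper:

```python
import numpy as np

def rb_average_fidelity(lengths, seq_fidelity, d=2):
    """Fit F(m) = A * p**m + B with the asymptote B fixed at 1/d, then
    convert the decay constant p to an average gate fidelity."""
    y = np.log(np.asarray(seq_fidelity, dtype=float) - 1.0 / d)
    slope, _ = np.polyfit(np.asarray(lengths, dtype=float), y, 1)
    p = np.exp(slope)
    return 1.0 - (1.0 - p) * (d - 1) / d

# Synthetic single-spin decay with p = 0.992, A = B = 0.5:
m = np.arange(1, 200, 10)
F = 0.5 * 0.992 ** m + 0.5
print(round(rb_average_fidelity(m, F), 4))  # 0.996
```

For a heteronuclear two-spin system the same fit applies with d = 4, changing both the asymptote and the error-per-gate conversion.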

  13. OECD/NEA burnup credit criticality benchmarks phase IIIA: Criticality calculations of BWR spent fuel assemblies in storage and transport

    Energy Technology Data Exchange (ETDEWEB)

    Okuno, Hiroshi; Naito, Yoshitaka [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ando, Yoshihira [Toshiba Corp., Kawasaki, Kanagawa (Japan)

    2000-09-01

    The report describes the final results of the Phase IIIA Benchmarks conducted by the Burnup Credit Criticality Calculation Working Group under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD/NEA). The benchmarks are intended to confirm the predictive capability of current computer code and data library combinations for the neutron multiplication factor (k{sub eff}) of an irradiated BWR fuel assembly array model. In total, 22 benchmark problems are proposed for calculations of k{sub eff}. The effects of the following parameters are investigated: cooling time, inclusion/exclusion of FP nuclides and the axial burnup profile, and inclusion of an axial void-fraction profile or constant void fractions during burnup. Axial profiles of fractional fission rates are further requested for five of the 22 problems. Twenty-one sets of results are presented, contributed by 17 institutes from 9 countries. The dispersion of the participants' k{sub eff} values about the mean is almost entirely within a band of {+-}1% {delta}k/k. The deviations from the averaged calculated fission rate profiles are found to be within {+-}5% for most cases. (author)
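The ±1% Δk/k dispersion band reported above corresponds to a simple statistic over the participants' results. A sketch with hypothetical k-eff values (the actual Phase IIIA submissions are not reproduced here):

```python
def relative_dispersion(keff_values):
    """Relative deviation of each result from the mean, in percent
    (delta-k/k * 100)."""
    mean = sum(keff_values) / len(keff_values)
    return [100.0 * (k - mean) / mean for k in keff_values]

# Hypothetical participant k-eff results for one benchmark case:
results = [0.912, 0.915, 0.909, 0.918, 0.911]
deviations = relative_dispersion(results)
print(all(abs(d) <= 1.0 for d in deviations))  # True: within +/-1% band
```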

  14. FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark

    International Nuclear Information System (INIS)

    Sawan, M.E.

    1994-12-01

    During the IAEA Advisory Group Meeting on ''Improved Evaluations and Integral Data Testing for FENDL'' held in Garching near Munich, Germany in the period 12-16 September 1994, the Working Group II on ''Experimental and Calculational Benchmarks on Fusion Neutronics for ITER'' recommended that a calculational benchmark representative of the ITER design should be developed. This report describes the neutronics and shielding calculational benchmark available for scientists interested in performing analysis for this benchmark. (author)

  15. The OECD/NRC BWR full-size fine-mesh bundle tests benchmark (BFBT)-general description

    International Nuclear Information System (INIS)

    Sartori, Enrico; Hochreiter, L.E.; Ivanov, Kostadin; Utsuno, Hideaki

    2004-01-01

    The need to refine models for best-estimate calculations, based on good-quality experimental data, has been expressed in many recent meetings in the field of nuclear applications. These needs should not be limited to currently available macroscopic approaches but should be extended to next-generation approaches that focus on more microscopic processes. One of the most valuable databases identified for thermal-hydraulics modelling was developed by the Nuclear Power Engineering Corporation (NUPEC). Part of this database will be made available for an international benchmark exercise. These fine-mesh, high-quality data encourage advancement in the insufficiently developed field of two-phase flow theory. Considering that the present theoretical approach is relatively immature, the benchmark specification is designed so that it will systematically assess and compare the participants' numerical models on the prediction of detailed void distributions and critical powers. The development of truly mechanistic models for critical power prediction is currently underway; these innovative models should include elementary processes such as void distributions, droplet deposition, liquid film entrainment, etc. The benchmark problem includes both macroscopic and microscopic measurement data. In this context, the sub-channel grade void fraction data are regarded as the macroscopic data, and the digitized computer graphic images as the microscopic data. The proposed benchmark consists of two parts (phases), each consisting of different exercises. Phase 1, the void distribution benchmark, comprises Exercise 1 (steady-state sub-channel grade), Exercise 2 (steady-state microscopic grade), and Exercise 3 (transient macroscopic grade). Phase 2, the critical power benchmark, comprises Exercise 1 (steady-state) and Exercise 2 (transient). (author)

  16. Burn-up Credit Criticality Safety Benchmark Phase III-C. Nuclide Composition and Neutron Multiplication Factor of a Boiling Water Reactor Spent Fuel Assembly for Burn-up Credit and Criticality Control of Damaged Nuclear Fuel

    International Nuclear Information System (INIS)

    Suyama, K.; Uchida, Y.; Kashima, T.; Ito, T.; Miyaji, T.

    2016-01-01

    longer process time (CPU) is required. Treatment of the gadolinium rod is still a key issue. The spread in the neutron multiplication factor arising from differences in the burn-up calculation results was quantified by analyses using the same criticality calculation code, MVP; it was less than 3% when the latest code systems were used, including continuous-energy Monte Carlo codes and deterministic codes. This is the first time such a value has been established by an extensive international benchmark problem. These results show that even when calculation codes are benchmarked against well-qualified experimental data before being adopted in the safety review process, some uncertainty in the evaluation of the neutron multiplication factor, arising from the uncertainty of the burn-up calculation methodology used, still remains

  17. Experimental Test for Benchmark 1--Deck Lid Inner Panel

    International Nuclear Information System (INIS)

    Xu Siguang; Lanker, Terry; Zhang, Jimmy; Wang Chuantao

    2005-01-01

    The Benchmark 1 deck lid inner panel is designed for both aluminum and steel, based on a current General Motors Corporation vehicle product. The die is constructed with a soft tool material. The die successfully produced aluminum and steel panels without splits and wrinkles. Detailed surface strain and thickness measurements were made at selected sections to cover a wide range of deformation patterns, from uniaxial tension mode to bi-axial tension mode. Springback measurements were made using a CMM machine along the part's hem edge, which is critical for dimensional accuracy. It is expected that the data obtained will provide a useful source for forming and springback studies on future automotive panels

  18. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  19. Initialization bias suppression in iterative Monte Carlo calculations: benchmarks on criticality calculation

    International Nuclear Information System (INIS)

    Richet, Y.; Jacquet, O.; Bay, X.

    2005-01-01

    The accuracy of an iterative Monte Carlo calculation requires the convergence of the simulation output process. The present paper deals with a post-processing algorithm, applied to criticality calculations, that suppresses the transient due to initialization. It should be noted that this initial transient suppression aims only at obtaining a stationary output series; the convergence of the calculation must then be guaranteed independently. The transient suppression algorithm consists of repeatedly truncating the first observations of the output process. The truncation of the first observations is performed as long as a steadiness test based on Brownian bridge theory is negative. This transient suppression method was previously tuned on a simplified model of criticality calculations; this paper focuses on its efficiency on real criticality calculations. The efficiency test is based on four benchmarks with strong source convergence problems: 1) a checkerboard storage of fuel assemblies, 2) a pin cell array with irradiated fuel, 3) three one-dimensional thick slabs, and 4) an array of interacting fuel spheres. It appears that the transient suppression method needs to be more widely validated on real criticality calculations before it can be used blindly for post-processing in criticality codes
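A rough sketch of the repeated-truncation idea, using a Brownian-bridge-type statistic on the centered cumulative sums as the steadiness test. The paper's exact test and tuning are not reproduced; the threshold, truncation fraction, and synthetic data below are illustrative:

```python
import numpy as np

def bridge_statistic(x):
    """Kolmogorov-type statistic on the Brownian bridge built from the
    centered cumulative sums; large values indicate non-stationarity."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = x.std(ddof=1)
    bridge = np.cumsum(x - x.mean()) / (s * np.sqrt(n))
    return np.abs(bridge).max()

def suppress_transient(x, threshold=1.36, chunk=0.1):
    """Repeatedly discard the leading observations while the steadiness
    test rejects (statistic above the ~5% critical value)."""
    x = np.asarray(x, dtype=float)
    while len(x) >= 20 and bridge_statistic(x) > threshold:
        x = x[int(np.ceil(chunk * len(x))):]
    return x

# Synthetic k-eff cycle estimates with an initial transient:
rng = np.random.default_rng(0)
keff = np.concatenate([np.linspace(0.95, 1.0, 200),          # transient
                       1.0 + 0.002 * rng.standard_normal(800)])
stationary = suppress_transient(keff)
print(len(keff), len(stationary))
```

As in the paper, this only yields a stationary-looking series; it does not by itself prove that the fission source has converged.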

  20. ZZ ECN-BUBEBO, ECN-Petten Burnup Benchmark Book, Inventories, Afterheat

    International Nuclear Information System (INIS)

    Kloosterman, Jan Leen

    1999-01-01

    Description of program or function: Contains experimental benchmarks which can be used for the validation of burnup code systems and accompanying data libraries. Although the benchmarks presented here are thoroughly described in the literature, it is in many cases not straightforward to unambiguously retrieve the correct input data and corresponding results from the benchmark descriptions. Furthermore, results that are easy to measure are sometimes difficult to calculate because of the conversions involved. Emphasis has therefore been placed on clarifying the input of the benchmarks and on presenting the benchmark results in such a way that they can easily be calculated and compared. For more thorough descriptions of the benchmarks themselves, the literature referred to here should be consulted. This benchmark book is divided into 11 chapters/files containing the following in text and tabular form: chapter 1: Introduction; chapter 2: Burnup Credit Criticality Benchmark Phase 1-B; chapter 3: Yankee-Rowe Core V Fuel Inventory Study; chapter 4: H.B. Robinson Unit 2 Fuel Inventory Study; chapter 5: Turkey Point Unit 3 Fuel Inventory Study; chapter 6: Turkey Point Unit 3 Afterheat Power Study; chapter 7: Dickens Benchmark on Fission Product Energy Release of U-235; chapter 8: Dickens Benchmark on Fission Product Energy Release of Pu-239; chapter 9: Yarnell Benchmark on Decay Heat Measurements of U-233; chapter 10: Yarnell Benchmark on Decay Heat Measurements of U-235; chapter 11: Yarnell Benchmark on Decay Heat Measurements of Pu-239

  1. Monte Carlo benchmarking: Validation and progress

    International Nuclear Information System (INIS)

    Sala, P.

    2010-01-01

    Document available in abstract form only. Full text of publication follows: Calculational tools for radiation shielding at accelerators are faced with new challenges from the present and next generations of particle accelerators. All the details of particle production and transport play a role when dealing with huge power facilities, therapeutic ion beams, radioactive beams and so on. Besides the traditional calculations required for shielding, activation predictions have become an increasingly critical component. Comparison and benchmarking with experimental data is obviously mandatory in order to build up confidence in the computing tools, and to assess their reliability and limitations. Thin target particle production data are often the best tools for understanding the predictive power of individual interaction models and improving their performances. Complex benchmarks (e.g. thick target data, deep penetration, etc.) are invaluable in assessing the overall performances of calculational tools when all ingredients are put at work together. A review of the validation procedures of Monte Carlo tools will be presented with practical and real life examples. The interconnections among benchmarks, model development and impact on shielding calculations will be highlighted. (authors)

  2. RB reactor benchmark cores

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

    A selected set of the RB reactor benchmark cores is presented in this paper. The first results of a validation of the well-known Monte Carlo code MCNP and associated neutron cross-section libraries are given. They support the proposal of a new U-D2O criticality benchmark system and the intention to include this system in a future edition of the OECD/NEA International Handbook of Evaluated Criticality Safety Benchmark Experiments. (author)

  3. Thermal reactor benchmark tests on JENDL-2

    International Nuclear Information System (INIS)

    Takano, Hideki; Tsuchihashi, Keichiro; Yamane, Tsuyoshi; Akino, Fujiyoshi; Ishiguro, Yukio; Ido, Masaru.

    1983-11-01

    A group constant library for the thermal reactor standard nuclear design code system SRAC was produced using the evaluated nuclear data file JENDL-2. In addition, the group constants for U-235 were also calculated from ENDF/B-V. Thermal reactor benchmark calculations were performed using the produced group constant library. The selected benchmark cores are two water-moderated lattices (TRX-1 and 2), two heavy-water-moderated cores (DCA and ETA-1), two graphite-moderated cores (SHE-8 and 13), and eight critical experiments for criticality safety. The effective multiplication factors and lattice cell parameters were calculated and compared with the experimental values. The results are summarized as follows. (1) Effective multiplication factors: The results with JENDL-2 are considerably improved in comparison with those from ENDF/B-IV. The best agreement is obtained using JENDL-2 together with ENDF/B-V (U-235 only) data. (2) Lattice cell parameters: For rho-28 (the ratio of epithermal to thermal U-238 captures) and C* (the ratio of U-238 captures to U-235 fissions), the values calculated with JENDL-2 are in good agreement with the experimental values. The delta-28 (the ratio of U-238 to U-235 fissions) is overestimated, as was also found for the fast reactor benchmarks. The rho-02 (the ratio of epithermal to thermal Th-232 captures) calculated with JENDL-2 or ENDF/B-IV is considerably underestimated. The functions of the SRAC system have continued to be extended according to the needs of its users. A brief description of the extended parts of the SRAC system, together with the input specification, is given in Appendix B. (author)

  4. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    International Nuclear Information System (INIS)

    Bess, John D.; Fujimoto, Nozomu

    2014-01-01

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments
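The 0.9% to 2.7% figures quoted above are computational biases of calculated keff against the benchmark values. A minimal sketch of that comparison with hypothetical numbers (not the HTTR results themselves):

```python
def computational_bias(calc, benchmark):
    """Per-configuration bias (C - E) and percent deviation (C/E - 1)
    of calculated k-eff against benchmark values."""
    return [(c - e, 100.0 * (c / e - 1.0)) for c, e in zip(calc, benchmark)]

# Hypothetical configurations where the calculation overpredicts k-eff:
calc      = [1.0126, 1.0188, 1.0245]
benchmark = [1.0025, 1.0018, 0.9984]
for dk, pct in computational_bias(calc, benchmark):
    print(f"dk = {dk:+.4f}  ({pct:+.2f}%)")
```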

  5. Uranium systems to enhance benchmarks for use in the verification of criticality safety computer models. Final report, February 16, 1990--December 31, 1994

    International Nuclear Information System (INIS)

    Busch, R.D.

    1995-01-01

    Dr. Robert Busch of the Department of Chemical and Nuclear Engineering was the principal investigator on this project, with technical direction provided by the staff of the Nuclear Criticality Safety Group at Los Alamos. During the period of the contract, he had a number of graduate and undergraduate students working on subtasks. The objective of this work was to develop information on uranium systems to enhance benchmarks for use in the verification of criticality safety computer models. During the first year of the project, most of the work was focused on setting up the SUN SPARC-1 workstation and acquiring the literature describing the critical experiments. By August 1990, the workstation was operational with the current version of TWODANT loaded on the system. The MCNP version 4 tape was made available from Los Alamos late in 1990. Various documents were acquired which provide the initial descriptions of the critical experiments under consideration as benchmarks. The next four years were spent working on various benchmark projects. A number of publications and presentations were made on this material; these are briefly discussed in this report

  6. Analysis and evaluation of critical experiments for validation of neutron transport calculations

    International Nuclear Information System (INIS)

    Bazzana, S.; Blaumann, H.; Marquez Damian, J.I.

    2009-01-01

    The calculation schemes, computational codes and nuclear data used in neutronic design require validation to obtain reliable results. In the nuclear criticality safety field this reliability also translates into a higher level of safety in procedures involving fissile material. The International Criticality Safety Benchmark Evaluation Project is an OECD/NEA activity led by the United States, in which participants from over 20 countries evaluate and publish criticality safety benchmarks. The product of this project is a set of benchmark experiment evaluations that are published annually in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. With the recent participation of Argentina, this information is now available for use by the neutron calculation and criticality safety groups in Argentina. This work presents the methodology used for the evaluation of experimental data, some results obtained by the application of these methods, and some examples of the data available in the Handbook.

  7. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, the improvement of radiation transport codes, and the building of accurate test cases for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling the more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper, benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and the continuous-energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC)

  8. Decoys Selection in Benchmarking Datasets: Overview and Perspectives

    Science.gov (United States)

    Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Lagarde, Nathalie; Montes, Matthieu

    2018-01-01

    Virtual Screening (VS) is designed to prospectively help identify potential hits, i.e., compounds capable of interacting with a given target and potentially modulating its activity, out of large compound collections. Among the variety of methodologies, it is crucial to select the protocol that is the most adapted to the query/target system under study and that yields the most reliable output. To this aim, the performance of VS methods is commonly evaluated and compared by computing their ability to retrieve active compounds in benchmarking datasets. The benchmarking datasets contain a subset of known active compounds together with a subset of decoys, i.e., assumed non-active molecules. The composition of both the active and the decoy compound subsets is critical to limiting biases in the evaluation of VS methods. In this review, we focus on the selection of decoy compounds, which has changed considerably over the years, from randomly selected compounds to highly customized or experimentally validated negative compounds. We first outline the evolution of decoy selection in benchmarking databases, as well as current benchmarking databases that tend to minimize the introduction of biases, and second, we propose recommendations for the selection and design of benchmarking datasets. PMID:29416509

  9. Identification of critical parameters for PEMFC stack performance characterization and control strategies for reliable and comparable stack benchmarking

    DEFF Research Database (Denmark)

    Mitzel, Jens; Gülzow, Erich; Kabza, Alexander

    2016-01-01

    This paper is focused on the identification of critical parameters and on the development of reliable methodologies to achieve comparable benchmark results. Possibilities for control sensor positioning and for parameter variation in sensitivity tests are discussed and recommended options for the ...

  10. IAEA coordinated research project on 'analytical and experimental benchmark analyses of accelerator driven systems'

    International Nuclear Information System (INIS)

    Ait-Abderrahim, H.; Stanculescu, A.

    2006-01-01

    This paper provides the general background and the main specifications of the benchmark exercises performed within the framework of the IAEA Coordinated Research Project (CRP) on Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems. The overall objective of the CRP, performed within the framework of the Technical Working Group on Fast Reactors (TWG-FR) of IAEA's Nuclear Energy Dept., is to contribute to the generic R and D efforts in various fields common to innovative fast neutron system development, i.e. heavy liquid metal thermal hydraulics, dedicated transmutation fuels and associated core designs, theoretical nuclear reaction models, measurement and evaluation of nuclear data for transmutation, and development and validation of calculational methods and codes. (authors)

  11. Analysis of the international criticality benchmark no 19 of a realistic fuel dissolver

    International Nuclear Information System (INIS)

    Smith, H.J.; Santamarina, A.

    1991-01-01

    The dispersion of the order of 12000 pcm in the results of the international criticality fuel dissolver benchmark calculation, exercise OECD/19, showed the necessity of analysing the calculational methods used in this case. The APOLLO/PIC method, developed to treat this type of problem, permits us to propose international reference values. The problem studied here led us to investigate two supplementary parameters in addition to the double heterogeneity of the fuel: the reactivity variation as a function of moderation and the effects of the size of the fuel pellets during dissolution. The following conclusions were obtained: The fast cross-section sets used in the international SCALE package introduce a bias of -3000 pcm in undermoderated lattices. More generally, the fast and resonance nuclear data in criticality codes are not sufficiently reliable. Geometries with micro-pellets led to an underestimation of reactivity at the end of dissolution of 3000 pcm in certain 1988 Sn calculations; this bias was avoided in the updated 1990 computation through correct use of the calculation tools. The reactivity introduced by the dissolved fuel is underestimated by 3000 pcm in contributions based on the standard NITAWL module of the SCALE code. More generally, the neutron balance analysis pointed out that the standard nuclear data self-shielding formalism cannot account for U-238 resonance mutual self-shielding in the pellet-fissile liquor interaction. The combination of these three types of bias explains the underestimation of the reactivity of dissolver lattices, by -2000 to -6000 pcm, in all of the international contributions. The improved 1990 calculations confirm the need to use rigorous methods in the calculation of systems which involve the fuel double heterogeneity. This study points out the importance of periodic benchmarking exercises for probing the efficacy of criticality codes, data libraries, and their use
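The biases above are quoted in pcm, i.e. 10^-5 in reactivity, with rho = (k - 1)/k. A small sketch of the conversion, using illustrative k values rather than the benchmark's:

```python
def reactivity_pcm(k):
    """Reactivity rho = (k - 1)/k, expressed in pcm (1 pcm = 1e-5)."""
    return 1e5 * (k - 1.0) / k

def bias_pcm(k_calc, k_ref):
    """Reactivity bias of a calculation relative to a reference, in pcm."""
    return reactivity_pcm(k_calc) - reactivity_pcm(k_ref)

# An underestimation of about 3000 pcm against a critical reference (k = 1):
print(round(bias_pcm(0.970, 1.000)))  # -3093
```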

  12. Summary Report of Consultants' Meeting on Accuracy of Experimental and Theoretical Nuclear Cross-Section Data for Ion Beam Analysis and Benchmarking

    International Nuclear Information System (INIS)

    Abriola, Daniel; Dimitriou, Paraskevi; Gurbich, Alexander F.

    2013-11-01

    A summary is given of a Consultants' Meeting assembled to assess the accuracy of experimental and theoretical nuclear cross-section data for Ion Beam Analysis and the role of benchmarking experiments. The participants discussed the different approaches to assigning uncertainties to evaluated data, and presented results of benchmark experiments performed in their laboratories. They concluded that priority should be given to the validation of cross-section data by benchmark experiments, and recommended that an experts meeting be held to prepare the guidelines, methodology and work program of a future coordinated project on benchmarking.

  13. GROWTH OF THE INTERNATIONAL CRITICALITY SAFETY AND REACTOR PHYSICS EXPERIMENT EVALUATION PROJECTS

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; John D. Bess; Jim Gulliford

    2011-09-01

    Since the International Conference on Nuclear Criticality Safety (ICNC) 2007, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) have continued to expand their efforts and broaden their scope. Eighteen countries participated in the ICSBEP in 2007; now there are 20, with recent contributions from Sweden and Argentina. The IRPhEP has also expanded, from eight contributing countries in 2007 to 16 in 2011. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' have increased from 442 evaluations (38000 pages), containing benchmark specifications for 3955 critical or subcritical configurations, to 516 evaluations (nearly 55000 pages), containing benchmark specifications for 4405 critical or subcritical configurations, in the 2010 Edition of the ICSBEP Handbook. The contents of the Handbook have also increased from 21 to 24 criticality-alarm-placement/shielding configurations with multiple dose points for each, and from 20 to 200 configurations categorized as fundamental physics measurements relevant to criticality safety applications. Approximately 25 new evaluations and 150 additional configurations are expected to be added to the 2011 edition of the Handbook. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Reactor Physics Benchmark Experiments' have increased from 16 experimental series performed at 12 different reactor facilities to 53 experimental series performed at 30 different reactor facilities in the 2011 edition of the Handbook. Considerable effort has also been made to improve the functionality of the searchable database, DICE (Database for the International Criticality Benchmark Evaluation Project), and to verify the accuracy of the data contained therein. DICE will be discussed in separate papers at ICNC 2011. The status of the

  14. CSRL-V ENDF/B-V 227-group neutron cross-section library and its application to thermal-reactor and criticality safety benchmarks

    International Nuclear Information System (INIS)

    Ford, W.E. III; Diggs, B.R.; Knight, J.R.; Greene, N.M.; Petrie, L.M.; Webster, C.C.; Westfall, R.M.; Wright, R.Q.; Williams, M.L.

    1982-01-01

    Characteristics and contents of the CSRL-V (Criticality Safety Reference Library based on ENDF/B-V data) 227-neutron-group AMPX master and pointwise cross-section libraries are described. Results obtained in using CSRL-V to calculate performance parameters of selected thermal reactor and criticality safety benchmarks are discussed

  15. Status of international benchmark experiment for effective delayed neutron fraction (βeff)

    Energy Technology Data Exchange (ETDEWEB)

    Okajima, S.; Sakurai, T.; Mukaiyama, T. [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-03-01

    To improve the prediction accuracy of βeff, the international benchmark experiment program BERNICE (Beta Effect Reactor Experiment for a New International Collaborative Evaluation) was planned. The program consists of two parts: BERNICE-MASURCA and BERNICE-FCA. The former was carried out in the fast critical facility MASURCA of CEA, France, between 1993 and 1994. The latter started in the FCA, JAERI, in 1995 and is still ongoing. In these benchmark experiments, various experimental techniques have been applied for in-pile measurements of βeff. The accuracy of the measurements was better than 3%. (author)

  16. Experimental benchmark of kinetic simulations of capacitively coupled plasmas in molecular gases

    Science.gov (United States)

    Donkó, Z.; Derzsi, A.; Korolov, I.; Hartmann, P.; Brandt, S.; Schulze, J.; Berger, B.; Koepke, M.; Bruneau, B.; Johnson, E.; Lafleur, T.; Booth, J.-P.; Gibson, A. R.; O'Connell, D.; Gans, T.

    2018-01-01

    We discuss the origin of uncertainties in the results of numerical simulations of low-temperature plasma sources, focusing on capacitively coupled plasmas. These sources can be operated in various gases/gas mixtures, over a wide domain of excitation frequency, voltage, and gas pressure. At low pressures, the non-equilibrium character of the charged particle transport prevails and particle-based simulations become the primary tools for their numerical description. The particle-in-cell method, complemented with Monte Carlo type description of collision processes, is a well-established approach for this purpose. Codes based on this technique have been developed by several authors/groups, and have been benchmarked with each other in some cases. Such benchmarking demonstrates the correctness of the codes, but the underlying physical model remains unvalidated. This is a key point, as this model should ideally account for all important plasma chemical reactions as well as for the plasma-surface interaction via including specific surface reaction coefficients (electron yields, sticking coefficients, etc). In order to test the models rigorously, comparison with experimentalbenchmark data’ is necessary. Examples will be given regarding the studies of electron power absorption modes in O2, and CF4-Ar discharges, as well as on the effect of modifications of the parameters of certain elementary processes on the computed discharge characteristics in O2 capacitively coupled plasmas.

  17. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  18. 239Pu prompt fission neutron spectra impact on a set of criticality and experimental reactor benchmarks

    International Nuclear Information System (INIS)

    Peneliau, Y.; Litaize, O.; Archier, P.; De Saint Jean, C.

    2014-01-01

    A large set of nuclear data is being investigated to improve the calculation predictions of the new neutron transport simulation codes. For the next generation of nuclear power plants (GEN IV projects), one expects to reduce the calculated uncertainties, which mainly come from nuclear data and are still very large, before taking integral information into account in the adjustment process. In France, future nuclear power plant concepts will probably use MOX fuel, either in Sodium Fast Reactors or in Gas Cooled Fast Reactors. Consequently, knowledge of 239 Pu cross sections and other nuclear data is a crucial issue for reducing these sources of uncertainty. The Prompt Fission Neutron Spectra (PFNS) of 239 Pu are part of these relevant data (an IAEA working group is even dedicated to PFNS), and the work presented here deals with this particular topic. The main international data files (i.e. JEFF-3.1.1, ENDF/B-VII.0, JENDL-4.0, BRC-2009) have been considered and compared with two different spectra, from the works of Maslov and Kornilov respectively. The spectra are first characterized and compared by calculating their mathematical moments. Then, a reference calculation using the whole JEFF-3.1.1 evaluation file is performed and compared with calculations performed with new evaluation files in which the data block containing the fission spectra (MF=5, MT=18) is replaced by each investigated spectrum. A set of benchmarks is used to analyze the effects of the PFNS, covering criticality cases and mock-up cases with various neutron flux spectra (thermal, intermediate, and fast). Data from many ICSBEP experiments are used (PU-SOL-THERM, PU-MET-FAST, PU-MET-INTER and PU-MET-MIXED), and French mock-up experiments are also investigated (EOLE for the thermal neutron flux spectrum and MASURCA for the fast neutron flux spectrum).
This study shows that many experiments and neutron parameters are very sensitive to
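    The moment comparison mentioned in this abstract can be illustrated with a short numerical sketch. The Watt parameters below (a = 0.966 MeV, b = 2.842 MeV⁻¹) are illustrative values commonly quoted for a thermal-fission Watt spectrum; they are not taken from the evaluations discussed above:

    ```python
    import numpy as np

    def watt_spectrum(E, a=0.966, b=2.842):
        # Unnormalized Watt fission spectrum chi(E) ~ exp(-E/a) * sinh(sqrt(b*E)),
        # with E in MeV. Parameters a, b are illustrative.
        return np.exp(-E / a) * np.sinh(np.sqrt(b * E))

    E = np.linspace(1e-3, 20.0, 20000)     # energy grid in MeV (uniform spacing)
    dE = E[1] - E[0]
    chi = watt_spectrum(E)
    chi /= chi.sum() * dE                  # normalize to unit integral

    # First moment (mean emission energy) and second central moment (variance):
    mean = (E * chi).sum() * dE
    var = ((E - mean) ** 2 * chi).sum() * dE

    print(mean, var)
    ```

    Comparing such low-order moments (mean energy, variance, and higher moments) between two tabulated spectra is a compact way to characterize how "hard" or "soft" each PFNS is before running full transport calculations.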

  19. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm-efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance can be due to such things as product innovation, management quality, and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve the firm's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, together determine which performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking of firm-level performance are critical interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons; hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures, and uses econometric models to describe and then propose a method to forecast and benchmark performance.

  20. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  1. Verification of HELIOS-MASTER system through benchmark of critical experiments

    International Nuclear Information System (INIS)

    Kim, H. Y.; Kim, K. Y.; Cho, B. O.; Lee, C. C.; Zee, S. O.

    1999-01-01

    The HELIOS-MASTER code system is verified through benchmarks of the critical experiments performed by the RRC 'Kurchatov Institute' with water-moderated, hexagonally pitched lattices of highly enriched uranium fuel rods (80 w/o). We also ran the same input with the MCNP code, as described in the evaluation report, and compared our results with those of the report. HELIOS, developed by Scandpower A/S, is a two-dimensional transport program for the generation of group cross-sections, and MASTER, developed by KAERI, is a three-dimensional nuclear design and analysis code based on two-group diffusion theory; it solves the neutronics model with the AFEN (Analytic Function Expansion Nodal) method for hexagonal geometry. The results show that the HELIOS-MASTER code system is fast and accurate enough to be used as a nuclear core analysis tool for hexagonal geometry.

  2. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    In this paper the specification of the first phase of the WWER-1000 Burnup Credit Benchmark (depletion calculations) is given. The second phase, criticality calculations for the WWER-1000 fuel pin cell, will be given after evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field. (Author)

  3. OECD/NEA expert group on uncertainty analysis for criticality safety assessment: Results of benchmark on sensitivity calculation (phase III)

    Energy Technology Data Exchange (ETDEWEB)

    Ivanova, T.; Laville, C. [Institut de Radioprotection et de Surete Nucleaire IRSN, BP 17, 92262 Fontenay aux Roses (France); Dyrda, J. [Atomic Weapons Establishment AWE, Aldermaston, Reading, RG7 4PR (United Kingdom); Mennerdahl, D. [E Mennerdahl Systems EMS, Starvaegen 12, 18357 Taeby (Sweden); Golovko, Y.; Raskach, K.; Tsiboulia, A. [Inst. for Physics and Power Engineering IPPE, 1, Bondarenko sq., 249033 Obninsk (Russian Federation); Lee, G. S.; Woo, S. W. [Korea Inst. of Nuclear Safety KINS, 62 Gwahak-ro, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Bidaud, A.; Sabouri, P. [Laboratoire de Physique Subatomique et de Cosmologie LPSC, CNRS-IN2P3/UJF/INPG, Grenoble (France); Patel, A. [U.S. Nuclear Regulatory Commission (NRC), Washington, DC 20555-0001 (United States); Bledsoe, K.; Rearden, B. [Oak Ridge National Laboratory ORNL, M.S. 6170, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Gulliford, J.; Michel-Sendis, F. [OECD/NEA, 12, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2012-07-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and newly developed sensitivity analysis methods. (authors)
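    As a rough illustration of what a sensitivity coefficient S = (σ/k)(dk/dσ) means, the sketch below applies a central finite difference to a toy one-group model of k∞. The model and numbers are invented for illustration only and are unrelated to the TSUNAMI/MORET/SUSD3D methodologies compared in the benchmark:

    ```python
    def k_inf(sigma_f, sigma_c, nu=2.4):
        """Toy one-group infinite-medium multiplication factor:
        k_inf = nu * sigma_f / (sigma_f + sigma_c). Purely illustrative."""
        return nu * sigma_f / (sigma_f + sigma_c)

    def sensitivity_to_sigma_f(sigma_f, sigma_c, rel_step=1e-5):
        """Central-difference estimate of S = (sigma_f / k) * dk/dsigma_f."""
        h = sigma_f * rel_step
        k0 = k_inf(sigma_f, sigma_c)
        dk = (k_inf(sigma_f + h, sigma_c) - k_inf(sigma_f - h, sigma_c)) / (2 * h)
        return sigma_f * dk / k0

    S = sensitivity_to_sigma_f(sigma_f=1.0, sigma_c=0.5)
    # For this toy model the analytic value is sigma_c / (sigma_f + sigma_c) = 1/3.
    print(S)
    ```

    Production tools compute such coefficients with perturbation theory rather than by direct recalculation, and also account for the 'implicit' component arising from the effect of a cross-section change on self-shielded data.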

  4. Benchmarking for On-Scalp MEG Sensors.

    Science.gov (United States)

    Xie, Minshu; Schneiderman, Justin F; Chukharkin, Maxim L; Kalabukhov, Alexei; Riaz, Bushra; Lundqvist, Daniel; Whitmarsh, Stephen; Hamalainen, Matti; Jousmaki, Veikko; Oostenveld, Robert; Winkler, Dag

    2017-06-01

    We present a benchmarking protocol for quantitatively comparing emerging on-scalp magnetoencephalography (MEG) sensor technologies to their counterparts in state-of-the-art MEG systems. As a means of validation, we compare a high-critical-temperature superconducting quantum interference device (high-Tc SQUID) with the low-Tc SQUIDs of an Elekta Neuromag TRIUX system in MEG recordings of auditory and somatosensory evoked fields (SEFs) on one human subject. We measure the expected signal gain for the auditory-evoked fields (deeper sources) and notice some unfamiliar features in the on-scalp sensor-based recordings of SEFs (shallower sources). The experimental results serve as a proof of principle for the benchmarking protocol. This approach is straightforward, general to various on-scalp MEG sensors, and convenient to use on human subjects. The unexpected features in the SEFs suggest on-scalp MEG sensors may reveal information about neuromagnetic sources that is otherwise difficult to extract from state-of-the-art MEG recordings. As the first systematically established on-scalp MEG benchmarking protocol, magnetic sensor developers can employ this method to prove the utility of their technology in MEG recordings. Further exploration of the SEFs with on-scalp MEG sensors may reveal unique information about their sources.

  5. Benchmark of Space Charge Simulations and Comparison with Experimental Results for High Intensity, Low Energy Accelerators

    CERN Document Server

    Cousineau, Sarah M

    2005-01-01

    Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools have been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.

  6. Benchmark analysis of SPERT-IV reactor with Monte Carlo code MVP

    International Nuclear Information System (INIS)

    Motalab, M.A.; Mahmood, M.S.; Khan, M.J.H.; Badrun, N.H.; Lyric, Z.I.; Altaf, M.H.

    2014-01-01

    Highlights: • MVP was used for SPERT-IV core modeling. • Neutronics analysis of the SPERT-IV reactor was performed. • Calculations were performed to estimate the critical rod height and excess reactivity. • Neutron flux, time-integrated neutron flux and Cd ratio were also calculated. • Calculated values agree with experimental data. - Abstract: The benchmark experiment of the SPERT-IV D-12/25 reactor core has been analyzed with the Monte Carlo code MVP using cross-section libraries based on JENDL-3.3. The MVP simulation was performed for the clean and cold core. The estimated values of k eff at the experimental critical rod height and the core excess reactivity were within 5% of the experimental data. Thermal neutron flux profiles at different vertical and horizontal positions of the core were estimated, and the cadmium ratio at different points of the core was also estimated. All estimated results have been compared with the experimental results, and generally good agreement has been found between the measured and calculated values.

  7. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997 Volume 2-Calculations Performed in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Primm III, RT

    2002-05-29

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  8. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997 Volume 2-Calculations Performed in the United States

    International Nuclear Information System (INIS)

    Primm III, RT

    2002-01-01

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors

  9. Tests of HAMMER (original) and HAMMER-TECHNION systems with critical experiments

    International Nuclear Information System (INIS)

    Santos, A. dos

    1986-01-01

    The performance of the reactor cell codes HAMMER (original) and HAMMER-TECHNION was tested against experimental results of critical benchmarks. To keep the methodologies consistent, only the NIT (Nordheim Integral Technique) was used in HAMMER-TECHNION, so all differences found in the analyses with these systems can be attributed to their basic nuclear data libraries. Five critical benchmarks were utilized in this study. Surprisingly, the performance of the original HAMMER system was better than that of HAMMER-TECHNION. (Author) [pt

  10. Benchmark Analysis Of The High Temperature Gas Cooled Reactors Using Monte Carlo Technique

    International Nuclear Information System (INIS)

    Nguyen Kien Cuong; Huda, M.Q.

    2008-01-01

    Information about several past and present experimental and prototypical facilities based on High Temperature Gas-Cooled Reactor (HTGR) concepts has been examined to assess the potential of these facilities for use in this benchmarking effort. Both reactors and critical facilities applicable to pebble-bed type cores have been considered. Two facilities, HTR-PROTEUS of Switzerland and HTR-10 of China, and one conceptual design from Germany, HTR-PAP20, appear to have the greatest potential for use in benchmarking the codes. This study presents benchmark analyses of these reactor technologies using the MCNP4C2 and MVP/GMVP codes to support the evaluation and future development of HTGRs. The ultimate objective of this work is to identify and develop new capabilities needed to support the Generation IV initiative. (author)

  11. Power reactor pressure vessel benchmarks

    International Nuclear Information System (INIS)

    Rahn, F.J.

    1978-01-01

    A review is given of the current status of experimental and calculational benchmarks for use in understanding the radiation embrittlement effects in the pressure vessels of operating light water power reactors. The requirements of such benchmarks for application to pressure vessel dosimetry are stated. Recent developments in active and passive neutron detectors sensitive in the ranges of importance to embrittlement studies are summarized and recommendations for improvements in the benchmark are made. (author)

  12. Review for session K - benchmarks

    International Nuclear Information System (INIS)

    McCracken, A.K.

    1980-01-01

    Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of Benchmarks and will be considered individually

  13. Research coordination meeting of the coordinated research project on analytical and experimental benchmark analyses of accelerator driven systems. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The CRP's overall objective is to make contributions towards the realization of a transmutation demonstration facility. The specific objective of the CRP is to improve the present understanding of the coupling of the ADS spallation source with the multiplicative sub-critical core. As an outcome, the CRP aims at advancing the efforts under way in the Member States towards the proof of practicality of ADS-based transmutation by providing the information exchange and collaborative research framework needed to ensure that the tools to perform detailed ADS calculations, from the high-energy proton beam down to thermal neutron energies, are available. The CRP will address all major physics phenomena of the spallation source and its coupling to the sub-critical core. The participants will perform computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. Apart from analytical benchmark exercises, the CRP will integrate some of the planned experimental demonstrations of coupling at power between a sub-critical core and a spallation source (e.g., YALINA Booster in Belarus and SAD at JINR, Dubna). The estimated duration of the CRP is 5 years. Following the establishment, during 2004, of the international CRP team by putting in place research agreements and contracts, and after convening this kick-off research RCM, the implementation plan of the CRP foresees three more RCMs (in 2007, 2008, and 2009, respectively) and the publication of the final report in 2010.

  14. Numisheet2005 Benchmark Analysis on Forming of an Automotive Underbody Cross Member: Benchmark 2

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    This report presents an international cooperation benchmark effort focusing on simulations of a sheet metal stamping process. A forming process of an automotive underbody cross member using steel and aluminum blanks is used as a benchmark. Simulation predictions from each submission are analyzed via comparison with the experimental results. A brief summary of various models submitted for this benchmark study is discussed. Prediction accuracy of each parameter of interest is discussed through the evaluation of cumulative errors from each submission

  15. Reactor group constants and benchmark test

    Energy Technology Data Exchange (ETDEWEB)

    Takano, Hideki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

    The evaluated nuclear data files such as JENDL, ENDF/B-VI and JEF-2 are validated by analyzing critical mock-up experiments for various reactor types and assessing their applicability to nuclear characteristics such as criticality, reaction rates, reactivities, etc. This is called benchmark testing. In nuclear calculations, the diffusion and transport codes use group constant libraries generated by processing the nuclear data files. In this paper, the calculation methods for reactor group constants and the benchmark test are described. Finally, a new group constants scheme is proposed. (author)

  16. Benchmark Evaluation of Plutonium Nitrate Solution Arrays

    International Nuclear Information System (INIS)

    Marshall, M.A.; Bess, J.D.

    2011-01-01

    In October and November of 1981, thirteen approach-to-critical experiments were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and the critical array spacing with a tight-fitting Plexiglas® reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity (H+) of 5.1. The plutonium was of low 240Pu content (2.9 wt.%). These experiments were performed to fill a gap in experimental data regarding criticality limits for storing and handling arrays of Pu solution in reprocessing facilities. Of the thirteen approach-to-critical experiments, eleven resulted in extrapolations to critical configurations. Four of the approaches were extrapolated to the critical number of bottles; these were not evaluated further due to the large uncertainty associated with modeling a fraction of a bottle. The remaining seven approaches were extrapolated to the critical array spacing of 3-4 and 4-4 arrays; these seven critical configurations were evaluated for inclusion as acceptable benchmark experiments in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Detailed and simple models of these configurations were created, and the associated bias of the simplifications was determined to range from 0.00116 to 0.00162 ± 0.00006 Δkeff. Monte Carlo analysis of all models was completed using MCNP5 with ENDF/B-VII.0 neutron cross-section libraries. A thorough uncertainty analysis of all critical, geometric, and material parameters was performed using parameter
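    The approach-to-critical extrapolations described in this evaluation rest on the standard inverse-multiplication (1/M) technique: 1/M is plotted against the varied parameter (here, array spacing) and extrapolated linearly to 1/M = 0, which estimates the critical configuration. A minimal sketch with hypothetical data (not the PNL measurements):

    ```python
    import numpy as np

    # Hypothetical 1/M data: inverse neutron multiplication measured at
    # successively smaller bottle spacings (cm) during an approach-to-critical.
    spacing = np.array([40.0, 36.0, 32.0, 28.0])
    inv_M = np.array([0.30, 0.21, 0.12, 0.04])

    # Fit 1/M as a linear function of spacing; the spacing at which the fit
    # crosses 1/M = 0 is the extrapolated critical spacing.
    slope, intercept = np.polyfit(spacing, inv_M, 1)
    critical_spacing = -intercept / slope

    print(critical_spacing)
    ```

    In practice the 1/M curve is not exactly linear, so experiments stop safely short of critical and the extrapolation, with its uncertainty, is what benchmark evaluators must model.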

  17. Compilation report of VHTRC temperature coefficient benchmark calculations

    Energy Technology Data Exchange (ETDEWEB)

    Yasuda, Hideshi; Yamane, Tsuyoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1995-11-01

    A calculational benchmark problem has been proposed by JAERI to an IAEA Coordinated Research Program, 'Verification of Safety Related Neutronic Calculation for Low-enriched Gas-cooled Reactors', to investigate the accuracy of calculation results obtained with the codes of the participating countries. The benchmark is based on assembly heating experiments at a pin-in-block type critical assembly, VHTRC. Requested calculation items are the cell parameters, effective multiplication factor, temperature coefficient of reactivity, reaction rates, fission rate distribution, etc. Seven institutions from five countries joined the benchmark work. Calculation results are summarized in this report with some remarks by the authors. Each institute analyzed the problem with the calculation code system prepared for the HTGR development of its own country. The values of the most important parameter, k eff, from all institutes showed good agreement with each other and with the experimental ones within 1%. The temperature coefficients agreed within 13%. The values of several cell parameters calculated by some institutes did not agree with the others'; it will be necessary to recheck the calculation conditions to obtain better agreement. (J.P.N.).
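    One of the requested calculation items, the temperature coefficient of reactivity, can be formed from two multiplication factors computed at different temperatures, using the reactivity ρ = (k - 1)/k. A small sketch with hypothetical numbers (not VHTRC results):

    ```python
    def temperature_coefficient(k_cold, k_hot, T_cold, T_hot):
        """Isothermal temperature coefficient of reactivity (per degC):
        alpha = (rho_hot - rho_cold) / (T_hot - T_cold), rho = (k - 1)/k."""
        rho_cold = (k_cold - 1.0) / k_cold
        rho_hot = (k_hot - 1.0) / k_hot
        return (rho_hot - rho_cold) / (T_hot - T_cold)

    # Hypothetical values: k_eff drops as the assembly is heated,
    # giving a negative (self-stabilizing) coefficient.
    alpha = temperature_coefficient(k_cold=1.00250, k_hot=1.00050,
                                    T_cold=25.0, T_hot=200.0)
    print(alpha)
    ```

    The benchmark's 13% spread among institutes refers to exactly this kind of derived quantity, which amplifies small disagreements in the underlying k eff values.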

  18. Criticality benchmark results for the ENDF60 library with MCNP trademark

    International Nuclear Information System (INIS)

    Keen, N.D.; Frankle, S.C.; MacFarlane, R.E.

    1995-01-01

    The continuous-energy neutron data library ENDF60, for use with the Monte Carlo N-Particle radiation transport code MCNP4A, was released in the fall of 1994. The ENDF60 library comprises 124 nuclide data files based on the ENDF/B-VI (B-VI) evaluations through Release 2. Fifty-two percent of these B-VI evaluations are translations from ENDF/B-V (B-V); the remaining forty-eight percent are new evaluations that have sometimes changed significantly. Among these changes are greatly increased use of isotopic evaluations, more extensive resonance-parameter evaluations, and energy-angle-correlated distributions for secondary particles. In particular, the upper energy limit of the resolved resonance region of 235 U, 238 U and 239 Pu has been extended from 0.082, 4.0, and 0.301 keV to 2.25, 10.0, and 2.5 keV, respectively. As regulatory oversight has advanced and performing critical experiments has become more difficult, there has been increased reliance on computational methods. For the criticality safety community, the performance of the combined transport code and data library is of interest. The purpose of this abstract is to provide benchmarking results to aid users in determining the best data library for their application

  19. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels of coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  20. Systematic approach to establishing criticality biases

    International Nuclear Information System (INIS)

    Larson, S.L.

    1995-09-01

    A systematic approach has been developed to determine benchmark biases and apply those biases to code results to meet the requirements of DOE Order 5480.24 regarding documenting criticality safety margins. Previously, validation of the code against experimental benchmarks to show reasonable agreement was sufficient. However, DOE Order 5480.24 requires contractors to adhere to the requirements of ANSI/ANS-8.1 and establish subcritical margins. A method was developed to incorporate biases and uncertainties from benchmark calculations into a k eff value with quantifiable uncertainty. The method produces a 95% confidence level in both the k eff value of the scenario modeled and the distribution of the k eff values calculated by the Monte Carlo code. Application of the method to a group of benchmarks modeled using the KENO-Va code and the SCALE 27-group cross-section library is also presented.
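    The bookkeeping behind such a bias treatment can be sketched in a few lines. This is an illustrative reconstruction, not the report's exact statistical method: the one-sided 95% factor of 1.645, the crediting rule for the bias sign, and all numbers are assumptions.

```python
import statistics

def make_keff_adjuster(calc_keffs, exp_keffs, mc_sigma, z95=1.645):
    """Illustrative bias treatment: the bias is the mean difference between
    calculated and experimental benchmark k_eff values; only a non-conservative
    (negative) bias is credited against the application result."""
    diffs = [c - e for c, e in zip(calc_keffs, exp_keffs)]
    bias = statistics.mean(diffs)     # code's average over/under-prediction
    spread = statistics.stdev(diffs)  # benchmark-to-benchmark scatter
    def adjusted(k_app):
        # penalize for a negative bias, the scatter, and the Monte Carlo sigma
        return k_app - min(bias, 0.0) + z95 * (spread**2 + mc_sigma**2) ** 0.5
    return adjusted

# Invented benchmark suite: four calculated k_eff values against k_exp = 1.0
adjust = make_keff_adjuster([0.998, 1.002, 0.996, 1.001], [1.0] * 4, mc_sigma=0.001)
k_upper = adjust(0.95)  # adjusted upper k_eff for a scenario calculated at 0.95
```

The adjusted value is then compared against the upper subcritical limit rather than the raw Monte Carlo mean.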

  1. New calculations for critical assemblies using MCNP4B

    International Nuclear Information System (INIS)

    Adams, A.A.; Frankle, S.C.; Little, R.C.

    1997-07-01

    A suite of 41 criticality benchmarks has been modeled using MCNP trademark (version 4B). Most of the assembly specifications were obtained from the Cross Section Evaluation Working Group (CSEWG) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) compendiums of experimental benchmarks. A few assembly specifications were obtained from experimental papers. The suite contains thermal and fast assemblies, bare and reflected assemblies, and emphasizes 233 U, 235 U, 238 U, and 239 Pu. The values of k eff for each assembly in the suite were calculated using MCNP libraries derived primarily from release 2 of ENDF/B-V and release 2 of ENDF/B-VI. The results show that the new ENDF/B-VI.2 evaluations for H, O, N, B, 235 U, 238 U, and 239 Pu can have a significant impact on the values of k eff . In addition to the integral quantity k eff , several additional experimental measurements were performed and documented. These experimental measurements include central fission and reaction-rate ratios for various isotopes, and neutron leakage and flux spectra. They provide more detailed information about the accuracy of the nuclear data than can k eff . Comparison calculations were performed using both ENDF/B-V.2 and ENDF/B-VI.2-based data libraries. The purpose of this paper is to compare the results of these additional calculations with experimental data, and to use these results to assess the quality of the nuclear data

  2. Benchmark thermal-hydraulic analysis with the Agathe Hex 37-rod bundle

    International Nuclear Information System (INIS)

    Barroyer, P.; Hudina, M.; Huggenberger, M.

    1981-09-01

    Different computer codes are compared in prediction performance, based on the AGATHE HEX 37-rod bundle experimental results. The compilation of all available calculation results allows a critical assessment of the codes. For the time being, it is concluded which codes are best suited for gas-cooled fuel element design purposes. Based on the positive aspects of these cooperative benchmark exercises, an attempt is made to define a computer code verification procedure. (Auth.)

  3. Coupled fast-thermal core 'HERBE', as the benchmark experiment at the RB reactor

    International Nuclear Information System (INIS)

    Pesic, M.

    2003-10-01

    Validation of the well-known Monte Carlo code MCNP against measured criticality data for the coupled fast-thermal HERBE system at the RB research reactor is shown in this paper. Experimental data are obtained for the regular HERBE core and for cases of controlled flooding of the neutron converter zone by heavy water. Earlier calculations of these criticality parameters, done by a combination of transport and diffusion codes using a 2D geometry model, are also compared to new calculations carried out with the MCNP code in 3D geometry, applying a new detailed 3D model of the HEU fuel slug developed recently. Satisfactory agreement between the HERBE criticality calculation results and the experimental data is obtained, in spite of the complex heterogeneous composition of the HERBE core, confirming that the HERBE core can be used as a criticality benchmark for coupled fast-thermal cores. (author)

  4. Availability of Neutronics Benchmarks in the ICSBEP and IRPhEP Handbooks for Computational Tools Testing

    Energy Technology Data Exchange (ETDEWEB)

    Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana; Hill, Ian; Gulliford, Jim

    2017-02-01

    In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, and constitute significantly valuable resources of data supporting past, current, and future research activities. Those valuable assets form the basis for the recording, development, and validation of our nuclear methods and integral nuclear data [1]. The loss of these experimental data, which has occurred all too often in recent years, is tragic. The high cost to repeat many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed, under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), to address the challenges of not just data preservation, but evaluation of the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications [3]. Annually, contributors from around the world continue to collaborate in the evaluation and review of select benchmark experiments for preservation and dissemination. The extensively peer-reviewed integral benchmark data can then be utilized to support nuclear design and safety analysts to validate the analytical tools, methods, and data needed for next

  5. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduces the excess conservatism and brings the predictions closer to the benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivities have been used to create 11 benchmark cases for burnups of 10, 20, 30, 40, 50, and 60 GWd/MTU and three cooling times (100 h, 5 years, and 15 years). These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess whether the 5% decrement approach is conservative for determining the depletion uncertainty.
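    The "5% decrement approach" referenced above can be illustrated numerically. The k_eff values below are invented, and the conversion to reactivity uses the standard definition ρ = (k − 1)/k; only the idea (5% of the fresh-to-depleted reactivity decrement taken as the depletion uncertainty) comes from the abstract.

```python
def reactivity_pcm(k):
    """rho = (k - 1) / k, expressed in pcm (1 pcm = 1e-5 in reactivity)."""
    return (k - 1.0) / k * 1.0e5

def depletion_decrement_pcm(k_fresh, k_burned):
    """Reactivity decrement between fresh and depleted fuel."""
    return reactivity_pcm(k_fresh) - reactivity_pcm(k_burned)

# Hypothetical lattice k_eff values at 0 and 40 GWd/MTU
decrement = depletion_decrement_pcm(1.150, 1.020)
bias_5pct = 0.05 * decrement  # 5% of the decrement as the depletion uncertainty
```

A depletion validation against measured reactivity decrements, as in the EPRI benchmarks, then checks whether this 5% allowance actually bounds the observed bias.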

  6. Benchmarking in digital circuit design automation

    NARCIS (Netherlands)

    Jozwiak, L.; Gawlowski, D.M.; Slusarczyk, A.S.

    2008-01-01

    This paper focuses on benchmarking, which is the main experimental approach to the design method and EDA-tool analysis, characterization and evaluation. We discuss the importance and difficulties of benchmarking, as well as the recent research effort related to it. To resolve several serious

  7. Benchmark calculation for water reflected STACY cores containing low enriched uranyl nitrate solution

    International Nuclear Information System (INIS)

    Miyoshi, Yoshinori; Yamamoto, Toshihiro; Nakamura, Takemi

    2001-01-01

    In order to validate the applicability of criticality calculation codes and related nuclear data libraries, a series of fundamental benchmark experiments on low enriched uranyl nitrate solution has been performed with the Static Experiment Criticality Facility, STACY, at JAERI. The basic core, composed of a single tank with a water reflector, was used for accumulating systematic data with well-known experimental uncertainties. This paper presents the outline of the core configurations of STACY, the standard calculation model, and calculation results with a Monte Carlo code and the JENDL 3.2 nuclear data library. (author)

  8. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  9. Benchmarking Using Basic DBMS Operations

    Science.gov (United States)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. Over time, however, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.

  10. Experimental facilities of Valduc critical station

    International Nuclear Information System (INIS)

    Mangin, D.; Maubert, L.

    1975-01-01

    The critical facility of Valduc and its experimentation possibilities are described. The different experimental programs carried out since 1962 are briefly reviewed. The last program, involving a plutonium nitrate solution (18.9 wt% 240 Pu) in a large parallelepipedic tank, is presented and the main results are given. (in French)

  11. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

    Benchmarking has different meaning to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry / functional benchmarking, process / generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore it is important to know what kind of benchmarking is suitable to a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  12. Experimental benchmark data for PWR rod bundle with spacer-grids

    International Nuclear Information System (INIS)

    Dominguez-Ontiveros, Elvis E.; Hassan, Yassin A.; Conner, Michael E.; Karoutas, Zeses

    2012-01-01

    In numerical simulations of fuel rod bundle flow fields, the unsteady Navier–Stokes equations have to be solved in order to determine the time (phase) dependent characteristics of the flow. In order to validate the simulation results, detailed comparison with experimental data must be done. Experiments investigating complex flows in rod bundles with spacer grids that have mixing devices (such as flow mixing vanes) have mostly been performed using single-point measurements. In order to obtain more detail and insight on the discrepancies between experimental and numerical data, as well as a global understanding of the causes of these discrepancies, comparisons of the distributions of complete phase-averaged velocity and turbulence fields at various locations near spacer-grids should be performed. The experimental technique Particle Image Velocimetry (PIV) is capable of providing such benchmark data. This paper describes an experimental database obtained using two-dimensional Time Resolved Particle Image Velocimetry (TR-PIV) measurements within a 5 × 5 PWR rod bundle with spacer-grids that have flow mixing vanes. One of the unique characteristics of this set-up is the use of the Matched Index of Refraction technique, employed in this investigation to allow complete optical access to the rod bundle. This unique feature allows flow visualization and measurement within the bundle without rod obstruction. This approach also allows the use of high temporal and spatial resolution non-intrusive dynamic measurement techniques, namely TR-PIV, to investigate the flow evolution below and immediately above the spacer. The experimental data presented in this paper include explanations of the various cases tested, such as test rig dimensions, measurement zones, the test equipment and the boundary conditions, in order to provide appropriate data for comparison with Computational Fluid Dynamics (CFD) simulations. Turbulence parameters of the obtained data are presented in order to gain

  13. Benchmark calculation of subchannel analysis codes

    International Nuclear Information System (INIS)

    1996-02-01

    In order to evaluate the analysis capabilities of various subchannel codes used in thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and major findings obtained by the calculations were as follows: (1) For the single-phase flow mixing experiments between two channels, the calculated water temperature distributions along the flow direction agreed with the experimental results when the turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) For the two-phase flow mixing experiments between two channels, in the high water flow rate cases the calculated distributions of air and water flows in each channel agreed well with the experimental results; in the low water flow rate cases, on the other hand, the air mixing rates were underestimated. (3) For the two-phase flow mixing experiments among multiple channels, the calculated mass velocities at the channel exit under steady-state conditions agreed with the experimental values within about 10%. However, the predictive errors of the exit qualities were as high as 30%. (4) For the critical heat flux (CHF) experiments, two different results were obtained: one code indicated that the CHFs calculated using the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that the CHFs were well predicted by the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) For the droplet entrainment and deposition experiments, the predictive capability was significantly increased by improving the correlations. On the other hand, a remarkable discrepancy between the codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high quality cases, while another code overestimated the droplet flow rate and underestimated the liquid film flow rate in low quality cases. (J.P.N.)

  14. Burn-up TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Persic, A.; Ravnik, M.; Zagar, T.

    1998-01-01

    Different reactor codes are used for calculations of reactor parameters. The accuracy of the programs is tested through comparison of the calculated values with the experimental results. Well-defined and accurately measured benchmarks are required. The experimental results of reactivity measurements, fuel element reactivity worth distribution and burn-up measurements are presented in this paper. The experiments were performed with a partly burnt reactor core. The experimental conditions were well defined, so that the results can be used as a burn-up benchmark test case for TRIGA Mark II reactor calculations. (author)

  15. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1

    International Nuclear Information System (INIS)

    Van Der Marck, S. C.

    2012-01-01

    Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category, and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, W. (authors)
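    A per-category, per-element analysis of such a suite amounts to grouping calculated-to-expected (C/E) ratios by benchmark class, as in this sketch. The category names follow ICSBEP naming conventions, but all k_eff values are invented for illustration.

```python
from collections import defaultdict

# (ICSBEP category, calculated k_eff, benchmark-model k_eff) -- invented values
results = [
    ("HEU-MET-FAST",   0.9991, 1.0000),
    ("HEU-MET-FAST",   1.0012, 1.0000),
    ("LEU-COMP-THERM", 0.9978, 0.9998),
    ("PU-SOL-THERM",   1.0035, 1.0000),
]

ce_by_category = defaultdict(list)
for category, calc, bench in results:
    ce_by_category[category].append(calc / bench)  # C/E ratio for each case

# Mean C/E per category; systematic departures from 1 flag library problems
mean_ce = {cat: sum(r) / len(r) for cat, r in ce_by_category.items()}
```

The same grouping keyed on dominant element rather than category would expose the Be, C, Fe, Zr, W sensitivities mentioned in the abstract.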

  16. Experimental benchmark and code validation for airfoils equipped with passive vortex generators

    International Nuclear Information System (INIS)

    Baldacchino, D; Ferreira, C; Florentie, L; Timmer, N; Van Zuijlen, A; Manolesos, M; Chaviaropoulos, T; Diakakis, K; Papadakis, G; Voutsinas, S; González Salcedo, Á; Aparicio, M; García, N R.; Sørensen, N N.; Troldborg, N

    2016-01-01

    Experimental results and complementary computations for airfoils with vortex generators are compared in this paper, as part of an effort within the AVATAR project to develop tools for wind turbine blade control devices. Measurements from two airfoils equipped with passive vortex generators, a 30% thick DU97W300 and an 18% thick NTUA T18, have been used for benchmarking several simulation tools. These tools span low-to-high complexity, ranging from engineering-level integral boundary layer tools to fully-resolved computational fluid dynamics codes. Results indicate that with appropriate calibration, engineering-type tools can capture the effects of vortex generators and outperform more complex tools. Fully resolved CFD comes at a much higher computational cost and does not necessarily capture the increased lift due to the VGs. However, given the limited experimental data available for calibration, high fidelity tools are still required for assessing the effect of vortex generators on airfoil performance. (paper)

  17. A heat transport benchmark problem for predicting the impact of measurements on experimental facility design

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel

    2016-01-01

    Highlights: • Predictive Modeling of Coupled Multi-Physics Systems (PM_CMPS) methodology is used. • Impact of measurements for reducing predicted uncertainties is highlighted. • Presented thermal-hydraulics benchmark illustrates generally applicable concepts. - Abstract: This work presents the application of the “Predictive Modeling of Coupled Multi-Physics Systems” (PM_CMPS) methodology conceived by Cacuci (2014) to a “test-section benchmark” problem in order to quantify the impact of measurements for reducing the uncertainties in the conceptual design of a proposed experimental facility aimed at investigating the thermal-hydraulics characteristics expected in the conceptual design of the G4M reactor (GEN4ENERGY, 2012). This “test-section benchmark” simulates the conditions experienced by the hottest rod within the conceptual design of the facility's test section, modeling the steady-state conduction in a rod heated internally by a cosine-like heat source, as typically encountered in nuclear reactors, and cooled by forced convection to a surrounding coolant flowing along the rod. The PM_CMPS methodology constructs a prior distribution using all of the available computational and experimental information, relying on the maximum entropy principle to maximize the impact of all available information and minimize the impact of ignorance. The PM_CMPS methodology then constructs the posterior distribution using Bayes’ theorem, and subsequently evaluates it via saddle-point methods to obtain explicit formulas for the predicted optimal temperature distributions and predicted optimal values for the thermal-hydraulics model parameters that characterize the test-section benchmark. In addition, the PM_CMPS methodology also yields reduced uncertainties for both the model parameters and responses. As a general rule, it is important to measure a quantity consistently with, and more accurately than, the information extant prior to the measurement. For

  18. TRX and UO2 criticality benchmarks with SAM-CE

    International Nuclear Information System (INIS)

    Beer, M.; Troubetzkoy, E.S.; Lichtenstein, H.; Rose, P.F.

    1980-01-01

    A set of thermal reactor benchmark calculations with SAM-CE which have been conducted at both MAGI and at BNL are described. Their purpose was both validation of the SAM-CE reactor eigenvalue capability developed by MAGI and a substantial contribution to the data testing of both ENDF/B-IV and ENDF/B-V libraries. This experience also resulted in increased calculational efficiency of the code and an example is given. The benchmark analysis included the TRX-1 infinite cell using both ENDF/B-IV and ENDF/B-V cross section sets and calculations using ENDF/B-IV of the TRX-1 full core and TRX-2 cell. BAPL-UO2-1 calculations were conducted for the cell using both ENDF/B-IV and ENDF/B-V and for the full core with ENDF/B-V

  19. Comparison of typical inelastic analysis predictions with benchmark problem experimental results

    International Nuclear Information System (INIS)

    Clinard, J.A.; Corum, J.M.; Sartory, W.K.

    1975-01-01

    The results of exemplary inelastic analyses are presented for a series of experimental benchmark problems. Consistent analytical procedures and constitutive relations were used in each of the analyses, and published material behavior data were used in all cases. Two finite-element inelastic computer programs were employed. These programs implement the analysis procedures and constitutive equations for Type 304 stainless steel that are currently used in many analyses of elevated-temperature nuclear reactor system components. The analysis procedures and constitutive relations are briefly discussed, and representative analytical results are presented and compared to the test data. The results that are presented demonstrate the feasibility of performing inelastic analyses, and they are indicative of the general level of agreement that the analyst might expect when using conventional inelastic analysis procedures. (U.S.)

  20. KENO-IV code benchmark calculation, (6)

    International Nuclear Information System (INIS)

    Nomura, Yasushi; Naito, Yoshitaka; Yamakawa, Yasuhiro.

    1980-11-01

    A series of benchmark tests has been undertaken in JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multigroup constants library MGCL. The present report describes the results of a benchmark test using criticality experiments on plutonium fuel in various shapes. In all, 33 cases of experiments have been calculated for Pu(NO 3 ) 4 aqueous solution, Pu metal or PuO 2 -polystyrene compacts in various shapes (sphere, cylinder, rectangular parallelepiped). The effective multiplication factors calculated for the 33 cases are widely distributed between 0.955 and 1.045 due to the wide range of system variables. (author)

  1. Benchmark Calibration Tests Completed for Stirling Convertor Heater Head Life Assessment

    Science.gov (United States)

    Krause, David L.; Halford, Gary R.; Bowman, Randy R.

    2005-01-01

    A major phase of benchmark testing has been completed at the NASA Glenn Research Center (http://www.nasa.gov/glenn/), where a critical component of the Stirling Radioisotope Generator (SRG) is undergoing extensive experimentation to aid the development of an analytical life-prediction methodology. Two special-purpose test rigs subjected SRG heater-head pressure-vessel test articles to accelerated creep conditions, using the standard design temperatures to stay within the wall material's operating creep-response regime, but increasing wall stresses up to 7 times over the design point. This resulted in well-controlled "ballooning" of the heater-head hot end. The test plan was developed to provide critical input to analytical parameters in a reasonable period of time.

  2. Benchmarking quantum mechanical calculations with experimental NMR chemical shifts of 2-HADNT

    Science.gov (United States)

    Liu, Yuemin; Junk, Thomas; Liu, Yucheng; Tzeng, Nianfeng; Perkins, Richard

    2015-04-01

    In this study, both GIAO-DFT and GIAO-MP2 calculations of nuclear magnetic resonance (NMR) spectra were benchmarked against experimental chemical shifts. The chemical shifts were determined experimentally for carbon-13 (C-13) of the seven carbon atoms of the TNT degradation product 2-hydroxylamino-4,6-dinitrotoluene (2-HADNT). Quantum mechanics GIAO calculations were implemented using Becke-3-Lee-Yang-Parr (B3LYP) and six other hybrid DFT methods (Becke-1-Lee-Yang-Parr (B1LYP), Becke-half-and-half-Lee-Yang-Parr (BH and HLYP), Cohen-Handy-3-Lee-Yang-Parr (O3LYP), Coulomb-attenuating-B3LYP (CAM-B3LYP), modified-Perdew-Wang-91-Lee-Yang-Parr (mPW1LYP), and Xu-3-Lee-Yang-Parr (X3LYP)), which use the same correlation functional, LYP. The calculation results showed that the GIAO-MP2 method gives the most accurate chemical shift values, and the O3LYP method provides the best prediction of chemical shifts among B3LYP and the other five DFT methods. Three types of atomic partial charges, Mulliken (MK), electrostatic potential (ESP), and natural bond orbital (NBO), were also calculated using the MP2/aug-cc-pVDZ method. A reasonable correlation was discovered between the NBO partial charges and the experimental chemical shifts of carbon-13 (C-13).
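    The charge-shift correlation referred to above is, in essence, a Pearson correlation over seven (charge, shift) pairs. A minimal sketch follows; the charge and shift values are invented placeholders, not the paper's data.

```python
# Hypothetical NBO partial charges (e) and experimental C-13 shifts (ppm)
charges = [-0.21, -0.05, 0.12, 0.18, 0.05, -0.10, 0.30]
shifts = [120.5, 128.2, 135.1, 141.0, 130.4, 124.8, 148.3]

n = len(charges)
mean_q = sum(charges) / n
mean_d = sum(shifts) / n
# Pearson r = cov(q, d) / (sigma_q * sigma_d)
cov = sum((q - mean_q) * (d - mean_d) for q, d in zip(charges, shifts))
sq = sum((q - mean_q) ** 2 for q in charges) ** 0.5
sd = sum((d - mean_d) ** 2 for d in shifts) ** 0.5
pearson_r = cov / (sq * sd)  # near 1 indicates a strong linear correlation
```

An r close to 1 (or −1) would support the "reasonable correlation" the authors report; a value near 0 would not.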

  3. Benchmark calculation of SCALE-PC 4.3 CSAS6 module and burnup credit criticality analysis

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Hee Sung; Ro, Seong Gy; Shin, Young Joon; Kim, Ik Soo [Korea Atomic Energy Research Institute, Taejon (Korea)

    1998-12-01

    Calculation biases of the SCALE-PC CSAS6 module for PWR spent fuel, metallized spent fuel and solutions of nuclear materials have been determined on the basis of the benchmark to be 0.01100, 0.02650 and 0.00997, respectively. With the aid of the code system, a nuclear criticality safety analysis for the spent fuel storage pool has been carried out to determine the minimum burnup of spent fuel required for safe storage. The criticality safety analysis is performed using three types of isotopic composition of spent fuel: ORIGEN2-calculated isotopic compositions; the conservative inventory obtained from the multiplication of ORIGEN2-calculated isotopic compositions by isotopic correction factors; and the conservative inventory of only U, Pu and {sup 241}Am. The results show that the minimum burnups for the three cases are 990, 6190 and 7270 MWd/tU, respectively, in the case of 5.0 wt% initially enriched spent fuel. (author). 74 refs., 68 figs., 35 tabs.
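    Determining a minimum burnup against a k_eff limit is, at bottom, an interpolation on a loading curve of rack k_eff versus burnup. The sketch below illustrates the idea only; the curve points, the 0.950 limit, and the function name are all assumptions, not values from the analysis.

```python
def min_burnup(curve, limit):
    """Linearly interpolate the lowest burnup (MWd/tU) at which the
    storage-rack k_eff drops to the upper subcritical limit."""
    if curve[0][1] <= limit:
        return 0.0  # even fresh fuel meets the limit
    for (b0, k0), (b1, k1) in zip(curve, curve[1:]):
        if k0 > limit >= k1:
            # straight-line interpolation between the bracketing points
            return b0 + (k0 - limit) * (b1 - b0) / (k0 - k1)
    raise ValueError("limit not reached within the tabulated burnup range")

# Hypothetical (burnup MWd/tU, rack k_eff) pairs, k_eff decreasing with burnup
curve = [(0, 0.990), (5000, 0.960), (10000, 0.935), (15000, 0.912)]
required = min_burnup(curve, limit=0.950)
```

In practice the limit itself would already include the benchmark bias (e.g. the 0.01100 determined above) subtracted from the regulatory criterion.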

  4. Benchmark critical experiments on low-enriched uranium oxide systems with H/U = 0.77

    International Nuclear Information System (INIS)

    Tuck, G.; Oh, I.

    1979-08-01

    Ten benchmark experiments were performed at the Critical Mass Laboratory at Rockwell International's Rocky Flats Plant, Golden, Colorado, for the US Nuclear Regulatory Commission. They provide accurate criticality data for low-enriched damp uranium oxide (U 3 O 8 ) systems. The core studied consisted of 152 mm cubical aluminum cans containing an average of 15,129 g of low-enriched (4.46% 235 U) uranium oxide compacted to a density of 4.68 g/cm 3 and with an H/U atomic ratio of 0.77. One hundred twenty five (125) of these cans were arranged in an approx. 770 mm cubical array. Since the oxide alone cannot be made critical in an array of this size, an enriched (approx. 93% 235 U) metal or solution driver was used to achieve criticality. Measurements are reported for systems having the least practical reflection and for systems reflected by approx. 254-mm-thick concrete or plastic. Under the three reflection conditions, the mass of the uranium metal driver ranged from 29.87 kg to 33.54 kg for an oxide core of 1864.6 kg. For an oxide core of 1824.9 kg, the weight of the high concentration (351.2 kg U/m 3 ) solution driver varied from 14.07 kg to 16.14 kg, and the weight of the low concentration (86.4 kg U/m 3 ) solution driver from 12.4 kg to 14.0 kg

  5. Review of studies on criticality safety evaluation and criticality experiment methods

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Misawa, Tsuyoshi; Yamane, Yuichi

    2013-01-01

    Since the early 1960s, many studies on criticality safety evaluation have been conducted in Japan. Computer code systems were developed initially by employing finite difference methods, and more recently by using Monte Carlo methods. Criticality experiments have also been carried out in many laboratories in Japan as well as overseas. By effectively using these study results, the Japanese Criticality Safety Handbook was published in 1988, almost at the midpoint of the last 50 years. An increased interest has been shown in criticality safety studies, and a Working Party on Nuclear Criticality Safety (WPNCS) was set up by the Nuclear Science Committee of the Organisation for Economic Co-operation and Development in 1997. WPNCS has several task forces, each in charge of one of the International Criticality Safety Benchmark Evaluation Project (ICSBEP), Subcritical Measurement, Experimental Needs, Burn-up Credit Studies and Minimum Critical Values. Criticality safety studies in Japan have been carried out in cooperation with WPNCS. This paper describes criticality safety study activities in Japan along with the contents of the Japanese Criticality Safety Handbook and the tasks of WPNCS. (author)

  6. Application of an integrated PC-based neutronics code system to criticality safety

    International Nuclear Information System (INIS)

    Briggs, J.B.; Nigg, D.W.

    1991-01-01

    An integrated system of neutronics and radiation transport software suitable for operation in an IBM PC-class environment has been under development at the Idaho National Engineering Laboratory (INEL) for the past four years. Four modules within the system are particularly useful for criticality safety applications. Using the neutronics portion of the integrated code system, effective neutron multiplication values (k_eff values) have been calculated for a variety of benchmark critical experiments for metal systems (plutonium and uranium), aqueous systems (plutonium and uranium) and LWR fuel rod arrays. A description of the codes and methods used in the analysis and the results of the benchmark critical experiments are presented in this paper. In general, excellent agreement was found between calculated and experimental results. (Author)

  7. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    This paper reviews the role of human resource management (HRM), which today plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much-needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the collaborative efforts of HRM practitioners and academics. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  8. Integral benchmarks with reference to thorium fuel cycle

    International Nuclear Information System (INIS)

    Ganesan, S.

    2003-01-01

    This is a PowerPoint presentation about the Indian participation in the CRP 'Evaluated Data for the Thorium-Uranium fuel cycle'. The plans and scope of the Indian participation are to provide selected integral experimental benchmarks for nuclear data validation, including Indian thorium burnup benchmarks, post-irradiation examination studies, comparison of basic evaluated data files, and analysis of selected benchmarks for the Th-U fuel cycle.

  9. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments is planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine-meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using materials properties for stainless steel and aluminum.

  10. INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori

    2007-04-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported at the International Conference on Nuclear Data for Science and Technology, ND-2004, in Santa Fe, New Mexico. Since that time the number and type of integral benchmarks have increased significantly. Included in the ICSBEP Handbook are criticality-alarm/shielding and fundamental physics benchmarks in addition to the traditional critical/subcritical benchmark data. Since ND-2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. The IRPhEP is patterned after the ICSBEP, but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous measurements, in addition to the critical configuration. The status of these two projects is discussed and selected benchmarks are highlighted in this paper.

  11. Comparison of typical inelastic analysis predictions with benchmark problem experimental results

    International Nuclear Information System (INIS)

    Clinard, J.A.; Corum, J.M.; Sartory, W.K.

    1975-01-01

    The results of exemplary inelastic analyses for experimental benchmark problems on reactor components are presented. Consistent analytical procedures and constitutive relations were used in each of the analyses, and the material behavior data presented in the Appendix were used in all cases. Two finite-element inelastic computer programs were employed. These programs implement the analysis procedures and constitutive equations for type 304 stainless steel that are currently used in many analyses of elevated-temperature nuclear reactor system components. The analysis procedures and constitutive relations are briefly discussed, and representative analytical results are presented and compared to the test data. The results that are presented demonstrate the feasibility of performing inelastic analyses for the types of problems discussed, and they are indicative of the general level of agreement that the analyst might expect when using conventional inelastic analysis procedures. (U.S.)

  12. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  13. STACY and TRACY: nuclear criticality experimental facilities under construction

    International Nuclear Information System (INIS)

    Kobayashi, I.; Takeshita, I.; Yanagisawa, H.; Tsujino, T.

    1992-01-01

    Japan Atomic Energy Research Institute is constructing the Nuclear Fuel Cycle Safety Engineering Research Facility (NUCEF), where the following research themes, essential for evaluating safety problems relating to back-end technology in nuclear fuel cycle facilities, will be studied: nuclear criticality safety research; research on advanced reprocessing processes and partitioning; and research on transuranic waste treatment and disposal. To perform nuclear criticality safety research related to the reprocessing of light water reactor spent fuels, two criticality experimental facilities, STACY and TRACY, are under construction. STACY (Static Criticality Facility) will be used for the study of criticality conditions of solution fuels: uranium, plutonium and their mixtures. TRACY (Transient Criticality Facility) will be used to investigate criticality accident phenomena with uranium solutions. The construction progress and experimental programmes are described in this paper. (author)

  14. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-03-01

    The purpose of this research is to provide a fundamental computational investigation into the possible integration of experimental activities with the Advanced Test Reactor Critical (ATR-C) facility with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of integral data for improving neutron cross sections. Further assessment of oscillation
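The abstract above describes sizing material samples so that the inserted reactivity stays below a 1$ limit. As a hedged sketch of that bookkeeping (the k_eff and β_eff numbers below are invented for illustration, not ATR-C data), the reactivity between two measured states and its worth in dollars can be computed as:

```python
# Hedged sketch: reactivity change between a reference and a perturbed
# critical state, expressed in dollars. The standard definition is
# rho = (k2 - k1) / (k1 * k2); dividing by beta_eff gives dollars.
# All numerical values here are hypothetical, NOT ATR-C measurements.
def reactivity_dollars(k_ref, k_pert, beta_eff):
    """Reactivity worth between two k_eff states, in dollars."""
    rho = (k_pert - k_ref) / (k_ref * k_pert)
    return rho / beta_eff

# Hypothetical example: a sample raises k_eff from 1.00000 to 1.00350.
worth = reactivity_dollars(1.00000, 1.00350, beta_eff=0.0065)
print(f"sample worth = {worth:+.2f} $")
assert worth <= 1.0  # the proposed experiments cap insertions at 1$
```

Stacking additional bars of a material multiplies the worth roughly linearly at small insertions, which is why the abstract limits the assembly to about four bars for most materials.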

  15. OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results

    International Nuclear Information System (INIS)

    DeHart, M.D.

    1993-01-01

    Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods are in agreement to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods are within 11% agreement about the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are ¹⁴⁹Sm, ¹⁵¹Sm, and ¹⁵⁵Gd.
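The intercomparison statistic referred to above is the percent deviation of each participant's predicted nuclide concentration from the average of all submissions. A minimal sketch of that comparison (the concentration values below are invented placeholders, not Phase I-B results):

```python
# Sketch of the Phase I-B style intercomparison: each submission's
# predicted concentration is compared against the mean of all
# submissions. The numbers are hypothetical, NOT benchmark results.
def deviations_from_mean(values):
    """Percent deviation of each value from the group mean."""
    mean = sum(values) / len(values)
    return [100.0 * (v - mean) / mean for v in values]

# Hypothetical predicted concentrations for one actinide (arbitrary units).
predictions = [0.512, 0.498, 0.505, 0.520, 0.490]
devs = deviations_from_mean(predictions)
print(["%+.1f%%" % d for d in devs])

# "In agreement to within 10%" means all deviations fall inside +/-10%.
assert all(abs(d) < 10.0 for d in devs)
```

The same check applied nuclide-by-nuclide is what flags the outliers (here, the samarium and gadolinium isotopes) in the report's conclusions.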

  16. Definition and Analysis of Heavy Water Reactor Benchmarks for Testing New Wims-D Libraries

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2000-01-01

    This work is part of the IAEA-WIMS Library Update Project (WLUP). A group of heavy water reactor benchmarks has been selected for testing new WIMS-D libraries, including calculations with the WIMSD5B program and the analysis of results. These benchmarks cover a wide variety of reactors and conditions, from fresh fuels to high burnup, and from natural to enriched uranium. Besides, each benchmark includes variations in lattice pitch and in coolants (normally heavy water and void). Multiplication factors with critical experimental bucklings and other parameters are calculated and compared with experimental reference values. The WIMS libraries used for the calculations were generated with basic data from JEF-2.2 Rev. 3 (JEF) and ENDF/B-VI Release 5 (E6). Results obtained with the WIMS-86 (W86) library, included with the WIMSD5B package, from Winfrith, UK, with adjusted data, are also included to show the improvements obtained with the new, unadjusted libraries. The calculations with WIMSD5B were made with two methods (input program options): PIJ (two-dimensional collision probability method) and DSN (one-dimensional Sn method, with homogenization of materials by ring). The general conclusions are: the library based on JEF data and the DSN method give the best results, which on average are acceptable.

  17. Definition and Analysis of Heavy Water Reactor Benchmarks for Testing New Wims-D Libraries; Definicion y Analisis de Benchmarks de Reactores de Agua Pesada para Pruebas de Nuevas Bibliotecas de Datos Wims-D

    Energy Technology Data Exchange (ETDEWEB)

    Leszczynski, Francisco [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    This work is part of the IAEA-WIMS Library Update Project (WLUP). A group of heavy water reactor benchmarks has been selected for testing new WIMS-D libraries, including calculations with the WIMSD5B program and the analysis of results. These benchmarks cover a wide variety of reactors and conditions, from fresh fuels to high burnup, and from natural to enriched uranium. Besides, each benchmark includes variations in lattice pitch and in coolants (normally heavy water and void). Multiplication factors with critical experimental bucklings and other parameters are calculated and compared with experimental reference values. The WIMS libraries used for the calculations were generated with basic data from JEF-2.2 Rev. 3 (JEF) and ENDF/B-VI Release 5 (E6). Results obtained with the WIMS-86 (W86) library, included with the WIMSD5B package, from Winfrith, UK, with adjusted data, are also included to show the improvements obtained with the new, unadjusted libraries. The calculations with WIMSD5B were made with two methods (input program options): PIJ (two-dimensional collision probability method) and DSN (one-dimensional Sn method, with homogenization of materials by ring). The general conclusions are: the library based on JEF data and the DSN method give the best results, which on average are acceptable.

  18. Benchmark calculations by KENO-Va using the JEF 2.2 library

    Energy Technology Data Exchange (ETDEWEB)

    Markova, L.

    1994-12-01

    This work is a contribution to the validation of the JEF-2.2 neutron cross-section library, following earlier published benchmark calculations performed to validate the previous version, JEF-1.1, of the library. Several simple calculational problems and one experimental problem were chosen for criticality calculations. In addition, a realistic hexagonal arrangement of VVER-440 fuel assemblies in a spent fuel cask was analyzed in a partly cylindrized model. All criticality calculations, carried out by the KENO-Va code using the JEF-2.2 neutron cross-section library in 172 energy groups, resulted in multiplication factors (k_eff) which were tabulated and compared with the results of other available calculations of the same problems. (orig.).

  19. Experimental study of low-quality critical two-phase flows

    International Nuclear Information System (INIS)

    Seynhaeve, Jean-Marie

    1980-02-01

    This engineering degree report addresses the analysis of two-phase critical flows obtained by expansion of a saturated or subcooled liquid. For a steam quality greater than 0.1, theoretical studies give a rather good prediction of critical flow rates, whereas at lower quality the results of published studies display some discrepancies; the test-duct geometry and significant non-equilibrium between the phases appear to be at the origin of these discrepancies. In order to study these origins of discrepancy, three test campaigns were performed: on a test duct provided by the CENG, on two long tubes, and on holes. Thus, after a bibliographical study which outlines the drawbacks of previous studies, the author gives a detailed description of the experimental installations (creation of critical flows, measurement chain, measurement processing, measurement device calibration, quality and precision). Experimental results are then systematically explored, and differences are explained. The author then addresses the theoretical aspect of the determination of critical flow rates by reviewing calculation models and comparing their results with experimental results. The validity of each model is thus discussed. The author then proposes a calculation model applicable to critical flows developed in holes. This model is notably inspired by the experimental conclusions and gives very satisfactory practical results.
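For flows through short holes of the kind studied above, a common textbook baseline (not the author's model, which is not reproduced in this record) treats the initially subcooled liquid as discharging through an orifice with the driving pressure limited by flashing at the saturation pressure. A minimal sketch under that assumption, with an assumed discharge coefficient and state values:

```python
# Generic textbook estimate for flashing-limited discharge of a
# subcooled liquid through a sharp-edged hole: the effective driving
# pressure is (P0 - Psat). This is an illustrative baseline, NOT the
# model proposed in the report; Cd and all state values are assumed.
import math

def critical_mass_flux(p0, p_sat, rho_liquid, cd=0.61):
    """Approximate mass flux [kg/m^2/s] through a short orifice."""
    return cd * math.sqrt(2.0 * rho_liquid * (p0 - p_sat))

# Assumed conditions: 10 bar stagnation, 4 bar saturation, hot water.
G = critical_mass_flux(p0=10e5, p_sat=4e5, rho_liquid=915.0)
print(f"G ~ {G:,.0f} kg/m2/s")
```

Models of this family tend to work best for short geometries (holes), where the residence time is too small for phase equilibrium to establish, which is consistent with the non-equilibrium effects discussed in the report.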

  20. A review of the current state-of-the-art methodology for handling bias and uncertainty in performing criticality safety evaluations. Final report

    International Nuclear Information System (INIS)

    Disney, R.K.

    1994-10-01

    The methodology for handling bias and uncertainty when calculational methods are used in criticality safety evaluations (CSEs) is a rapidly evolving technology. The changes in the methodology are driven by a number of factors. One factor responsible for changes in the methodology for handling bias and uncertainty in CSEs within the overview of the US Department of Energy (DOE) is a shift in the overview function from a 'site' perception to a more uniform or 'national' perception. Other causes for change or improvement in the methodology for handling calculational bias and uncertainty are: (1) an increased demand for benchmark criticals data to expand the area (range) of applicability of existing data, (2) a demand for new data to supplement existing benchmark criticals data, (3) the increased reliance on (or need for) computational benchmarks which supplement (or replace) experimental measurements in critical assemblies, and (4) an increased demand for benchmark data applicable to the expanded range of conditions and configurations encountered in DOE site restoration and remediation.

  1. IRPhEP-handbook, International Handbook of Evaluated Reactor Physics Benchmark Experiments

    International Nuclear Information System (INIS)

    Sartori, Enrico; Blair Briggs, J.

    2008-01-01

    experimental series that were performed at 17 different reactor facilities. The Handbook is organized in a manner that allows easy inclusion of additional evaluations, as they become available. Additional evaluations are in progress and will be added to the handbook periodically. Content: FUND - Fundamental; GCR - Gas Cooled (Thermal) Reactor; HWR - Heavy Water Moderated Reactor; LMFR - Liquid Metal Fast Reactor; LWR - Light Water Moderated Reactor; PWR - Pressurized Water Reactor; VVER - VVER Reactor; Evaluations published as drafts. Related Information: International Criticality Safety Benchmark Evaluation Project (ICSBEP); IRPHE/B and W-SS-LATTICE, Spectral Shift Reactor Lattice Experiments; IRPHE-JAPAN, Reactor Physics Experiments carried out in Japan; IRPHE/JOYO MK-II, JOYO MK-II core management and characteristics database; IRPhE/RRR-SEG, Reactor Physics Experiments from Fast-Thermal Coupled Facility; IRPHE-SNEAK, KFK SNEAK Fast Reactor Experiments, Primary Documentation; IRPhE/STEK, Reactor Physics Experiments from Fast-Thermal Coupled Facility; IRPHE-ZEBRA, AEEW Fast Reactor Experiments, Primary Documentation; IRPHE-DRAGON-DPR, OECD High Temperature Reactor Dragon Project, Primary Documents; IRPHE-ARCH-01, Archive of HTR Primary Documents; IRPHE/AVR, AVR High Temperature Reactor Experience, Archival Documentation; IRPHE-KNK-II-ARCHIVE, KNK-II fast reactor documents, power history and measured parameters; IRPhE/BERENICE, effective delayed neutron fraction measurements; IRPhE-TAPIRO-ARCHIVE, fast neutron source reactor primary documents, reactor physics experiments. The International Handbook of Evaluated Reactor Physics Benchmark Experiments was prepared by a working party comprised of experienced reactor physics personnel from Belgium, Brazil, Canada, P.R. of China, Germany, Hungary, Japan, Republic of Korea, Russian Federation, Switzerland, United Kingdom, and the United States of America.
The IRPhEP Handbook is available to authorised requesters from the

  2. OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, M.D.

    1993-01-01

    Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods are in agreement to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods are within 11% agreement about the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are ¹⁴⁹Sm, ¹⁵¹Sm, and ¹⁵⁵Gd.

  3. Benchmarking Is Associated With Improved Quality of Care in Type 2 Diabetes

    Science.gov (United States)

    Hermans, Michel P.; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-01-01

    OBJECTIVE To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. RESEARCH DESIGN AND METHODS Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. RESULTS Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. The primary end point of the HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% of patients met the SBP target, in favor of the benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%). CONCLUSIONS Benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile. PMID:23846810

  4. ADS experimental benchmarks of VENUS-1 in China

    International Nuclear Information System (INIS)

    Xia Haihong; Xia Pu; Han Yinlu

    2013-01-01

    This report describes calculations with four nuclear data libraries for the China ADS VENUS-1 subcritical facility, all performed with the same calculation code, the Monte Carlo code MCNP-5. The libraries are ENDF/B-VI.6, ENDF/B-VII, CENDL-3.1 and Library ADS 2.0. Evaluated are the results for k_eff, k_p, Λ, l_p and β_eff for four thermal fuel configurations, as well as the total neutron flux, the neutron flux distributions and the neutron spectra in the experimental channel for two thermal fuel configurations driven by an external neutron source (D-D and D-T sources). (J.P.N.)

  5. OECD/NRC Benchmark Based on NUPEC PWR Sub-channel and Bundle Test (PSBT). Volume I: Experimental Database and Final Problem Specifications

    International Nuclear Information System (INIS)

    Rubin, A.; Schoedel, A.; Avramova, M.; Utsuno, H.; Bajorek, S.; Velazquez-Lozada, A.

    2012-01-01

    The need to refine models for best-estimate calculations, based on good-quality experimental data, has been expressed in many recent meetings in the field of nuclear applications. The needs arising in this respect should not be limited to the currently available macroscopic methods but should be extended to next-generation analysis techniques that focus on more microscopic processes. One of the most valuable databases identified for the thermal-hydraulics modelling was developed by the Nuclear Power Engineering Corporation (NUPEC), Japan, which includes sub-channel void fraction and departure from nucleate boiling (DNB) measurements in a representative Pressurised Water Reactor (PWR) fuel assembly. Part of this database has been made available for this international benchmark activity entitled 'NUPEC PWR Sub-channel and Bundle Tests (PSBT) benchmark'. This international project has been officially approved by the Japanese Ministry of Economy, Trade, and Industry (METI), the US Nuclear Regulatory Commission (NRC) and endorsed by the OECD/NEA. The benchmark team has been organised based on the collaboration between Japan and the USA. A large number of international experts have agreed to participate in this programme. The fine-mesh high-quality sub-channel void fraction and departure from nucleate boiling data encourages advancement in understanding and modelling complex flow behaviour in real bundles. Considering that the present theoretical approach is relatively immature, the benchmark specification is designed so that it will systematically assess and compare the participants' analytical models on the prediction of detailed void distributions and DNB. The development of truly mechanistic models for DNB prediction is currently underway. The benchmark problem includes both macroscopic and microscopic measurement data. In this context, the sub-channel grade void fraction data are regarded as the macroscopic data and the digitised computer graphic images are the

  6. Benchmarking the new JENDL-4.0 library on criticality experiments of a research reactor with oxide LEU (20 w/o) fuel, light water moderator and beryllium reflectors

    International Nuclear Information System (INIS)

    Liem, Peng Hong; Sembiring, Tagor Malem

    2012-01-01

    Highlights: ► Benchmark calculations of the new JENDL-4.0 library. ► Thermal research reactor with oxide LEU fuel, H₂O moderator and Be reflector. ► JENDL-4.0 library shows better C/E values for criticality evaluations. - Abstract: Benchmark calculations of the new JENDL-4.0 library on the criticality experiments of a thermal research reactor with oxide low-enriched uranium (LEU, 20 w/o) fuel, light water moderator and beryllium reflector (RSG GAS) have been conducted using a continuous-energy Monte Carlo code, MVP-II. The JENDL-4.0 library shows better C/E values compared to the former library JENDL-3.3 and other widely used recent libraries (ENDF/B-VII.0 and JEFF-3.1).
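The C/E figure of merit used above is simply the calculated value divided by the experimental value; for a measured critical configuration the experimental k_eff is 1 by definition, so the library whose C/E is closest to unity performs best. A minimal sketch of that comparison (the k_eff values below are invented placeholders, not the RSG GAS results):

```python
# Sketch of a C/E (calculated-over-experimental) comparison for a
# critical configuration. The calculated k_eff values are hypothetical
# placeholders, NOT the benchmark results from this record.
experimental_keff = 1.0000  # critical experiment: k_eff = 1 by definition

calculated_keff = {
    "JENDL-4.0":    0.9987,  # hypothetical
    "JENDL-3.3":    0.9952,  # hypothetical
    "ENDF/B-VII.0": 0.9968,  # hypothetical
}

ce = {lib: k / experimental_keff for lib, k in calculated_keff.items()}
best = min(ce, key=lambda lib: abs(ce[lib] - 1.0))
print(f"best C/E: {best} ({ce[best]:.4f})")
```

Averaging |C/E - 1| over many configurations is the usual way such a benchmark ranks libraries against one another.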

  7. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump (GHP) programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  8. Benefits of the delta K of depletion benchmarks for burnup credit validation

    International Nuclear Information System (INIS)

    Lancaster, D.; Machiels, A.

    2012-01-01

    Pressurized Water Reactor (PWR) burnup credit validation is demonstrated using the benchmarks for quantifying fuel reactivity decrements published as 'Benchmarks for Quantifying Fuel Reactivity Depletion Uncertainty,' EPRI Report 1022909 (August 2011). This demonstration uses the depletion module TRITON available in the SCALE 6.1 code system, followed by criticality calculations using KENO-Va. The difference between the predicted depletion reactivity and the benchmark's depletion reactivity is a bias for the criticality calculations, and the uncertainty in the benchmarks is the depletion reactivity uncertainty. This depletion bias and uncertainty are used with the bias and uncertainty from fresh UO2 critical experiments to determine the criticality safety limits on the neutron multiplication factor, keff. The analysis shows that SCALE 6.1 with the ENDF/B-VII 238-group cross section library supports the use of a depletion bias of only 0.0015 in delta k if cooling is ignored and 0.0025 if cooling is credited. The uncertainty in the depletion bias is 0.0064. Reliance on the ENDF/B-V cross section library produces much larger disagreement with the benchmarks. The analysis covers numerous combinations of depletion and criticality options. In all cases, the historical uncertainty of 5% of the delta k of depletion (the 'Kopp memo') was shown to be conservative for fuel with more than 30 GWD/MTU burnup. Since this historically assumed uncertainty is not a function of burnup, the Kopp memo's recommended bias and uncertainty may be exceeded at low burnups, but its absolute magnitude there is small. (authors)
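The limit-setting arithmetic the abstract describes, folding a depletion bias and uncertainty together with fresh-fuel critical-experiment bias and uncertainty into an upper safety limit on keff, can be sketched as follows. This is a minimal illustration, not the report's method: only the depletion bias (0.0015) and uncertainty (0.0064) come from the abstract, the fresh-fuel values and administrative margin are hypothetical, and combining uncertainties in quadrature is simply one common convention.

```python
import math

# Illustrative sketch (not the report's method): folding a depletion
# reactivity bias and uncertainty into an upper safety limit on keff.
# Only the depletion bias (0.0015) and uncertainty (0.0064) come from
# the abstract; the fresh-fuel values and the administrative margin are
# hypothetical. Uncertainties are combined in quadrature here, which is
# one common convention.

def keff_limit(fresh_bias, fresh_unc, depl_bias, depl_unc,
               admin_margin=0.05, k_nominal=1.0):
    """Upper safety limit: nominal keff minus the administrative margin,
    the summed biases, and the quadrature-combined uncertainties."""
    total_bias = fresh_bias + depl_bias
    total_unc = math.sqrt(fresh_unc ** 2 + depl_unc ** 2)
    return k_nominal - admin_margin - total_bias - total_unc

limit = keff_limit(fresh_bias=0.005, fresh_unc=0.004,   # hypothetical
                   depl_bias=0.0015, depl_unc=0.0064)   # from abstract
print(f"upper safety limit on keff: {limit:.4f}")
```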

  9. Benchmark experiments to test plutonium and stainless steel cross sections. Topical report

    International Nuclear Information System (INIS)

    Jenquin, U.P.; Bierman, S.R.

    1978-06-01

    The Nuclear Regulatory Commission (NRC) commissioned Battelle, Pacific Northwest Laboratory (PNL) to ascertain the accuracy of the neutron cross sections for the isotopes of plutonium and the constituents of stainless steel and to determine if improvements can be made in their application to criticality safety analysis. NRC's particular area of interest is the transportation of light-water reactor spent fuel assemblies. The project was divided into two tasks. The first task was to define a set of integral experimental measurements (benchmarks). The second task is to use these benchmarks in neutronics calculations such that the accuracy of ENDF/B-IV plutonium and stainless steel cross sections can be assessed. The results of the first task are given in this report. A set of integral experiments most pertinent to testing the cross sections has been identified, and the code input data for calculating each experiment have been developed.

  10. Validation of the WIMSD4M cross-section generation code with benchmark results

    International Nuclear Information System (INIS)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  11. Validation of the WIMSD4M cross-section generation code with benchmark results

    Energy Technology Data Exchange (ETDEWEB)

    Deen, J.R.; Woodruff, W.L. [Argonne National Lab., IL (United States); Leal, L.E. [Oak Ridge National Lab., TN (United States)

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  12. Benchmarks of subcriticality in accelerator-driven system at Kyoto University Critical Assembly

    Directory of Open Access Journals (Sweden)

    Cheol Ho Pyeon

    2017-09-01

    Basic research on the accelerator-driven system is conducted by combining 235U-fueled and 232Th-loaded cores in the Kyoto University Critical Assembly with the pulsed neutron generator (14 MeV neutrons) and the proton beam accelerator (100 MeV protons) with a heavy-metal target. Experimental subcriticality results are presented over a wide range of subcriticality levels, from near critical to 10,000 pcm, as obtained by the pulsed neutron source method, the Feynman-α method, and the neutron source multiplication method.
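The subcriticality methods named in the abstract can be illustrated with a minimal sketch of the Feynman-α approach: a prompt-decay constant alpha fitted from the variance-to-mean curve Y(T) is converted to reactivity through the point-kinetics relation rho = beta_eff - alpha * Lambda. All parameter values below are hypothetical, not measurements from the KUCA experiments.

```python
import math

# Illustrative sketch (not from the record): the Feynman-alpha method
# fits a prompt-decay constant alpha to the variance-to-mean curve Y(T)
# and converts it to reactivity via point kinetics. All parameter
# values are hypothetical, not KUCA measurements.

def feynman_Y(T, alpha, amplitude):
    """Excess variance-to-mean ratio for a counting gate of width T (s)."""
    return amplitude * (1.0 - (1.0 - math.exp(-alpha * T)) / (alpha * T))

def reactivity_from_alpha(alpha, beta_eff, Lambda):
    """Point-kinetics relation rho = beta_eff - alpha * Lambda."""
    return beta_eff - alpha * Lambda

alpha = 250.0      # 1/s, hypothetical fitted decay constant
beta_eff = 0.0078  # hypothetical effective delayed neutron fraction
Lambda = 4.0e-5    # s, hypothetical neutron generation time

rho = reactivity_from_alpha(alpha, beta_eff, Lambda)
print(f"reactivity = {rho:.4f} dk/k = {rho / beta_eff:.2f} $"
      f" = {rho * 1e5:.0f} pcm")
```

A negative rho (here about -220 pcm) indicates a subcritical state, consistent with the range of subcriticality levels quoted in the abstract.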

  13. Numisheet2005 Benchmark Analysis on Forming of an Automotive Deck Lid Inner Panel: Benchmark 1

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    Numerical simulation of sheet metal forming processes has been a very challenging topic in industry. Many computer codes and modeling techniques exist today; however, there are many unknowns affecting prediction accuracy. Systematic benchmark tests are needed to accelerate future implementations and to serve as a reference. This report presents an international cooperative benchmark effort for an automotive deck lid inner panel. Predictions from simulations are analyzed and discussed against the corresponding experimental results, and correlations between the accuracies of the parameters of interest are discussed in this report.

  14. Benchmark models, planes, lines and points for future SUSY searches at the LHC

    International Nuclear Information System (INIS)

    AbdusSalam, S.S.; Allanach, B.C.; Dreiner, H.K.

    2012-03-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  15. Benchmark models, planes, lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  16. Benchmark Models, Planes, Lines and Points for Future SUSY Searches at the LHC

    CERN Document Server

    AbdusSalam, S S; Dreiner, H K; Ellis, J; Ellwanger, U; Gunion, J; Heinemeyer, S; Krämer, M; Mangano, M L; Olive, K A; Rogerson, S; Roszkowski, L; Schlaffer, M; Weiglein, G

    2011-01-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  17. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-07-01

    The purpose of this document is to identify some suggested types of experiments that can be performed in the Advanced Test Reactor Critical (ATR-C) facility. A fundamental computational investigation is provided to demonstrate possible integration of experimental activities in the ATR-C with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of
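The reactivity-worth bookkeeping this record describes can be sketched as follows: the worth of a material sample is estimated from two keff calculations and expressed in dollars against the 1$ insertion limit mentioned in the abstract. The keff and beta_eff values are hypothetical, chosen only so the arithmetic lands near the "approximately four bars" figure quoted above.

```python
# Illustrative sketch (not from the report): estimating a sample's
# reactivity worth from two keff calculations and expressing it in
# dollars against a 1$ insertion limit. The keff and beta_eff values
# are hypothetical.

def reactivity_worth(keff_with, keff_without):
    """rho = (k2 - k1) / (k1 * k2), in dk/k."""
    return (keff_with - keff_without) / (keff_with * keff_without)

beta_eff = 0.0072     # hypothetical effective delayed neutron fraction
k_base = 0.99750      # hypothetical core keff without the sample
k_sample = 0.99929    # hypothetical core keff with one material bar

dollars_per_bar = reactivity_worth(k_sample, k_base) / beta_eff
n_bars_max = int(1.0 // dollars_per_bar)  # bars allowed under a 1$ limit
print(f"worth per bar = {dollars_per_bar:.3f} $, "
      f"max bars under 1$: {n_bars_max}")
```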

  18. Three-dimensional RAMA fluence methodology benchmarking

    International Nuclear Information System (INIS)

    Baker, S. P.; Carter, R. G.; Watkins, K. E.; Jones, D. B.

    2004-01-01

    This paper describes the benchmarking of the RAMA Fluence Methodology software, which was performed in accordance with U. S. Nuclear Regulatory Commission Regulatory Guide 1.190. The RAMA Fluence Methodology has been developed by TransWare Enterprises Inc. through funding provided by the Electric Power Research Inst., Inc. (EPRI) and the Boiling Water Reactor Vessel and Internals Project (BWRVIP). The purpose of the software is to provide an accurate method for calculating neutron fluence in BWR pressure vessels and internal components. The Methodology incorporates a three-dimensional deterministic transport solution with flexible arbitrary-geometry representation of reactor system components, previously available only with Monte Carlo solution techniques. Benchmarking was performed on measurements obtained from three standard benchmark problems, which include the Pool Criticality Assembly (PCA), VENUS-3, and H. B. Robinson Unit 2 benchmarks, and on flux wire measurements obtained from two BWR nuclear plants. The calculated-to-measured (C/M) ratios range from 0.93 to 1.04, demonstrating the accuracy of the RAMA Fluence Methodology in predicting neutron flux, fluence, and dosimetry activation. (authors)

  19. Nuclear Criticality Experimental Research Center (NCERC) Overview

    Energy Technology Data Exchange (ETDEWEB)

    Goda, Joetta Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Grove, Travis Justin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hayes, David Kirk [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Myers, William L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sanchez, Rene Gerardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-03

    The mission of the National Criticality Experiments Research Center (NCERC) at the Device Assembly Facility (DAF) is to conduct experiments and training with critical assemblies and fissionable material at or near criticality in order to explore reactivity phenomena, and to operate the assemblies in the regions from subcritical through delayed critical. One critical assembly, Godiva-IV, is designed to operate above prompt critical. NCERC is our nation's only general-purpose critical experiments facility and is one of only a few that remain operational throughout the world. This presentation discusses the history of NCERC, the general activities that make up work at NCERC, and the various government programs and missions that NCERC supports. Recent activities at NCERC will be reviewed, with a focus on demonstrating how NCERC meets national security mission goals using engineering fundamentals. In particular, there will be a focus on engineering theory and design and applications of engineering fundamentals at NCERC. NCERC activities that relate to engineering education will also be examined.

  20. Test One to Test Many: A Unified Approach to Quantum Benchmarks

    Science.gov (United States)

    Bai, Ge; Chiribella, Giulio

    2018-04-01

    Quantum benchmarks are routinely used to validate the experimental demonstration of quantum information protocols. Many relevant protocols, however, involve an infinite set of input states, of which only a finite subset can be used to test the quality of the implementation. This is a problem, because the benchmark for the finitely many states used in the test can be higher than the original benchmark calculated for infinitely many states. This situation arises in the teleportation and storage of coherent states, for which the benchmark of 50% fidelity is commonly used in experiments, although finite sets of coherent states normally lead to higher benchmarks. Here, we show that the average fidelity over all coherent states can be indirectly probed with a single setup, requiring only two-mode squeezing, a 50-50 beam splitter, and homodyne detection. Our setup enables a rigorous experimental validation of quantum teleportation, storage, amplification, attenuation, and purification of noisy coherent states. More generally, we prove that every quantum benchmark can be tested by preparing a single entangled state and measuring a single observable.

  1. Data base of reactor physics experimental results in Kyoto University critical assembly experimental facilities

    International Nuclear Information System (INIS)

    Ichihara, Chihiro; Fujine, Shigenori; Hayashi, Masatoshi

    1986-01-01

    The Kyoto University critical assembly experimental facilities belong to the Kyoto University Research Reactor Institute and constitute a versatile critical assembly constructed for experimental studies of reactor physics and reactor engineering. The facilities are available for common use by universities throughout Japan. During the more than ten years since the initial criticality in 1974, various experiments on reactor physics and reactor engineering have been carried out using many experimental facilities, such as two solid-moderated cores, a light water-moderated core and a neutron generator. The experiments carried out were diverse, and finding the required data among them is very troublesome; accordingly, it became necessary to build a computer-searchable data base from the data accumulated over the past more than ten years. The outline of the data base, the data base CAEX using personal computers, the data base supported by a large computer and so on are reported. (Kako, I.)

  2. Overview of Experiments for Physics of Fast Reactors from the International Handbooks of Evaluated Criticality Safety Benchmark Experiments and Evaluated Reactor Physics Benchmark Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bess, J. D.; Briggs, J. B.; Gulliford, J.; Ivanova, T.; Rozhikhin, E. V.; Semenov, M. Yu.; Tsibulya, A. M.; Koscheev, V. N.

    2017-07-01

    The study of the physics of fast reactors has traditionally relied on the experiments presented in the handbook of the Cross Section Evaluation Working Group (ENDF-202), issued by Brookhaven National Laboratory in 1974, which presents simplified homogeneous models of experiments together with the relevant experimental data, as amended. The Nuclear Energy Agency of the Organisation for Economic Cooperation and Development coordinates the activities of two international projects on the collection, evaluation and documentation of experimental data: the International Criticality Safety Benchmark Evaluation Project (since 1994) and the International Reactor Physics Experiment Evaluation Project (since 2005). These projects produce the international handbooks of critical (ICSBEP Handbook) and reactor (IRPhEP Handbook) experiments, which are updated every year. The handbooks present detailed models of the experiments with minimal amendments; such models are of particular interest for modern calculation codes. The handbooks contain a large number of experiments suitable for the study of the physics of fast reactors. Many of these experiments were performed at specialized critical facilities, such as BFS (Russia), ZPR and ZPPR (USA), and ZEBRA (UK), and at the experimental reactors JOYO (Japan) and FFTF (USA). Other experiments, such as compact metal assemblies, are also of interest in terms of the physics of fast reactors; they were carried out on general-purpose critical facilities at Russian institutes (VNIITF and VNIIEF) and in the US (LANL, LLNL, and others).
Also worth mentioning

  3. Benchmark analysis of MCNP™ ENDF/B-VI iron

    International Nuclear Information System (INIS)

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data were subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark, and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets.

  4. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their skill in simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure the performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluating land model performance and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate the exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and the land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on the development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
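One way to realize the kind of scoring system the abstract calls for is a weighted combination of normalized RMS data-model mismatches mapped onto a 0-1 skill score. The sketch below is an illustration under stated assumptions, not the paper's metric; the variables, weights, and data are placeholders.

```python
import math

# Illustrative sketch (not the paper's metric): one possible scoring
# system combining normalized RMS data-model mismatches across
# variables into a single 0-1 skill score. Variables, weights, and
# data are placeholders.

def nrmse(model, obs):
    """Root-mean-square error normalized by the observation mean."""
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))
    return rmse / (sum(obs) / len(obs))

def skill_score(mismatches, weights):
    """Map the weighted mean mismatch to (0, 1]; 1.0 means no mismatch."""
    mean = sum(w * m for w, m in zip(weights, mismatches)) / sum(weights)
    return math.exp(-mean)

obs_gpp = [2.1, 2.5, 3.0, 2.8]   # placeholder carbon-flux observations
mod_gpp = [2.0, 2.7, 2.9, 3.1]   # placeholder model output
obs_le = [80.0, 95.0, 110.0]     # placeholder latent-heat observations
mod_le = [85.0, 90.0, 120.0]     # placeholder model output

mismatches = [nrmse(mod_gpp, obs_gpp), nrmse(mod_le, obs_le)]
score = skill_score(mismatches, weights=[1.0, 1.0])
print(f"skill score = {score:.3f}")
```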

  5. Reflections on Critical Thinking: Lessons from a Quasi-Experimental Study

    Science.gov (United States)

    Grussendorf, Jeannie; Rogol, Natalie C.

    2018-01-01

    In a pre/post quasi-experimental study assessing the impact of a specific curriculum on critical thinking, the authors employed a critical thinking curriculum in two sections of a U.S. foreign policy class. The authors found that the interactive and scaffolded critical thinking curriculum yielded statistically significant critical thinking…

  6. Experimental Benchmarking of Fire Modeling Simulations. Final Report

    International Nuclear Information System (INIS)

    Greiner, Miles; Lopez, Carlos

    2003-01-01

    A series of large-scale fire tests were performed at Sandia National Laboratories to simulate a nuclear waste transport package under severe accident conditions. The test data were used to benchmark and adjust the Container Analysis Fire Environment (CAFE) computer code. CAFE is a computational fluid dynamics fire model that accurately calculates the heat transfer from a large fire to a massive engulfed transport package. CAFE will be used in transport package design studies and risk analyses

  7. Educating Next Generation Nuclear Criticality Safety Engineers at the Idaho National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    J. D. Bess; J. B. Briggs; A. S. Garcia

    2011-09-01

    One of the challenges in educating our next generation of nuclear safety engineers is the limitation of opportunities to receive significant experience or hands-on training prior to graduation. Such training is generally restricted to on-the-job training before this new engineering workforce can adequately provide assessment of nuclear systems and establish safety guidelines. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) can provide students and young professionals the opportunity to gain experience and enhance critical engineering skills. The ICSBEP and IRPhEP publish annual handbooks that contain evaluations of experiments along with summarized experimental data and peer-reviewed benchmark specifications to support the validation of neutronics codes, nuclear cross-section data, and reactor designs. Participation in the benchmark process not only benefits those who use these handbooks within the international community, but provides the individual with opportunities for professional development, networking with an international community of experts, and valuable experience to be used in future employment. Traditionally, students have participated in benchmarking activities via internships at national laboratories, universities, or companies involved with the ICSBEP and IRPhEP programs. Additional programs have been developed to facilitate the nuclear education of students while participating in the benchmark projects. These programs include coordination with the Center for Space Nuclear Research (CSNR) Next Degree Program, collaboration with the Department of Energy Idaho Operations Office to train nuclear and criticality safety engineers, and student evaluations serving as the basis for Master's theses in nuclear engineering.

  8. Educating Next Generation Nuclear Criticality Safety Engineers at the Idaho National Laboratory

    International Nuclear Information System (INIS)

    Bess, J.D.; Briggs, J.B.; Garcia, A.S.

    2011-01-01

    One of the challenges in educating our next generation of nuclear safety engineers is the limitation of opportunities to receive significant experience or hands-on training prior to graduation. Such training is generally restricted to on-the-job training before this new engineering workforce can adequately provide assessment of nuclear systems and establish safety guidelines. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) can provide students and young professionals the opportunity to gain experience and enhance critical engineering skills. The ICSBEP and IRPhEP publish annual handbooks that contain evaluations of experiments along with summarized experimental data and peer-reviewed benchmark specifications to support the validation of neutronics codes, nuclear cross-section data, and reactor designs. Participation in the benchmark process not only benefits those who use these handbooks within the international community, but provides the individual with opportunities for professional development, networking with an international community of experts, and valuable experience to be used in future employment. Traditionally, students have participated in benchmarking activities via internships at national laboratories, universities, or companies involved with the ICSBEP and IRPhEP programs. Additional programs have been developed to facilitate the nuclear education of students while participating in the benchmark projects. These programs include coordination with the Center for Space Nuclear Research (CSNR) Next Degree Program, collaboration with the Department of Energy Idaho Operations Office to train nuclear and criticality safety engineers, and student evaluations serving as the basis for Master's theses in nuclear engineering.

  9. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2012-01-01

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such

  10. Evaluation of the concrete shield compositions from the 2010 criticality accident alarm system benchmark experiments at the CEA Valduc SILENE facility

    International Nuclear Information System (INIS)

    Miller, Thomas Martin; Celik, Cihangir; Dunn, Michael E; Wagner, John C; McMahan, Kimberly L; Authier, Nicolas; Jacquet, Xavier; Rousseau, Guillaume; Wolff, Herve; Savanier, Laurence; Baclet, Nathalie; Lee, Yi-kang; Trama, Jean-Christophe; Masse, Veronique; Gagnier, Emmanuel; Naury, Sylvie; Blanc-Tranchant, Patrick; Hunter, Richard; Kim, Soon; Dulik, George Michael; Reynolds, Kevin H.

    2015-01-01

    In October 2010, a series of benchmark experiments was conducted at the French Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA) Valduc SILENE facility. These experiments were a joint effort between the United States Department of Energy Nuclear Criticality Safety Program and the CEA. The purpose of these experiments was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems. This series of experiments consisted of three single-pulsed experiments with the SILENE reactor. For the first experiment, the reactor was bare (unshielded), whereas in the second and third experiments, it was shielded by lead and polyethylene, respectively. The polyethylene shield of the third experiment had a cadmium liner on its internal and external surfaces, which was located vertically near the fuel region of SILENE. During each experiment, several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor. Nearly half of the foils and TLDs had additional high-density magnetite concrete, high-density barite concrete, standard concrete, and/or BoroBond shields. CEA Saclay provided all the concrete, and the US Y-12 National Security Complex provided the BoroBond. Measurement data from the experiments were published at the 2011 International Conference on Nuclear Criticality (ICNC 2011) and the 2013 Nuclear Criticality Safety Division (NCSD 2013) topical meeting. Preliminary computational results for the first experiment were presented in the ICNC 2011 paper, which showed poor agreement between the computational results and the measured values of the foils shielded by concrete. Recently, the hydrogen content, boron content, and density of these concrete shields were further investigated within the constraints of the previously available data. New computational results for the first experiment are now available.

  11. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    Energy Technology Data Exchange (ETDEWEB)

    Ericson, Sean J [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Alvarez, Paul [The Wired Group]

    2018-04-13

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  12. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
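
    The SPARQL queries in the infrastructure compute standard extraction metrics over gold and predicted annotations. The sketch below shows the same precision/recall/F1 computation in plain Python; the gold and predicted mutation mentions are invented placeholders, not data from the corpus.

```python
# Hedged sketch of the metric computation the benchmarking queries
# automate. Annotations are modeled as (document, mutation) pairs.

def prf1(gold, predicted):
    """Precision, recall, and F1 over sets of predicted annotations."""
    tp = len(gold & predicted)                      # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Invented example annotations:
gold = {("doc1", "p.V600E"), ("doc1", "c.1799T>A"), ("doc2", "p.G12D")}
pred = {("doc1", "p.V600E"), ("doc2", "p.G12D"), ("doc2", "p.G13D")}

p, r, f = prf1(gold, pred)
print(f"precision={p:.2f} recall={r:.2f} F1={f:.2f}")
```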

  13. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  14. Forecast of criticality experiments and experimental programs needed to support nuclear operations in the United States of America: 1994-1999

    International Nuclear Information System (INIS)

    Rutherford, D.

    1995-01-01

    This Forecast is generated by the Chair of the Experiment Needs Identification Workgroup (ENIWG), with input from Department of Energy and the nuclear community. One of the current concerns addressed by ENIWG was the Defense Nuclear Facilities Safety Board's Recommendation 93-2. This Recommendation delineated the need for a critical experimental capability, which includes (1) a program of general-purpose experiments, (2) improving the information base, and (3) ongoing departmental programs. The nuclear community also recognizes the importance of criticality theory, which, as a stepping stone to computational analysis and safety code development, needs to be benchmarked against well-characterized critical experiments. A summary projection of the Department's needs with respect to criticality information includes (1) hands-on training, (2) criticality and nuclear data, (3) detector systems, (4) uranium- and plutonium-based reactors, and (5) accident analysis. The Workgroup has evaluated, prioritized, and categorized each proposed experiment and program. Transportation/Applications is a new category intended to cover the areas of storage, training, emergency response, and standards. This category has the highest number of priority-1 experiments (nine). Facilities capable of performing experiments include the Los Alamos Critical Experiment Facility (LACEF) along with Area V at Sandia National Laboratory. The LACEF continues to house the most significant collection of critical assemblies in the Western Hemisphere. The staff of this facility and Area V are trained and certified, and documentation is current. ENIWG will continue to work with the nuclear community to identify and prioritize experiments because there is an overwhelming need for critical experiments to be performed for basic research and code validation

  15. Forecast of criticality experiments and experimental programs needed to support nuclear operations in the United States of America: 1994--1999

    International Nuclear Information System (INIS)

    Rutherford, D.

    1994-03-01

    This Forecast is generated by the Chair of the Experiment Needs Identification Workgroup (ENIWG), with input from Department of Energy and the nuclear community. One of the current concerns addressed by ENIWG was the Defense Nuclear Facilities Safety Board's Recommendation 93-2. This Recommendation delineated the need for a critical experimental capability, which includes (1) a program of general-purpose experiments, (2) improving the information base, and (3) ongoing departmental programs. The nuclear community also recognizes the importance of criticality theory, which, as a stepping stone to computational analysis and safety code development, needs to be benchmarked against well-characterized critical experiments. A summary projection of the Department's needs with respect to criticality information includes (1) hands-on training, (2) criticality and nuclear data, (3) detector systems, (4) uranium- and plutonium-based reactors, and (5) accident analysis. The Workgroup has evaluated, prioritized, and categorized each proposed experiment and program. Transportation/Applications is a new category intended to cover the areas of storage, training, emergency response, and standards. This category has the highest number of priority-1 experiments (nine). Facilities capable of performing experiments include the Los Alamos Critical Experiment Facility (LACEF) along with Area V at Sandia National Laboratory. The LACEF continues to house the most significant collection of critical assemblies in the Western Hemisphere. The staff of this facility and Area V are trained and certified, and documentation is current. ENIWG will continue to work with the nuclear community to identify and prioritize experiments because there is an overwhelming need for critical experiments to be performed for basic research and code validation.

  16. Benchmarks and Quality Assurance for Online Course Development in Higher Education

    Science.gov (United States)

    Wang, Hong

    2008-01-01

    As online education has entered the mainstream of U.S. higher education, quality assurance in online course development has become a critical topic in distance education. This short article summarizes the major benchmarks related to online course development, listing and comparing the benchmarks of the National Education Association (NEA),…

  17. Interior beam searchlight semi-analytical benchmark

    International Nuclear Information System (INIS)

    Ganapol, Barry D.; Kornreich, Drew E.

    2008-01-01

    Multidimensional semi-analytical benchmarks to provide highly accurate standards to assess routine numerical particle transport algorithms are few and far between. Because of the well-established 1D theory for the analytical solution of the transport equation, it is sometimes possible to 'bootstrap' a 1D solution to generate a more comprehensive solution representation. Here, we consider the searchlight problem (SLP) as a multidimensional benchmark. A variation of the usual SLP is the interior beam SLP (IBSLP) where a beam source lies beneath the surface of a half space and emits directly towards the free surface. We consider the establishment of a new semi-analytical benchmark based on a new FN formulation. This problem is important in radiative transfer experimental analysis to determine cloud absorption and scattering properties. (authors)

  18. Benchmark referencing of neutron dosimetry measurements

    International Nuclear Information System (INIS)

    Eisenhauer, C.M.; Grundl, J.A.; Gilliam, D.M.; McGarry, E.D.; Spiegel, V.

    1980-01-01

    The concept of benchmark referencing involves interpretation of dosimetry measurements in applied neutron fields in terms of similar measurements in benchmark fields whose neutron spectra and intensity are well known. The main advantage of benchmark referencing is that it minimizes or eliminates many types of experimental uncertainties such as those associated with absolute detection efficiencies and cross sections. In this paper we consider the cavity external to the pressure vessel of a power reactor as an example of an applied field. The pressure vessel cavity is an accessible location for exploratory dosimetry measurements aimed at understanding embrittlement of pressure vessel steel. Comparisons with calculated predictions of neutron fluence and spectra in the cavity provide a valuable check of the computational methods used to estimate pressure vessel safety margins for pressure vessel lifetimes
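
    The stated advantage, that ratios to a benchmark field cancel absolute detection efficiencies, can be sketched numerically. The count rates and benchmark fluence below are invented placeholders used only to illustrate the ratio technique, not values from the paper.

```python
# Hedged sketch of benchmark referencing: the same foil/detector pair is
# exposed in the applied field and in a benchmark field, so the unknown
# absolute detection efficiency appears in both count rates and cancels.

def referenced_fluence(count_rate_applied, count_rate_benchmark,
                       fluence_benchmark):
    """Fluence in the applied field inferred by ratio to a benchmark field.

    Only the well-known benchmark fluence enters in absolute terms; the
    detector efficiency and reaction cross section drop out of the ratio.
    """
    return fluence_benchmark * count_rate_applied / count_rate_benchmark

# Invented numbers: same foil, same detector, two fields.
phi = referenced_fluence(count_rate_applied=4.2e3,      # counts/s
                         count_rate_benchmark=2.1e4,    # counts/s
                         fluence_benchmark=1.0e13)      # n/cm^2
print(f"inferred applied-field fluence: {phi:.2e} n/cm^2")
```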

  19. Testing of cross section libraries for TRIGA criticality benchmark

    International Nuclear Information System (INIS)

    Snoj, L.; Trkov, A.; Ravnik, M.

    2007-01-01

    Influence of various up-to-date cross section libraries on the multiplication factor of the TRIGA benchmark, as well as the influence of fuel composition on the multiplication factor of a system composed of various types of TRIGA fuel elements, was investigated. It was observed that keff calculated using the ENDF/B-VII cross section library is systematically higher than that calculated using the ENDF/B-VI cross section library. The main contributions (∼220 pcm) are from 235U and Zr. (author)

  20. Measurement, Standards, and Peer Benchmarking: One Hospital's Journey.

    Science.gov (United States)

    Martin, Brian S; Arbore, Mark

    2016-04-01

    Peer-to-peer benchmarking is an important component of rapid-cycle performance improvement in patient safety and quality-improvement efforts. Institutions should carefully examine critical success factors before engagement in peer-to-peer benchmarking in order to maximize growth and change opportunities. Solutions for Patient Safety has proven to be a high-yield engagement for Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center, with measureable improvement in both organizational process and culture. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Framework for benchmarking online retailing performance using fuzzy AHP and TOPSIS method

    Directory of Open Access Journals (Sweden)

    M. Ahsan Akhtar Hasin

    2012-08-01

    Due to the increasing penetration of internet connectivity, on-line retail is growing from the pioneer phase to increasing integration within people's lives and companies' normal business practices. In this increasingly competitive environment, on-line retail service providers require a systematic and structured approach to gain a cutting edge over rivals. Thus, benchmarking has become indispensable for on-line retail service providers seeking superior performance. This paper uses the fuzzy analytic hierarchy process (FAHP) approach to support a generic on-line retail benchmarking process. Critical success factors for on-line retail service have been identified from a structured questionnaire and the literature, and prioritized using fuzzy AHP. Using these critical success factors, the performance level of ORENET, an on-line retail service provider, is benchmarked along with four other on-line service providers using the TOPSIS method. Based on the benchmark, their relative ranking is also illustrated.
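
    The ranking step of the method above can be sketched as a standard TOPSIS computation. The provider names, criterion weights, and scores below are invented placeholders, and for simplicity all criteria are treated as benefit criteria; this is an illustration of the generic technique, not the paper's actual data or weighting.

```python
# Hedged sketch of TOPSIS: vector-normalize the decision matrix, weight
# it, find the ideal and anti-ideal alternatives, and rank by relative
# closeness to the ideal.
import math

def topsis(matrix, weights):
    """Return the TOPSIS closeness coefficient of each alternative."""
    ncols = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix))
             for j in range(ncols)]
    v = [[w * row[j] / n for j, (w, n) in enumerate(zip(weights, norms))]
         for row in matrix]
    ideal = [max(col) for col in zip(*v)]   # benefit criteria only
    anti = [min(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_plus = math.dist(row, ideal)      # distance to ideal
        d_minus = math.dist(row, anti)      # distance to anti-ideal
        scores.append(d_minus / (d_plus + d_minus))
    return scores

# Invented alternatives and criteria scores:
providers = ["ORENET", "A", "B", "C", "D"]
matrix = [[7, 8, 6], [9, 6, 7], [5, 9, 8], [6, 7, 5], [8, 5, 9]]
weights = [0.5, 0.3, 0.2]
for name, s in sorted(zip(providers, topsis(matrix, weights)),
                      key=lambda t: -t[1]):
    print(f"{name}: closeness = {s:.3f}")
```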

  2. Concrete reflected cylinders of highly enriched solutions of uranyl nitrate ICSBEP Benchmark: A re-evaluation by means of MCNPX using ENDF/B-VI cross section library

    International Nuclear Information System (INIS)

    Cruzate, J.A.; Carelli, J.L.

    2011-01-01

    This work presents a theoretical re-evaluation of a set of original experiments included in the 2009 issue of the International Handbook of Evaluated Criticality Safety Benchmark Experiments as “Concrete Reflected Cylinders of Highly Enriched Solutions of Uranyl Nitrate” (identification number: HEU-SOL-THERM-002) [4]. The present evaluation has been made according to the benchmark specifications [4], supplemented with data taken from the original published report [3], but applying a different approach, resulting in a more realistic calculation model. In addition, calculations have been made using the latest version of the MCNPX Monte Carlo code, combined with an updated set of cross section data, the continuous-energy ENDF/B-VI library. This has resulted in a comprehensive model for the given experimental situation. The uncertainty analysis is based on the evaluation of experimental data presented in the HEU-SOL-THERM-002 report. Calculations with the present improved physical model reproduce the criticality of the configurations within 0.5%, in good agreement with the experimental data. Results obtained in the analysis of uncertainties are in general agreement with those in the HEU-SOL-THERM-002 benchmark document. Qualitative results from the analyses made in the present work can be extended to similar fissile systems: well-moderated units of 235U solutions, reflected with concrete from all directions. The results have confirmed that neutron absorbers, even as impurities, must be taken into account in calculations if at least approximate proportions are known. (authors)

  3. Benchmark matrix and guide: Part II.

    Science.gov (United States)

    1991-01-01

    In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.

  4. Development and validation of a criticality calculation scheme based on French deterministic transport codes

    International Nuclear Information System (INIS)

    Santamarina, A.

    1991-01-01

    A criticality-safety calculational scheme using the automated deterministic code system APOLLO-BISTRO has been developed. The cell/assembly code APOLLO is used mainly in LWR and HCR design calculations, and its validation spans a wide range of moderation ratios, including voided configurations. Its recent 99-group library and self-shielded cross-sections have been extensively qualified through critical experiments and PWR spent fuel analysis. The PIC self-shielding formalism enables a rigorous treatment of the fuel double heterogeneity in dissolver medium calculations. BISTRO is an optimized multidimensional SN code, part of the modular CCRR package used mainly in FBR calculations. The APOLLO-BISTRO scheme was applied to the 18 experimental benchmarks selected by the OECD/NEACRP Criticality Calculation Working Group. The calculation-experiment discrepancy was within ±1% in ΔK/K and was always consistent with the experimental uncertainty margin. In the critical experiments corresponding to a dissolver-type benchmark, our tools computed a satisfactory Keff. In the VALDUC fuel storage experiments, with hafnium plates, the computed Keff ranged between 0.994 and 1.003 for the various water gaps spacing the fuel clusters from the absorber plates. The APOLLO-KENOEUR statistical calculation scheme, based on the same self-shielded multigroup library, supplied consistent results within 0.3% in ΔK/K. (Author)

  5. Experimental benchmark for piping system dynamic-response analyses

    International Nuclear Information System (INIS)

    1981-01-01

    This paper describes the scope and status of a piping system dynamics test program. A 0.20 m(8 in.) nominal diameter test piping specimen is designed to be representative of main heat transport system piping of LMFBR plants. Particular attention is given to representing piping restraints. Applied loadings consider component-induced vibration as well as seismic excitation. The principal objective of the program is to provide a benchmark for verification of piping design methods by correlation of predicted and measured responses. Pre-test analysis results and correlation methods are discussed

  6. Experimental benchmark for piping system dynamic response analyses

    International Nuclear Information System (INIS)

    Schott, G.A.; Mallett, R.H.

    1981-01-01

    The scope and status of a piping system dynamics test program are described. A 0.20-m nominal diameter test piping specimen is designed to be representative of main heat transport system piping of LMFBR plants. Attention is given to representing piping restraints. Applied loadings consider component-induced vibration as well as seismic excitation. The principal objective of the program is to provide a benchmark for verification of piping design methods by correlation of predicted and measured responses. Pre-test analysis results and correlation methods are discussed. 3 refs

  7. Analytical Radiation Transport Benchmarks for The Next Century

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    2005-01-01

    Verification of large-scale computational algorithms used in nuclear engineering and radiological applications is an essential element of reliable code performance. For this reason, the development of a suite of multidimensional semi-analytical benchmarks has been undertaken to provide independent verification of proper operation of codes dealing with the transport of neutral particles. The benchmarks considered cover several one-dimensional, multidimensional, monoenergetic and multigroup, fixed source and critical transport scenarios. The first approach is based on the Green's function: in slab geometry, the Green's function is incorporated into a set of integral equations for the boundary fluxes. Through a numerical Fourier transform inversion and subsequent matrix inversion for the boundary fluxes, a semi-analytical benchmark emerges. Multidimensional solutions in a variety of infinite media are also based on the slab Green's function. In a second approach, a new converged SN method is developed, in which the SN solution is "mined" to bring out hidden high-quality solutions. For this case, multigroup fixed source and criticality transport problems are considered. Remarkably accurate solutions can be obtained with this new method, called the Multigroup Converged SN (MGCSN) method, as will be demonstrated
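
    One generic way to "mine" a converging sequence of discrete-ordinates solutions for a higher-quality answer is sequence acceleration. The sketch below uses Richardson extrapolation on a toy sequence of fictitious SN eigenvalues with an assumed second-order error; it illustrates the idea only and is not the paper's MGCSN algorithm.

```python
# Hedged sketch: Richardson extrapolation of two approximations whose
# error decays like h**p. Applied here to an invented toy sequence
# k(N) = k_inf + c / N**2 of fictitious S_N eigenvalues.

def richardson(f_n, f_2n, p=2):
    """Extrapolate approximations at step h and h/2 toward the limit."""
    return f_2n + (f_2n - f_n) / (2 ** p - 1)

k_inf, c = 1.000000, 0.04
k4 = k_inf + c / 16     # toy "S4" value
k8 = k_inf + c / 64     # toy "S8" value

print(f"S4 value     : {k4:.6f}")
print(f"S8 value     : {k8:.6f}")
print(f"extrapolated : {richardson(k4, k8):.6f}")
```

For this toy sequence the assumed error model is exact, so the extrapolation recovers the limit; for a real SN sequence it would only improve the estimate.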

  8. The benchmark testing of 9Be of CENDL-3

    International Nuclear Information System (INIS)

    Liu Ping

    2002-01-01

    CENDL-3, the latest version of the China Evaluated Nuclear Data Library, was finished. The 9Be data were updated and recently distributed for benchmark analysis. The calculated results are presented and compared with the experimental data and with results based on other evaluated nuclear data libraries. The results show that CENDL-3 is better than the others for most benchmarks

  9. Lesson learned from the SARNET wall condensation benchmarks

    International Nuclear Information System (INIS)

    Ambrosini, W.; Forgione, N.; Merli, F.; Oriolo, F.; Paci, S.; Kljenak, I.; Kostka, P.; Vyskocil, L.; Travis, J.R.; Lehmkuhl, J.; Kelm, S.; Chin, Y.-S.; Bucci, M.

    2014-01-01

    Highlights: • The results of the benchmarking activity on wall condensation are reported. • The work was performed in the frame of SARNET. • General modelling techniques for condensation are discussed. • Results of the University of Pisa and of other benchmark participants are discussed. • The lesson learned is drawn. - Abstract: The prediction of condensation in the presence of noncondensable gases has received continuing attention in the frame of the Severe Accident Research Network of Excellence, both in the first (2004–2008) and in the second (2009–2013) EC integrated projects. Among the reasons this basic phenomenon, addressed by classical treatments dating from the first decades of the last century, remains so relevant is the interest in developing updated CFD models for reactor containment analysis, which requires validating the available modelling techniques at a different level. In the frame of SARNET, benchmarking activities were undertaken taking advantage of the work performed at different institutions in setting up and developing models for steam condensation in conditions of interest for nuclear reactor containment. Four steps were performed in the activity, involving: (1) an idealized problem freely inspired by the actual conditions occurring in an experimental facility, CONAN, installed at the University of Pisa; (2) a first comparison with experimental data purposely collected with the CONAN facility; (3) a second comparison with data available from experimental campaigns performed in the same apparatus before the inclusion of the activities in SARNET; (4) a third exercise involving data obtained at lower mixture velocity than in previous campaigns, aimed at providing conditions closer to those addressed in reactor containment analyses. The last step of the benchmarking activity required changing the configuration of the experimental apparatus to achieve the lower flow rates involved in the new test specifications. The

  10. Creation of a simplified benchmark model for the neptunium sphere experiment

    International Nuclear Information System (INIS)

    Mosteller, Russell D.; Loaiza, David J.; Sanchez, Rene G.

    2004-01-01

    Although neptunium is produced in significant amounts by nuclear power reactors, its critical mass is not well known. In addition, sizeable uncertainties exist for its cross sections. As an important step toward resolution of these issues, a critical experiment was conducted in 2002 at the Los Alamos Critical Experiments Facility. In the experiment, a 6-kg sphere of 237Np was surrounded by nested hemispherical shells of highly enriched uranium. The shells were required in order to reach a critical condition. Subsequently, a detailed model of the experiment was developed. This model faithfully reproduces the components of the experiment, but it is geometrically complex. Furthermore, the isotopics analysis upon which that model is based omits nearly 1% of the mass of the sphere. A simplified benchmark model has been constructed that retains all of the neutronically important aspects of the detailed model and substantially reduces the computer resources required for the calculation. The reactivity impact of each of the simplifications is quantified, including the effect of the missing mass. A complete set of specifications for the benchmark is included in the full paper. Both the detailed and simplified benchmark models underpredict keff by more than 1% Δk. This discrepancy supports the suspicion that better cross sections are needed for 237Np.

  11. Experimental Study on Critical Power in a Hemispherical Narrow Gap

    International Nuclear Information System (INIS)

    Park, Rae-Joon; Ha, Kwang-Soon; Kim, Sang-Baik; Kim, Hee-Dong; Jeong, Ji-Hwan

    2002-01-01

    An experimental study of critical heat flux in gap (CHFG) has been performed to investigate the inherent cooling mechanism in a hemispherical narrow gap. The objective of the CHFG test is to measure critical power, the critical heat removal rate through the hemispherical narrow gap, using distilled water, with system pressure and gap width as experimental parameters. The CHFG test results have shown that a countercurrent flow limitation (CCFL) brings about local dryout at the small edge region of the upper part and finally global dryout in the hemispherical narrow gap. Increases in gap width and pressure lead to an increase in critical power. The measured values of critical power are lower than the predictions made by other empirical CHF correlations applicable to flat plates, annuli, and small spherical gaps. The measured data on critical power in the hemispherical narrow gaps have been correlated using nondimensional parameters to within approximately ±20%. The developed correlation has been extended to apply to spherical geometry using the Siemens/KWU correlation

  12. Quality management benchmarking: FDA compliance in pharmaceutical industry.

    Science.gov (United States)

    Jochem, Roland; Landgraf, Katja

    2010-01-01

    By analyzing and comparing industry and business best practice, processes can be optimized and become more successful, mainly because efficiency and competitiveness increase. This paper focuses on examples of this approach: case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and a five-stage model. Despite large administrations, there is much potential regarding business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is interesting for companies that aim to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness and increases the chances of reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management, and especially benchmarking, is shown to support pharmaceutical industry improvements.

  13. Proceedings of the workshop on integral experiment covariance data for critical safety validation

    Energy Technology Data Exchange (ETDEWEB)

    Stuke, Maik (ed.)

    2016-04-15

    For some time, attempts to quantify the statistical dependencies of critical experiments and to account for them properly in validation procedures were discussed in the literature by various groups. Besides the development of suitable methods especially the quality and modeling issues of the freely available experimental data are in the focus of current discussions, carried out for example in the Expert Group on Uncertainty Analysis for Criticality Safety Assessment (UACSA) of the OECD-NEA Nuclear Science Committee. The same committee compiles and publishes also the freely available experimental data in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. Most of these experiments were performed as series and might share parts of experimental setups leading to correlated results. The quality of the determination of these correlations and the underlying covariance data depend strongly on the quality of the documentation of experiments.

  15. Forecast of criticality experiments and experimental programs needed to support nuclear operations in the United States of America: 1994--1999

    Energy Technology Data Exchange (ETDEWEB)

    Rutherford, D.

    1994-03-01

    This Forecast is generated by the Chair of the Experiment Needs Identification Workgroup (ENIWG), with input from the Department of Energy and the nuclear community. One of the current concerns addressed by ENIWG was the Defense Nuclear Facilities Safety Board's Recommendation 93-2. This Recommendation delineated the need for a critical experimental capability, which includes (1) a program of general-purpose experiments, (2) improving the information base, and (3) ongoing departmental programs. The nuclear community also recognizes the importance of criticality theory, which, as a stepping stone to computational analysis and safety code development, needs to be benchmarked against well-characterized critical experiments. A summary of the Department's needs with respect to criticality information includes (1) hands-on training, (2) criticality and nuclear data, (3) detector systems, (4) uranium- and plutonium-based reactors, and (5) accident analysis. The Workgroup has evaluated, prioritized, and categorized each proposed experiment and program. Transportation/Applications is a new category intended to cover the areas of storage, training, emergency response, and standards. This category has the highest number of priority-1 experiments (nine). Facilities capable of performing experiments include the Los Alamos Critical Experiment Facility (LACEF) along with Area V at Sandia National Laboratories. The LACEF continues to house the most significant collection of critical assemblies in the Western Hemisphere. The staff of this facility and Area V are trained and certified, and documentation is current. ENIWG will continue to work with the nuclear community to identify and prioritize experiments because there is an overwhelming need for critical experiments for basic research and code validation.

  16. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
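
    As a rough illustration of the benchmarking process the guide outlines, a whole-building metric such as annual energy use per unit floor area can be computed and ranked against peer benchmark values. The metric definition and all numbers below are assumptions for illustration, not the guide's published figures:

```python
# Compute a whole-building energy use intensity (EUI) and rank it against
# peer benchmark values. All numbers are illustrative placeholders.

def eui(annual_energy_kwh, floor_area_m2):
    """Energy use intensity in kWh per square metre per year."""
    return annual_energy_kwh / floor_area_m2

def percentile_rank(value, peers):
    """Fraction of peer buildings that use at least as much energy."""
    return sum(1 for p in peers if p >= value) / len(peers)

my_eui = eui(2_500_000, 5_000)              # -> 500 kWh/m2/yr
peer_euis = [350, 420, 480, 510, 600, 750, 900]   # hypothetical peer set
rank = percentile_rank(my_eui, peer_euis)
```

    In the guide's workflow, a poor rank on the whole-building metric would then prompt computing the system-level metrics to locate the opportunity.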

  17. Benchmark experiments of effective delayed neutron fraction βeff at FCA

    International Nuclear Information System (INIS)

    Sakurai, Takeshi; Okajima, Shigeaki

    1999-01-01

    Benchmark experiments on the effective delayed neutron fraction βeff were performed at the Fast Critical Assembly (FCA) of the Japan Atomic Energy Research Institute. The experiments were made in three cores providing a systematic change of the nuclide contributions to βeff: the XIX-1 core fueled with 93% enriched uranium, the XIX-2 core fueled with plutonium and uranium (23% enrichment), and the XIX-3 core fueled with plutonium (92% fissile Pu). Six organizations from five countries participated in these experiments and measured βeff using their own methods and instruments. A target accuracy in βeff of better than ±3% was achieved by averaging the βeff values measured with a wide variety of experimental methods. (author)
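
    Averaging βeff values obtained by different methods, as done here to reach the ±3% target, can be sketched as an inverse-variance weighted mean. The measurement values below are invented for illustration, not the FCA results:

```python
# Inverse-variance weighted average of beta-eff measurements obtained by
# several methods/teams. Values are illustrative, not the FCA data.

def weighted_mean(values, uncerts):
    weights = [1.0 / u ** 2 for u in uncerts]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    sigma = wsum ** -0.5          # uncertainty of the weighted mean
    return mean, sigma

values  = [0.00722, 0.00731, 0.00718, 0.00727]   # beta-eff per method
uncerts = [0.00020, 0.00025, 0.00018, 0.00022]   # 1-sigma per method

beff, sigma = weighted_mean(values, uncerts)
rel_unc_pct = 100.0 * sigma / beff
```

    The combined relative uncertainty shrinks below that of any single method, which is how averaging across methods can reach a tighter target accuracy.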

  18. Production of neutron cross section library based on JENDL-4.0 to continuous-energy Monte Carlo code MVP and its application to criticality analysis of benchmark problems in the ICSBEP handbook

    International Nuclear Information System (INIS)

    Okumura, Keisuke; Nagaya, Yasunobu

    2011-09-01

    In May 2010, JENDL-4.0 was released by the Japan Atomic Energy Agency as the updated Japanese nuclear data library. It was processed with the nuclear data processing system LICEM, and an arbitrary-temperature neutron cross section library, MVPlib-nJ40, was produced for the neutron and photon transport calculation code MVP, which is based on the continuous-energy Monte Carlo method. The library contains neutron cross sections for 406 nuclides in the free-gas model, thermal scattering cross sections, and cross sections of pseudo fission products for burn-up calculations with MVP. Criticality benchmark calculations were carried out with MVP and MVPlib-nJ40 for about 1,000 critical experiments from the handbook of the International Criticality Safety Benchmark Evaluation Project (ICSBEP), which covers a wide variety of fuel materials, fuel forms, and neutron spectra. We report all comparison results (C/E values) of effective neutron multiplication factors between calculations and experiments to provide validation data for the prediction accuracy of JENDL-4.0 for criticality. (author)
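
    The reported comparisons are C/E values, i.e., the calculated effective multiplication factor divided by the experimental benchmark value. A minimal sketch of the tabulation (the keff values below are hypothetical, not the MVP/JENDL-4.0 results):

```python
# Tabulate C/E values for keff: calculated (C) over experimental (E).
# The keff values below are hypothetical examples, not reported results.

cases = {
    # benchmark name: (calculated keff, experimental benchmark keff)
    "HEU-MET-FAST-001":  (1.0003, 1.0000),
    "PU-SOL-THERM-011":  (0.9978, 1.0000),
    "LEU-COMP-THERM-008": (0.9991, 0.9998),
}

c_over_e = {name: c / e for name, (c, e) in cases.items()}
mean_ce = sum(c_over_e.values()) / len(c_over_e)
```

    A mean C/E near unity across many categories, with no category systematically offset, is the kind of evidence such a validation report assembles.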

  19. Evaluation of Saxton critical experiments

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Hyung Kook; Noh, Jae Man; Jung, Hyung Guk; Kim, Young Il; Kim, Young Jin [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    As a part of the International Criticality Safety Benchmark Evaluation Project (ICSBEP), the Saxton critical experiments were reevaluated. The effects on keff of the uncertainties in experiment parameters, fuel rod characterization, soluble boron, critical water level, core structure, 241Am and 241Pu isotope number densities, random pitch error, duplicated experiment, axial fuel position, model simplification, etc., were evaluated and added to the benchmark-model keff. In addition to the detailed model, a simplified model of the Saxton critical experiments was constructed by omitting the top, middle, and bottom grids and ignoring the fuel above the water. 6 refs., 1 fig., 3 tabs. (Author)
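
    Combining the listed uncertainty contributions into a single benchmark-model keff uncertainty is typically done in quadrature, assuming the components are independent. The component values below are invented for illustration, not the Saxton evaluation numbers:

```python
# Combine independent uncertainty components on keff in quadrature.
# Component values are illustrative, not the Saxton evaluation results.

components = {
    "fuel rod characterization": 0.0012,
    "soluble boron":             0.0008,
    "critical water level":      0.0005,
    "random pitch error":        0.0010,
    "axial fuel position":       0.0004,
}

# Total 1-sigma uncertainty: square root of the sum of squares.
total = sum(u ** 2 for u in components.values()) ** 0.5
```

    The quadrature sum is dominated by the largest components, which is why evaluations concentrate effort on characterizing those few parameters well.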

  1. Fast and thermal data testing of 233U critical assemblies

    International Nuclear Information System (INIS)

    Wright, R.Q.; Jordan, W.C.; Leal, L.C.

    1999-01-01

    Many sources have been used to obtain 233U benchmark descriptions. Unfortunately, some of these are not reliable since a thorough and complete benchmark evaluation often has not been done. For 24 years a principal source for 233U benchmarks has been the Cross Section Evaluation Working Group (CSEWG) Benchmark Specifications. The CSEWG specifications included only two fast benchmarks and three thermal benchmarks. The thermal benchmarks were H2O-moderated thorium-oxide exponential lattices. Since the thorium-oxide lattices were exponential experiments, they have not been widely used. CSEWG has also used the 233U Oak Ridge National Laboratory (ORNL) spheres for many years. One advantage of the CSEWG fast benchmarks, JEZEBEL-23 and FLATTOP-23, is that experiments were done for central-reaction-rate ratios. These reaction-rate ratios provide very valuable information to data testers and evaluators that would not otherwise be available. In recent years the International Handbook of Evaluated Criticality Safety Benchmark Experiments has, in general, been a very useful and reliable source. The Handbook does not include central-reaction-rate-ratio experiments, however. A new set of 233U benchmark experiments, U233-SOL-THERM-004, has been added to the most recent release of the Handbook. These are paraffin-reflected cylinders of 233U uranyl-nitrate solutions. Unfortunately, the estimated benchmark uncertainties are on the order of 0.9 to 1.0% in keff. Benchmark testing has been done for some of these U233-SOL-THERM-004 experiments. The authors have also discovered that the benchmark specifications for the Thomas uranyl-nitrate experiments given in Ref. 5 are incorrect. One problem with the Ref. 5 specifications is that the excess acid was not included. As part of this work, the authors developed revised specifications that include an excess acid correlation based on information from the experimental logbook

  2. Benchmark calculations for VENUS-2 MOX-fueled reactor dosimetry

    International Nuclear Information System (INIS)

    Kim, Jong Kung; Kim, Hong Chul; Shin, Chang Ho; Han, Chi Young; Na, Byung Chan

    2004-01-01

    As part of a Nuclear Energy Agency (NEA) project, the benchmark for dosimetry calculations of the VENUS-2 MOX-fuelled reactor was pursued. The goal of this benchmark is to test current state-of-the-art computational methods for calculating neutron flux to reactor components against the measured data of the VENUS-2 MOX-fuelled critical experiments. The measured data to be used for this benchmark are the equivalent fission fluxes, which are the reaction rates divided by the 235U fission-spectrum-averaged cross section of the corresponding dosimeter. The present benchmark is, therefore, defined to calculate reaction rates and corresponding equivalent fission fluxes measured on the core mid-plane at specific positions outside the core of the VENUS-2 MOX-fuelled reactor. This is a follow-up exercise to the previously completed UO2-fuelled VENUS-1 two-dimensional and VENUS-3 three-dimensional exercises. The use of MOX fuel in LWRs presents different neutron characteristics, and this is the main interest of the current benchmark compared to the previous ones
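
    The equivalent fission flux defined above is simply the measured reaction rate divided by the dosimeter cross section averaged over the 235U fission spectrum. A sketch of the conversion, with a hypothetical reaction rate and a rough spectrum-averaged cross section:

```python
# Equivalent fission flux = reaction rate / fission-spectrum-averaged
# cross section of the dosimeter. Numeric values are hypothetical.

BARN_TO_CM2 = 1.0e-24

def equivalent_fission_flux(reaction_rate, sigma_avg_barn):
    """reaction_rate: reactions per atom per second.
    sigma_avg_barn: 235U-fission-spectrum-averaged cross section (barn).
    Returns the equivalent fission flux in n/cm2/s."""
    return reaction_rate / (sigma_avg_barn * BARN_TO_CM2)

# Example: a threshold dosimeter with a spectrum-averaged cross section
# of ~0.1 b (assumed value) and a hypothetical measured reaction rate.
rate = 1.0e-17                                  # reactions/atom/s
flux = equivalent_fission_flux(rate, 0.1)       # ~1e8 n/cm2/s
```

    Expressing results as equivalent fluxes removes the dosimeter-specific cross section magnitude, so fluxes from different dosimeters at the same position can be compared directly.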

  3. Effects of uncertainties of experimental data in the benchmarking of a computer code

    International Nuclear Information System (INIS)

    Meulemeester, E. de; Bouffioux, P.; Demeester, J.

    1980-01-01

    Fuel rod performance modelling is sometimes approached in a purely academic way. The experience of the COMETHE code development since 1967 has clearly shown that benchmarking is the most important part of model development. Unfortunately, it requires well-characterized data. Although the two examples presented here were not intended for benchmarking, since the COMETHE calculations were performed only to interpret the results, they illustrate the effects of a lack of fuel characterization and of power history uncertainties

  4. MCNP5 CRITICALITY VALIDATION AND BIAS FOR INTERMEDIATE ENRICHED URANIUM SYSTEMS

    International Nuclear Information System (INIS)

    Finfrock, S.H.

    2009-01-01

    The purpose of this analysis is to validate the Monte Carlo N-Particle 5 (MCNP5) code Version 1.40 (LA-UR-03-1987, 2005) and its cross-section database for KCODE criticality calculations of intermediate-enriched uranium systems on Intel processor based PCs running any version of the Windows operating system. Configurations with intermediate-enriched uranium were modeled over the moderation range 39 ≤ H/fissile ≤ 1438. See Table 2-1 for brief descriptions of selected cases and Table 3-1 for the range of applicability of this validation. A total of 167 input cases were evaluated, including bare and reflected systems as single bodies or arrays. The 167 cases were taken directly from the previous (Version 4C [Lan 2005]) validation database. Section 2.0 lists the data used to calculate k-effective (keff) for the 167 experimental criticality benchmark cases using the MCNP5 code v1.40 and its cross-section database. Appendix B lists the MCNP cross-section database entries validated for use in evaluating intermediate-enriched uranium systems for criticality safety. The dimensions and atom densities for the intermediate-enriched uranium experiments were taken from NEA/NSC/DOC(95)03, September 2005, which is referred to as the benchmark handbook throughout the report. For these input values, the experimental benchmark keff is approximately 1.0. The MCNP validation runs reached an accuracy of approximately ±0.001. For the cases where the reported benchmark keff was not equal to 1.0000, the MCNP calculational results were normalized. The difference between the MCNP validation runs and the experimentally measured keff is the MCNP5 v1.40 bias. The USLSTATS code (ORNL 1998) was utilized to perform the statistical analysis and generate an acceptable maximum keff limit for calculations of intermediate-enriched uranium systems.
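
    The normalization and bias determination described above can be sketched as follows: each calculated keff is divided by its benchmark keff where that differs from 1.0, and the bias is the deviation of the normalized mean from unity. The case data and the subcritical margin below are invented placeholders; the actual statistical treatment (tolerance bands, trending) is done by USLSTATS:

```python
# Normalize calculated keff values by their benchmark keff and estimate a
# validation bias. Case data and margin are invented placeholders; the
# real analysis uses the USLSTATS statistical treatment.

cases = [
    # (MCNP calculated keff, experimental benchmark keff)
    (1.0012, 1.0000),
    (0.9985, 0.9997),
    (1.0004, 1.0003),
    (0.9990, 1.0000),
]

normalized = [calc / bench for calc, bench in cases]
mean_k = sum(normalized) / len(normalized)
bias = mean_k - 1.0            # negative bias means underprediction

# Crude upper subcritical limit: unity, credited only with a negative
# bias, minus an assumed administrative margin. USLSTATS additionally
# subtracts a statistical tolerance-band term.
margin = 0.05
usl = 1.0 + min(bias, 0.0) - margin
```

    Note that a positive bias is conservatively not credited, which mirrors standard criticality-safety validation practice.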

  5. Joint European contribution to phase 5 of the BN600 hybrid reactor benchmark core analysis (European ERANOS formulaire for fast reactor core analysis)

    International Nuclear Information System (INIS)

    Rimpault, G.

    2004-01-01

    The hybrid UOX/MOX-fueled core of the BN-600 reactor was endorsed as an international benchmark. The BFS-2 critical facility was designed for full-size simulation of the core and shielding of large fast reactors (up to 3000 MWe). A wide experimental programme, including measurements of criticality, fission rates, rod worths, and the sodium void reactivity effect (SVRE), was established. Four BFS-62 critical assemblies have been designed to study changes in BN-600 reactor physics when moving to a hybrid MOX core. The BFS-62-3A assembly is a full-scale model of the BN-600 reactor hybrid core. It consists of three regions of UO2 fuel, axial and radial fertile blankets, MOX fuel added in a ring between the MC and OC zones, and a 120-degree sector of stainless steel reflector included within the radial blanket. The joint European contribution to the Phase 5 benchmark analysis was performed by Serco Assurance Winfrith (UK) and CEA Cadarache (France). The analysis was carried out using Version 1.2 of ERANOS, the European code and data system for advanced and fast reactor core applications. Nuclear data are based on the JEF2.2 nuclear data evaluation (including sodium). Results for Phase 5 of the BN-600 benchmark have been determined for criticality and SVRE in both diffusion and transport theory. Full details of the results are presented in a paper posted on the IAEA Business Collaborator website, and a brief summary is provided in this paper

  6. Benchmarking is associated with improved quality of care in type 2 diabetes: the OPTIMISE randomized, controlled trial.

    Science.gov (United States)

    Hermans, Michel P; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-11-01

    To assess prospectively the effect of benchmarking on the quality of primary care for patients with type 2 diabetes, using three major modifiable cardiovascular risk factors as critical quality indicators. Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. The HbA1c target was achieved by 58.9% in the benchmarking group vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0% vs. 30.1% of patients met the SBP target in the benchmarking and control groups, respectively. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%). Benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patients' residual cardiovascular risk profile.

  7. HTR-PROTEUS pebble bed experimental program cores 9 & 10: columnar hexagonal point-on-point packing with a 1:1 moderator-to-fuel pebble ratio

    Energy Technology Data Exchange (ETDEWEB)

    Bess, John D. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  9. Experiences with installing and benchmarking SCALE 4.0 on workstations

    International Nuclear Information System (INIS)

    Montierth, L.M.; Briggs, J.B.

    1992-01-01

    The advent of economical, high-speed workstations has placed on the criticality engineer's desktop the means to perform computational analyses that were previously possible only on mainframe computers. With this capability comes the need to modify and maintain criticality codes for use on a variety of different workstations. Due to the use of nonstandard coding, compiler differences (in the absence of American National Standards Institute (ANSI) standards), and other machine idiosyncrasies, there is a definite need to systematically test and benchmark all codes ported to workstations. Once benchmarked, a user environment must be maintained to ensure that the code does not become corrupted. The goal in creating a workstation version of the criticality safety analysis sequence (CSAS) codes in SCALE 4.0 was to start with the Cray versions and change as little source code as possible, yet produce as generic a code as possible. To date, this code has been ported to the IBM RISC 6000, Data General AViiON 400, and Silicon Graphics 4D-35 (all using the same source code), and to the Hewlett-Packard Series 700 workstations. The code is maintained under a configuration control procedure. In this paper, the authors address considerations that pertain to the installation and benchmarking of CSAS

  10. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND flash array. The image resampling benchmark compared software-only performance with GPU-accelerated performance. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB Fusion-io parallel NAND flash disk array. The Fusion system specs are as follows

  11. Benchmark experiments on a lead reflected system and calculations on the geometry of the experimental facility using most of the commonly available nuclear data libraries

    International Nuclear Information System (INIS)

    Guillemot, M.; Colomb, G.

    1985-01-01

    A series of criticality benchmark experiments with a small LWR-type core, reflected by 30 cm of lead, was defined jointly by SEC (Service d'Etude de Criticite), Fontenay-aux-Roses, and SRD (Safety and Reliability Directorate). These experiments are very representative of the reflecting effect of lead, since the contribution of the lead to the reactivity was assessed at about 30% in Δk. The experiments were carried out by SRSC (Service de Recherche en Surete et Criticite), Valduc, in December 1983 in the subcritical facility called APPARATUS B. In addition, they confirmed and measured the effect on reactivity of a water gap between the core and the lead reflector; with a water gap of less than 1 cm, the reactivity can be greater than that of the core directly reflected by the lead or by over 20 cm of water. The experimental results were analyzed largely by SRD with the aid of the MONK Monte Carlo code and in part by SEC with the aid of the MORET Monte Carlo code. All the results obtained are presented in the summary tables. These experiments allowed a comparison of the different cross-section libraries available

  12. Existing experimental criticality data applicable to nuclear-fuel-transportation systems

    International Nuclear Information System (INIS)

    Bierman, S.R.

    1983-02-01

    Analytical techniques are generally relied upon in making criticality evaluations involving nuclear material outside reactors. For these evaluations to be accepted, the calculations must be validated by comparison with experimental data for a known set of conditions having physical and neutronic characteristics similar to the conditions being evaluated analytically. The purpose of this report is to identify existing experimental data that are suitable for use in verifying criticality calculations for nuclear fuel transportation systems. In addition, near-term needs for additional data in this area are identified. Of the considerable amount of existing criticality data applicable to non-reactor systems, those particularly suitable for supporting nuclear material transportation systems have been identified and catalogued into the following groups: (1) critical assemblies of fuel rods in water; (2) critical assemblies of fuel rods in water containing soluble neutron absorbers; (3) critical assemblies containing solid neutron absorbers; (4) critical assemblies of fuel rods in water with heavy metal reflectors; and (5) critical assemblies of fuel rods in water with irregular features. A listing of the current near-term needs for additional data in each of the groups has been developed for future use in planning criticality research in support of nuclear fuel transportation systems. The criticality experiments needed to provide these data are briefly described and identified according to priority and the relative cost of performing the experiments

  13. Experimental studies and computational benchmark on heavy liquid metal natural circulation in a full height-scale test loop for small modular reactors

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Yong-Hoon, E-mail: chaotics@snu.ac.kr [Department of Energy Systems Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of); Cho, Jaehyun [Korea Atomic Energy Research Institute, 111 Daedeok-daero, 989 Beon-gil, Yuseong-gu, Daejeon 34057 (Korea, Republic of); Lee, Jueun; Ju, Heejae; Sohn, Sungjune; Kim, Yeji; Noh, Hyunyub; Hwang, Il Soon [Department of Energy Systems Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of)

    2017-05-15

    Highlights: • Experimental studies on natural circulation for lead-bismuth eutectic were conducted. • Adiabatic wall boundary conditions were established by compensating heat loss. • A computational benchmark with a system thermal-hydraulics code was performed. • Numerical simulation and experiment showed good agreement in mass flow rate. • An empirical relation was formulated for mass flow rate from the experimental data. - Abstract: In order to test the enhanced safety of small lead-cooled fast reactors, lead-bismuth eutectic (LBE) natural circulation characteristics have been studied. We present results of experiments with LBE non-isothermal natural circulation in a full-height scale test loop, HELIOS (heavy eutectic liquid metal loop for integral test of operability and safety of PEACER), and the validation of a system thermal-hydraulics code. The experimental studies on LBE were conducted at steady state for core power conditions from 9.8 kW to 33.6 kW. Local surface heaters on the main loop were activated and finely tuned by a trial-and-error approach to establish adiabatic wall boundary conditions. The thermal-hydraulic system code MARS-LBE was validated using the well-defined benchmark data. It was found that the predictions were mostly in good agreement with the experimental data in terms of mass flow rate and temperature difference, both within 7%. From the experimental results, an empirical relation predicting the mass flow rate under non-isothermal, adiabatic conditions in HELIOS was derived.
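
    Empirical relations of the kind derived here commonly take a power-law form, mass flow ≈ a·Q^b, for natural circulation (with b near 1/3 in a friction-dominated loop). A sketch fitting such a form by linear least squares in log space, using invented (power, flow) pairs rather than the HELIOS measurements:

```python
import math

# Fit mass flow rate vs. core power to m_dot = a * Q**b by least squares
# on log-transformed data. The (Q, m_dot) pairs are illustrative, not
# the HELIOS measurements.

data = [(9.8, 3.1), (16.0, 3.7), (24.0, 4.2), (33.6, 4.7)]  # (kW, kg/s)

xs = [math.log(q) for q, _ in data]
ys = [math.log(m) for _, m in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n

# Slope of the log-log regression is the exponent b; intercept gives a.
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = math.exp(ybar - b * xbar)

predicted = a * 20.0 ** b   # predicted mass flow rate at 20 kW
```

    The fitted exponent coming out close to 1/3 for these invented points reflects the usual scaling of buoyancy-driven flow against turbulent friction.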

  14. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    output includes a plot of the MAAP calculation and the plant data. For the large integral experiments, a major part, but not all, of the MAAP code is needed. These use an experiment-specific benchmark routine that includes all of the information and boundary conditions for performing the calculation, as well as the information on which parts of MAAP are unnecessary and can be 'bypassed'. Lastly, the separate effects tests only require a few MAAP routines. These are exercised through their own specific benchmark routine that includes the experiment-specific information and boundary conditions. This benchmark routine calls the appropriate MAAP routines from the source code, performs the calculations, including integration where necessary, and provides the comparison between the MAAP calculation and the experimental observations. (author)

  15. Review of experimental methods for evaluating effective delayed neutron fraction

    Energy Technology Data Exchange (ETDEWEB)

    Yamane, Yoshihiro [Nagoya Univ. (Japan). School of Engineering

    1997-03-01

    The International Effective Delayed Neutron Fraction (βeff) Benchmark Experiments have been carried out at the Fast Critical Assembly of the Japan Atomic Energy Research Institute since 1995. Researchers from six countries, namely France, Italy, Russia, the U.S.A., Korea, and Japan, participate in this FCA project. Each team applies its own experimental method, such as the Frequency Method, Rossi-α Method, Nelson Number Method, Cf Neutron Source Method, or Covariance Method. In this report these experimental methods are reviewed. (author)

  16. Benchmarking local healthcare-associated infections: Available benchmarks and interpretation challenges

    Directory of Open Access Journals (Sweden)

    Aiman El-Saed

    2013-10-01

    Summary: Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restriction. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when adopting such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. Keywords: Benchmarking, Comparison, Surveillance, Healthcare-associated infections

  17. Economic incentives for additional critical experimentation applicable to fuel dissolution

    International Nuclear Information System (INIS)

    Mincey, J.F.; Primm, R.T. III; Waltz, W.R.

    1981-01-01

    Fuel dissolution operations involving soluble absorbers for criticality control are among the most difficult for which to establish economical subcritical limits. The paucity of applicable experimental data can significantly hinder a precise determination of the bias in the method chosen for calculation of the required soluble absorber concentration. Resorting to overly conservative bias estimates can result in excessive concentrations of soluble absorbers. Such conservatism can be costly, especially if soluble absorbers are used in a throw-away fashion. An economic scoping study is presented which demonstrates that additional critical experimentation will likely lead to reductions in the soluble absorber (i.e., gadolinium) purchase costs for dissolution operations. The results indicate that the anticipated savings may be more than enough to pay for the experimental costs

  18. OECD/NEA burnup credit criticality benchmarks phase IIIB: Burnup calculations of BWR fuel assemblies for storage and transport

    International Nuclear Information System (INIS)

    Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya

    2002-02-01

    The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated BWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of ±10% relative to the average, although some results, especially for 155Eu and the gadolinium isotopes, exceeded the band and will require further investigation. Pin-wise burnup results agreed well among the participants. The results for the infinite neutron multiplication factor k∞ also agreed well with each other for void fractions of 0 and 40%; however, some results deviated noticeably from the average value for the void fraction of 70%. (author)
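    The ±10% band check described in the abstract amounts to computing each submission's relative deviation from the participant average and flagging those outside the band. A minimal sketch, using hypothetical 155Eu number densities rather than any actual Phase IIIB submissions:

```python
def relative_deviations(results):
    """Percent deviation of each submitted value from the participant average."""
    avg = sum(results) / len(results)
    return [100.0 * (r - avg) / avg for r in results]

def outside_band(results, band_pct=10.0):
    """Indices of submissions falling outside +/- band_pct of the average."""
    return [i for i, d in enumerate(relative_deviations(results))
            if abs(d) > band_pct]

# Hypothetical 155Eu number densities (atoms/b-cm) from four codes:
densities = [1.02e-8, 0.97e-8, 1.00e-8, 1.25e-8]
flagged = outside_band(densities)   # only the fourth value exceeds the band
```

Note the average itself includes any outliers, which is why abstracts of this kind often follow the band check with a closer look at the flagged nuclides.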

  19. OECD/NEA burnup credit criticality benchmarks phase IIIB. Burnup calculations of BWR fuel assemblies for storage and transport

    Energy Technology Data Exchange (ETDEWEB)

    Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2002-02-01

    The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated BWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of ±10% relative to the average, although some results, especially for 155Eu and the gadolinium isotopes, exceeded the band and will require further investigation. Pin-wise burnup results agreed well among the participants. The results for the infinite neutron multiplication factor k∞ also agreed well with each other for void fractions of 0 and 40%; however, some results deviated noticeably from the average value for the void fraction of 70%. (author)

  20. HTR-Proteus Pebble Bed Experimental Program Cores 5,6,7,&8: Columnar Hexagonal Point-on-Point Packing with a 1:2 Moderator-to-Fuel Pebble Ratio

    Energy Technology Data Exchange (ETDEWEB)

    Bess, John D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Sterbentz, James W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Snoj, Luka [Idaho National Lab. (INL), Idaho Falls, ID (United States); Lengar, Igor [Idaho National Lab. (INL), Idaho Falls, ID (United States); Koberl, Oliver [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 5, 6, 7, and 8 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 5, 6, 7, and 8 were evaluated and determined to be acceptable benchmark experiments.

  1. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORES 5, 6, 7, & 8: COLUMNAR HEXAGONAL POINT-ON-POINT PACKING WITH A 1:2 MODERATOR-TO-FUEL PEBBLE RATIO

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2013-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 5, 6, 7, and 8 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 5, 6, 7, and 8 were evaluated and determined to be acceptable benchmark experiments.

  2. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of, and identify potential opportunities to reduce energy use in, laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities, including facilities managers, energy managers, and their engineering consultants. Laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems; for each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating it. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the Laboratories for the 21st Century (Labs21) program, sponsored by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts, including laboratory designers and energy managers.
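    The whole-building metric at the heart of guides like this is typically energy use intensity (annual energy per unit floor area), compared against a peer population. A minimal sketch with made-up numbers (the function names and peer values are illustrative, not taken from the Labs21 database):

```python
def energy_use_intensity(annual_kwh, floor_area_m2):
    """Whole-building metric: annual site energy per unit floor area (kWh/m2/yr)."""
    return annual_kwh / floor_area_m2

def peer_percentile(eui, peer_euis):
    """Fraction of peer buildings using MORE energy per m2 (higher is better)."""
    return sum(1 for p in peer_euis if p > eui) / len(peer_euis)

# Hypothetical lab building: 1.2 GWh/yr over 4000 m2 of floor area
eui = energy_use_intensity(1_200_000, 4000)            # 300 kWh/m2/yr
standing = peer_percentile(eui, [250, 320, 410, 290, 500])
```

In practice such a metric would be normalized further (climate zone, lab-area fraction, ventilation requirements) before actions are inferred from it.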

  3. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well-controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant-specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  4. Solution High-Energy Burst Assembly (SHEBA) results from subprompt critical experiments with uranyl fluoride fuel

    International Nuclear Information System (INIS)

    Cappiello, C.C.; Butterfield, K.B.; Sanchez, R.G.

    1997-10-01

    The Solution High-Energy Burst Assembly (SHEBA) was originally constructed in 1980 and was designed to be a clean free-field-geometry, right-circular, cylindrically symmetric critical assembly employing U(5%)O2F2 solution as fuel. A second version of SHEBA, employing the same fuel but equipped with a fuel pump and shielding pit, was commissioned in 1993. This report includes data and operating experience for the 1993 SHEBA only. Solution-fueled benchmark work focused on the development of experimental measurements for the characterization of SHEBA; a summary of the results is given. A description of the system and the experimental results are given in some detail in the report. Experiments were designed to: (1) study the behavior of nuclear excursions in a low-enrichment solution, (2) evaluate accidental criticality alarm detectors for fuel-processing facilities, (3) provide radiation spectra and dose measurements to benchmark radiation transport calculations on a low-enrichment solution system similar to centrifuge enrichment plants, and (4) provide radiation fields to calibrate personnel dosimetry. 15 refs., 37 figs., 10 tabs

  5. Benchmark experiments at ASTRA facility on definition of space distribution of 235U fission reaction rate

    International Nuclear Information System (INIS)

    Bobrov, A. A.; Boyarinov, V. F.; Glushkov, A. E.; Glushkov, E. S.; Kompaniets, G. V.; Moroz, N. P.; Nevinitsa, V. A.; Nosov, V. I.; Smirnov, O. N.; Fomichenko, P. A.; Zimin, A. A.

    2012-01-01

    Results of critical experiments performed at five ASTRA facility configurations modeling high-temperature helium-cooled graphite-moderated reactors are presented. Results of experiments on the definition of the spatial distribution of the 235U fission reaction rate, performed at four of these five configurations, are presented in more detail. Analysis of the available information showed that all experiments on criticality at these five configurations are acceptable for use as critical benchmark experiments. All experiments on the definition of the spatial distribution of the 235U fission reaction rate are acceptable for use as physical benchmark experiments. (authors)

  6. HEU benchmark calculations and LEU preliminary calculations for IRR-1

    International Nuclear Information System (INIS)

    Caner, M.; Shapira, M.; Bettan, M.; Nagler, A.; Gilat, J.

    2004-01-01

    We performed neutronics calculations for the Soreq Research Reactor, IRR-1. The calculations were done for the purpose of upgrading and benchmarking our codes and methods. The codes used were mainly WIMS-D/4 for cell calculations and the three-dimensional diffusion code CITATION for full core calculations. The experimental flux was obtained by gold wire activation methods and compared with our calculated flux profile. The IRR-1 is loaded with highly enriched uranium fuel assemblies of the plate type. In the framework of preparation for conversion to low enrichment fuel, additional calculations were done assuming the presence of LEU fresh fuel. In these preliminary calculations we investigated the effect on the criticality and flux distributions of the increase of U-238 loading and the corresponding uranium density. (author)

  7. Plans and equipment for criticality measurements on plutonium-uranium nitrate solutions

    International Nuclear Information System (INIS)

    Lloyd, R.C.; Clayton, E.D.; Durst, B.M.

    1982-01-01

    Data from critical experiments on plutonium-uranium nitrate solutions are required to accurately establish criticality control limits for use in the processing and handling of breeder-type fuels. Since the fuel must be processed both safely and economically, it is necessary that criticality considerations be based on accurate experimental data. Previous experiments have been reported on plutonium-uranium solutions with Pu weight ratios extending up to some 38 wt%. No data have been presented, however, for plutonium-uranium nitrate solutions beyond this Pu weight ratio. The current research emphasis is on the procurement of criticality data for plutonium-uranium mixtures up to 60 wt% Pu that will serve as the basis for handling criticality problems subsequently encountered in the development of technology for the breeder community. Such data will also provide necessary benchmarks for data testing and analysis of integral criticality experiments for verification of the analytical techniques used in support of criticality control. Experiments are currently being performed with plutonium-uranium nitrate solutions in stainless steel cylindrical vessels and an expandable slab tank system. A schematic of the experimental systems is presented

  8. Benchmark experiment on vanadium assembly with D-T neutrons. In-situ measurement

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio; Kasugai, Yoshimi; Konno, Chikara; Wada, Masayuki; Oyama, Yukio; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Murata, Isao; Kokooo; Takahashi, Akito

    1998-03-01

    Fusion neutronics benchmark experimental data on vanadium were obtained for neutrons in almost entire energies as well as secondary gamma-rays. Benchmark calculations for the experiment were performed to investigate validity of recent nuclear data files, i.e., JENDL Fusion File, FENDL/E-1.0 and EFF-3. (author)

  9. Core Benchmarks Descriptions

    International Nuclear Information System (INIS)

    Pavlovichev, A.M.

    2001-01-01

    Current regulations require that the design of new fuel cycles for nuclear power installations include a calculational justification performed by certified computer codes. This guarantees that the calculated results will be within the limits of the declared uncertainties indicated in a certificate issued by Gosatomnadzor of the Russian Federation (GAN) for the corresponding computer code. A formal justification of declared uncertainties is the comparison of results obtained by a commercial code with the results of experiments, or with calculational tests computed to a defined uncertainty by certified precision codes such as those of the MCU type. The current level of international cooperation is enlarging the bank of experimental and calculational benchmarks acceptable for certification of commercial codes used to design fuel loadings with MOX fuel. In particular, work is practically finished on forming the list of calculational benchmarks for certification of the TVS-M code as applied to MOX fuel assembly calculations. The results of these activities are presented

  10. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text: The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the roles of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and in combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are noted throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.
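    A band-rating scheme of the kind the paper proposes maps a household's per-capita daily consumption onto a letter band. The sketch below uses illustrative thresholds; they are not the paper's bands nor those of the Code for Sustainable Homes:

```python
def water_band(litres_per_person_day, thresholds=None):
    """Map daily per-capita water use to a band letter (A best, F worst).

    The default thresholds are illustrative only.
    """
    if thresholds is None:
        thresholds = [(80, "A"), (100, "B"), (120, "C"), (140, "D"), (160, "E")]
    for limit, band in thresholds:
        if litres_per_person_day <= limit:
            return band
    return "F"
```

Any demand-reducing measure (behavioural or technological) that lowers litres per person per day moves the household toward a better band, which is the property the paper exploits.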

  11. Validation of MCNP6.1 for Criticality Safety of Pu-Metal, -Solution, and -Oxide Systems

    Energy Technology Data Exchange (ETDEWEB)

    Kiedrowski, Brian C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Conlin, Jeremy Lloyd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kahler, III, Albert C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kersting, Alyssa R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, Donald K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Walker, Jessie L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-05-13

    Guidance is offered to the Los Alamos National Laboratory Nuclear Criticality Safety division towards developing an Upper Subcritical Limit (USL) for MCNP6.1 calculations with ENDF/B-VII.1 nuclear data for three classes of problems: Pu-metal, -solution, and -oxide systems. A benchmark suite containing 1,086 benchmarks is prepared, and a sensitivity/uncertainty (S/U) method with a generalized linear least squares (GLLS) data adjustment is used to reject outliers, bringing the total to 959 usable benchmarks. For each class of problem, S/U methods are used to select relevant experimental benchmarks, and the calculational margin is computed using extreme value theory. A portion of the margin of subcriticality is defined considering both a detection limit for errors in codes and data and uncertainty/variability in the nuclear data library. The latter employs S/U methods with a GLLS data adjustment to find representative nuclear data covariances constrained by integral experiments, which are then used to compute uncertainties in keff from nuclear data. The USLs for the classes of problems are as follows: Pu metal, 0.980; Pu solutions, 0.973; dry Pu oxides, 0.978; dilute Pu oxide-water mixes, 0.970; and intermediate-spectrum Pu oxide-water mixes, 0.953.
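    The S/U and extreme-value machinery described above is far beyond a snippet, but the final step, turning a population of calculated benchmark keff values into a USL, can be sketched. This toy version substitutes a normal lower tolerance bound for the report's extreme value theory and subtracts a fixed administrative margin; the numbers and the simplification are both assumptions for illustration:

```python
import statistics

def upper_subcritical_limit(keffs, tolerance_factor=2.0, admin_margin=0.02):
    """Simplified USL from calculated keff for critical benchmarks (ideal = 1.0).

    A positive calculational bias is not credited; a normal lower tolerance
    bound stands in for the extreme value treatment used in the report.
    """
    mean = statistics.mean(keffs)
    std = statistics.stdev(keffs)
    bias = mean - 1.0
    lower_bound = 1.0 + min(bias, 0.0) - tolerance_factor * std
    return lower_bound - admin_margin

# Hypothetical calculated keff values for critical benchmarks:
usl = upper_subcritical_limit([0.998, 1.001, 0.995, 0.999, 1.002])
```

A real validation would also weight benchmarks by their relevance to the application (the role of the S/U selection step) before computing the margin.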

  12. Experimental verification of boundary conditions for numerical simulation of airflow in a benchmark ventilation channel

    Directory of Open Access Journals (Sweden)

    Lizal Frantisek

    2016-01-01

    Full Text: Correct definition of boundary conditions is crucial for the appropriate simulation of a flow. It is common practice to simulate a sufficiently long upstream entrance section instead of experimentally investigating the actual conditions at the boundary of the examined area, in cases where the measurement is either impossible or extremely demanding. We focused on the case of a benchmark channel with a ventilation outlet, which models a regular automotive ventilation system. First, measurements of air velocity and turbulence intensity were performed at the boundary of the examined area, i.e. in the rectangular channel 272.5 mm upstream of the ventilation outlet. Then, the experimentally acquired results were compared with results obtained by numerical simulation of a further upstream entrance section defined according to generally approved theoretical suggestions. The comparison showed that despite the simple geometry and general agreement of average axial velocity, a certain difference was found in the shape of the velocity profile. The difference was attributed to the simplifications of the numerical model and the isotropic turbulence assumption of the turbulence model used. Appropriate recommendations were stated for future work.

  13. Direct Measurements of Quantum Kinetic Energy Tensor in Stable and Metastable Water near the Triple Point: An Experimental Benchmark.

    Science.gov (United States)

    Andreani, Carla; Romanelli, Giovanni; Senesi, Roberto

    2016-06-16

    This study presents the first direct and quantitative measurement of the nuclear momentum distribution anisotropy and the quantum kinetic energy tensor in stable and metastable (supercooled) water near its triple point, using deep inelastic neutron scattering (DINS). From the experimental spectra, accurate line shapes of the hydrogen momentum distributions are derived using an anisotropic Gaussian and a model-independent framework. The experimental results, benchmarked against those obtained for the solid phase, provide state-of-the-art directional values of the hydrogen mean kinetic energy in metastable water. The determinations of the directional kinetic energies in the supercooled phase provide accurate and quantitative measurements of these dynamical observables in the metastable and stable phases, that is, key insight into the physical mechanisms of the hydrogen quantum state in both disordered and polycrystalline systems. The remarkable findings of this study establish novel insight that will further expand the capacity and accuracy of DINS investigations of nuclear quantum effects in water, and represent reference experimental values for theoretical investigations.

  14. Evaluated experimental database on critical heat flux in WWER FA models

    International Nuclear Information System (INIS)

    Artamonov, S.; Sergeev, V.; Volkov, S.

    2015-01-01

    The paper presents a description of the evaluated experimental database on critical heat flux in WWER FA models of new designs. This database was developed on the basis of experimental data obtained in the years 2009-2012. In the course of its development, the database was reviewed in terms of the completeness of the information about the experiments and its compliance with the requirements of Rostekhnadzor regulatory documents. The description of the experimental FA model characteristics and experimental conditions was specified. In addition, the experimental data were statistically processed to reject incorrect points, and the sets of experimental data on critical heat flux (CHF) were compared for different FA models. As a result, for the first time, an evaluated database on CHF in FA models of new designs was developed and complemented with analysis functions; its main purpose is to be used in the development, verification and upgrading of calculation techniques. The developed database incorporates the data of 4183 experimental conditions obtained in 53 WWER FA models of various designs. Keywords: WWER reactor, fuel assembly, CHF, evaluated experimental data, database, statistical analysis. (author)
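    The statistical screening of experimental points mentioned in the abstract can be sketched with a robust outlier test. The paper does not specify its rejection criterion, so the median/MAD screen below is an assumed method, chosen because it avoids the masking that a mean/sigma test suffers on small samples; the CHF values are invented:

```python
import statistics

def reject_outliers(values, z_limit=3.5):
    """Keep points whose modified z-score (median/MAD based) is within z_limit."""
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    if mad == 0:
        return list(values)
    # 0.6745 scales MAD to the standard deviation of a normal distribution
    return [v for v in values if 0.6745 * abs(v - med) / mad <= z_limit]

# Hypothetical CHF points (MW/m2); the last one is a gross outlier:
chf = [1.8, 2.0, 2.1, 1.9, 9.0]
kept = reject_outliers(chf)
```

On this sample a conventional 3-sigma test would keep the 9.0 point, because the outlier inflates the standard deviation it is tested against; the median-based screen does not have that weakness.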

  15. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem

  16. Academic Productivity in Psychiatry: Benchmarks for the H-Index.

    Science.gov (United States)

    MacMaster, Frank P; Swansburg, Rose; Rittenbach, Katherine

    2017-08-01

    Bibliometrics play an increasingly critical role in the assessment of faculty for promotion and merit increases. Bibliometrics is the statistical analysis of publications, aimed at evaluating their impact. The objective of this study is to describe h-index and citation benchmarks in academic psychiatry. Faculty lists were acquired from online resources for all academic departments of psychiatry listed as having residency training programs in Canada (as of June 2016). Potential authors were then searched on Web of Science (Thomson Reuters) for their corresponding h-index and total number of citations. The sample included 1683 faculty members in academic psychiatry departments. Restricting the sample to those with a rank of assistant, associate, or full professor left 1601 faculty members (assistant = 911, associate = 387, full = 303). The h-index and total citations differed significantly by academic rank: both were highest at the full professor rank, followed by associate, then assistant. The range within each rank, however, was large. This study provides initial benchmarks for the h-index and total citations in academic psychiatry. Regardless of any controversies or criticisms of bibliometrics, they increasingly influence promotion, merit increases, and grant support. As such, benchmarking by specialty is needed in order to provide context.
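
    The h-index underlying these benchmarks is straightforward to compute from an author's citation list; a minimal sketch (the citation counts are illustrative, not data from this study):

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank        # this paper still clears the threshold
        else:
            break           # citations are sorted, so no later paper can
    return h

print(h_index([25, 8, 5, 3, 3]))  # -> 3: three papers with at least 3 citations
```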

  17. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes ("Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks," R.E. Glass, Sandia National Laboratories, 1985; "Sample Problem Manual for Benchmarking of Cask Analysis Codes," R.E. Glass, Sandia National Laboratories, 1988; "Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages," R.E. Glass, et al., Sandia National Laboratories, 1988) used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in "Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks," R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  18. International Benchmark on Pressurised Water Reactor Sub-channel and Bundle Tests. Volume II: Benchmark Results of Phase I: Void Distribution

    International Nuclear Information System (INIS)

    Rubin, Adam; Avramova, Maria; Velazquez-Lozada, Alexander

    2016-03-01

    This report summarised the first phase of the Nuclear Energy Agency (NEA) and US Nuclear Regulatory Commission benchmark based on the NUPEC PWR Sub-channel and Bundle Tests (PSBT), which was intended to provide data for the verification of void distribution models in participants' codes. This phase was composed of four exercises. Exercise 1: steady-state single sub-channel benchmark; Exercise 2: steady-state rod bundle benchmark; Exercise 3: transient rod bundle benchmark; and Exercise 4: a pressure drop benchmark. The experimental data provided to the participants of this benchmark are from a series of void measurement tests using full-size mock-ups for both Boiling Water Reactors (BWRs) and Pressurised Water Reactors (PWRs). These tests were performed from 1987 to 1995 by the Nuclear Power Engineering Corporation (NUPEC) in Japan and made available by the Japan Nuclear Energy Safety Organisation (JNES) for the purposes of this benchmark, which was organised by Pennsylvania State University. Twenty-one institutions from nine countries participated in this benchmark. Seventeen different computer codes were used in Exercises 1, 2, 3 and 4, among them porous media, sub-channel, systems thermal-hydraulic and Computational Fluid Dynamics (CFD) codes. It was observed that the codes tended to overpredict the thermal equilibrium quality at lower elevations and underpredict it at higher elevations. There was also a tendency to overpredict void fraction at lower elevations and underpredict it at higher elevations for the bundle test cases. The overprediction of void fraction at low elevations is likely caused by the x-ray densitometer measurement method used. Under sub-cooled boiling conditions, the voids accumulate at heated surfaces (and are therefore not seen in the centre of the sub-channel, where the measurements are being taken), so the experimentally determined void fractions will be lower than the actual void fraction. Some of the best

  19. International Benchmark based on Pressurised Water Reactor Sub-channel and Bundle Tests. Volume III: Departure from Nucleate Boiling

    International Nuclear Information System (INIS)

    Rubin, Adam; Avramova, Maria; Velazquez-Lozada, Alexander

    2016-03-01

    This report summarised the second phase of the Nuclear Energy Agency (NEA) and Nuclear Regulatory Commission (NRC) benchmark based on the NUPEC PWR Sub-channel and Bundle Tests (PSBT), which was intended to provide data for the verification of Departure from Nucleate Boiling (DNB) prediction in existing thermal-hydraulics codes and to provide direction in the development of future methods. This phase was composed of three exercises. Exercise 1: fluid temperature benchmark; Exercise 2: steady-state rod bundle benchmark; and Exercise 3: transient rod bundle benchmark. The experimental data provided to the participants of this benchmark are from a series of void measurement tests using full-size mock-ups for both BWRs and PWRs. These tests were performed from 1987 to 1995 by the Nuclear Power Engineering Corporation (NUPEC) in Japan and made available by the Japan Nuclear Energy Safety Organisation (JNES) for the purposes of this benchmark, which was organised by Pennsylvania State University. Nine institutions from seven countries participated in this benchmark. Nine different computer codes were used in Exercises 1, 2 and 3, among them porous media, sub-channel and systems thermal-hydraulic codes. The improvement of FLICA-OVAP (sub-channel) over FLICA (sub-channel) was noticeable; the main difference between the two is that FLICA-OVAP implicitly assigns the flow regime based on drift flux, while FLICA assumes single-phase flow. In Exercises 2 and 3, the codes were generally able to predict the Departure from Nucleate Boiling (DNB) power as well as the axial location of the onset of DNB (for the steady-state cases) and the time of DNB (for the transient cases). It was noted that the codes that used the Electric Power Research Institute (EPRI) Critical Heat Flux (CHF) correlation had the lowest mean error in Exercise 2 for the predicted DNB power.

  20. A large-scale benchmark of gene prioritization methods.

    Science.gov (United States)

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for the identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
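
    The GO-based retrospective strategy can be sketched in miniature: hide each member gene of an annotation term in turn, seed the prioritizer with the remaining members, and record the hidden gene's rank among the candidates. This is a schematic illustration with a toy scoring function, not the FunCoup-based implementation from the paper:

```python
def leave_one_out_ranks(term_genes, all_genes, score):
    """For each member gene of a GO term, hide it, seed the prioritizer with
    the rest, and record the hidden gene's rank (1 = best) among candidates."""
    ranks = []
    for hidden in term_genes:
        seeds = [g for g in term_genes if g != hidden]
        candidates = [g for g in all_genes if g not in seeds]
        ordered = sorted(candidates, key=lambda g: score(g, seeds), reverse=True)
        ranks.append(ordered.index(hidden) + 1)
    return ranks

# Toy scorer standing in for a network-diffusion method: count shared letters
score = lambda g, seeds: sum(len(set(g) & set(s)) for s in seeds)
print(leave_one_out_ranks(["abc", "abd"], ["abc", "abd", "xyz", "pqr"], score))
```

The distribution of these ranks (or a derived curve such as ROC/AUC) is then the performance measure by which tools are compared.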

  1. Consultancy Meeting on Preparation of the Final Technical Document of the IAEA CRP on Analytical and Experimental Benchmark Analysis of Accelerator Driven Systems

    International Nuclear Information System (INIS)

    2014-01-01

    With the objective of studying the major physics phenomena of the spallation source and its coupling to a subcritical system, between 2005 and 2010 the IAEA carried out a Coordinated Research Project (CRP) called “Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems (ADS)”. Twenty-seven institutions from 18 Member States (Argentina, Belarus, Belgium, Brazil, China, France, Germany, Greece, Hungary, Italy, Japan, Netherlands, Poland, Russian Federation, Spain, Sweden, Ukraine and the USA) contributed to the CRP, performing a number of analytical and experimental benchmark activities. The main objective of the CRP was to develop, verify and validate calculation tools able to perform detailed ADS calculations, from the high-energy proton beam down to thermal neutron energies. The purpose of this meeting was to: - Collect and review all the available contributions produced by the CRP participants; - Define the structure and content of the final TECDOC; - Assemble the first draft of the TECDOC; - Identify important missing parts; - Distribute tasks and responsibilities for drafting and editing the different sections and sub-sections of the TECDOC; - Agree on the time schedule for the TECDOC finalization, review and publication. The participants were requested to contribute to all the foreseen tasks.

  2. Description and exploitation of benchmarks involving 149Sm, a fission product taking part of the burn up credit in spent fuels

    International Nuclear Information System (INIS)

    Anno, J.; Poullot, G.

    1995-01-01

    Up to now, there was no benchmark to validate the fission product (FP) cross sections in criticality safety calculations. The Institute for Protection and Nuclear Safety (IPSN) has begun an experimental program on 6 FPs (103Rh, 133Cs, 143Nd, 149Sm, 152Sm, and 155Gd, daughter of 155Eu), which alone account for a decrease of reactivity equal to half that of all the FPs in spent fuels (except Xe and I). Presented here are the experiments with 149Sm and the results obtained with the APOLLO I-MORET III calculation codes. 11 experiments were carried out in a zircaloy tank of 3.5 l containing slightly nitric acid solutions of samarium (96.9 wt% 149Sm) at concentrations of 0.1048, 0.2148 and 0.6262 g/l. The tank was placed in the middle of arrays of UO2 rods (4.742 wt% 235U) at a square pitch of 13 mm. The underwater height of the rods is the critical parameter. In addition, 7 experiments were performed with the same apparatus with water and boron, proving a good experimental representativeness and a good accuracy of the calculations. As the reactivity worth of the Sm tank is between 2000 and 6000 x 10^-5, the benchmarks are well representative, and the cumulative absorption ratios show that 149Sm is well qualified below 1 eV. (authors). 8 refs., 7 figs., 6 tabs
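
    The reactivity worth quoted above is the difference in static reactivity, rho = (k - 1)/k, between two configurations; a minimal sketch with hypothetical k-eff values (not the measured ones from these experiments):

```python
def reactivity_worth_pcm(k_ref, k_pert):
    """Reactivity worth between a reference and a perturbed configuration,
    in units of 1e-5 dk/k (pcm), using rho = (k - 1)/k for each state."""
    rho_ref = (k_ref - 1.0) / k_ref
    rho_pert = (k_pert - 1.0) / k_pert
    return (rho_ref - rho_pert) * 1e5

# Hypothetical: inserting the Sm solution lowers k-eff from 1.000 to 0.975
print(round(reactivity_worth_pcm(1.000, 0.975)))  # ~2564, inside the quoted range
```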

  3. Spectral measurements in critical assemblies: MCNP specifications and calculated results

    Energy Technology Data Exchange (ETDEWEB)

    Stephanie C. Frankle; Judith F. Briesmeister

    1999-12-01

    Recently, a suite of 86 criticality benchmarks for the Monte Carlo N-Particle (MCNP) transport code was developed, and the results of testing the ENDF/B-V and ENDF/B-VI data (through Release 2) were published. In addition to the standard k_eff measurements, other experimental measurements were performed on a number of these benchmark assemblies. In particular, the Cross Section Evaluation Working Group (CSEWG) specifications contain experimental data for neutron leakage and central-flux measurements, central-fission ratio measurements, and activation ratio measurements. Additionally, there exists another set of fission reaction-rate measurements performed at the National Institute of Standards and Technology (NIST) utilizing a 252Cf source. This report will describe the leakage and central-flux measurements and show a comparison of experimental data to MCNP simulations performed using the ENDF/B-V and B-VI (Release 2) data libraries. Central-fission and activation reaction-rate measurements will be described, and the comparison of experimental data to MCNP simulations using available data libraries for each reaction of interest will be presented. Finally, the NIST fission reaction-rate measurements will be described. A comparison of MCNP results published previously with the current MCNP simulations will be presented for the NIST measurements, and a comparison of the current MCNP simulations to the experimental measurements will be presented.

  4. Spectral measurements in critical assemblies: MCNP specifications and calculated results

    International Nuclear Information System (INIS)

    Frankle, Stephanie C.; Briesmeister, Judith F.

    1999-01-01

    Recently, a suite of 86 criticality benchmarks for the Monte Carlo N-Particle (MCNP) transport code was developed, and the results of testing the ENDF/B-V and ENDF/B-VI data (through Release 2) were published. In addition to the standard k_eff measurements, other experimental measurements were performed on a number of these benchmark assemblies. In particular, the Cross Section Evaluation Working Group (CSEWG) specifications contain experimental data for neutron leakage and central-flux measurements, central-fission ratio measurements, and activation ratio measurements. Additionally, there exists another set of fission reaction-rate measurements performed at the National Institute of Standards and Technology (NIST) utilizing a 252Cf source. This report will describe the leakage and central-flux measurements and show a comparison of experimental data to MCNP simulations performed using the ENDF/B-V and B-VI (Release 2) data libraries. Central-fission and activation reaction-rate measurements will be described, and the comparison of experimental data to MCNP simulations using available data libraries for each reaction of interest will be presented. Finally, the NIST fission reaction-rate measurements will be described. A comparison of MCNP results published previously with the current MCNP simulations will be presented for the NIST measurements, and a comparison of the current MCNP simulations to the experimental measurements will be presented
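
    Benchmark results of this kind are commonly summarized as C/E (calculated-to-experimental) ratios with combined uncertainties; a minimal sketch with hypothetical k-eff values, not results from the MCNP suite itself:

```python
import math

def c_over_e(calc, calc_sigma, exp, exp_sigma):
    """C/E ratio and its 1-sigma uncertainty; relative errors of the
    calculation and the experiment add in quadrature."""
    ratio = calc / exp
    sigma = ratio * math.sqrt((calc_sigma / calc) ** 2 + (exp_sigma / exp) ** 2)
    return ratio, sigma

# Hypothetical values: calculated k-eff vs a critical (k = 1) experiment
ratio, sigma = c_over_e(0.9985, 0.0005, 1.0000, 0.0010)
print(f"C/E = {ratio:.4f} +/- {sigma:.4f}")
```

A C/E consistent with unity within the combined uncertainty indicates that the code and library reproduce the benchmark.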

  5. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool in the search for a point of reference necessary to assess an institution's competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. The thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in a higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  6. Critical experiments supporting close proximity water storage of power reactor fuel. Technical progress report

    International Nuclear Information System (INIS)

    Baldwin, M.N.; Hoovler, G.S.; Eng, R.L.; Welfare, F.G.

    1979-07-01

    Close-packed storage of LWR fuel assemblies is needed in order to expand the capacity of existing underwater storage pools. This increased capacity is required to accommodate the large volume of spent fuel produced by prolonged onsite storage. To provide benchmark criticality data in support of this effort, 20 critical assemblies were constructed that simulated a variety of close-packed LWR fuel storage configurations. Criticality calculations using the Monte Carlo KENO-IV code were performed to provide an analytical basis for comparison with the experimental data. Each critical configuration is documented in sufficient detail to permit the use of these data in validating calculational methods according to ANSI Standard N16.9-1975

  7. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    The term benchmarking is encountered in the implementation of total quality management (TQM), in Indonesian termed holistic quality management, because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  8. A computer code package for Monte Carlo photon-electron transport simulation: Comparisons with experimental benchmarks

    International Nuclear Information System (INIS)

    Popescu, Lucretiu M.

    2000-01-01

    A computer code package (PTSIM) for particle transport Monte Carlo simulation was developed using object-oriented techniques of design and programming. A flexible system for the simulation of coupled photon-electron transport, facilitating the development of efficient simulation applications, was obtained. For photons, Compton and photo-electric effects, pair production and Rayleigh interactions are simulated, while for electrons a class II condensed history scheme was considered, in which catastrophic interactions (Moeller electron-electron interaction, bremsstrahlung, etc.) are treated in detail and all other interactions, with reduced individual effect on the electron history, are grouped together using the continuous slowing-down approximation and energy straggling theories. Electron angular straggling is simulated using Moliere theory or a mixed model in which scatters at large angles are treated as distinct events. Comparisons with experimental benchmarks for electron transmission, for bremsstrahlung emission energy and angular spectra, and for dose calculations are presented
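
    The class II condensed-history scheme described above splits electron interactions at a threshold: sub-threshold energy transfers are lumped into a restricted stopping power applied continuously, while the rarer "catastrophic" events are sampled individually. A schematic single step, with placeholder physics functions (an assumed sketch, not PTSIM's actual implementation):

```python
import math
import random

def class2_step(energy_MeV, step_cm, restricted_dedx, sigma_catastrophic):
    """One schematic class II condensed-history step.
    restricted_dedx(E) -> sub-threshold stopping power [MeV/cm];
    sigma_catastrophic(E) -> macroscopic cross section for discrete events [1/cm].
    Returns the energy after continuous slowing-down and whether a discrete
    (e.g. Moller or bremsstrahlung) event occurred within the step."""
    energy = energy_MeV - restricted_dedx(energy_MeV) * step_cm
    # Distance to the next catastrophic event, sampled from an exponential
    distance = -math.log(random.random()) / sigma_catastrophic(energy_MeV)
    return energy, distance < step_cm

random.seed(1)  # reproducible demo with constant placeholder physics
energy, had_event = class2_step(10.0, 0.5, lambda e: 2.0, lambda e: 0.1)
print(energy, had_event)
```

A full implementation would also sample the discrete event type and apply angular straggling per step; this sketch only shows the continuous/discrete split.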

  9. Plant improvements through the use of benchmarking analysis

    International Nuclear Information System (INIS)

    Messmer, J.R.

    1993-01-01

    As utilities approach the turn of the century, customer and shareholder satisfaction is threatened by rising costs. Environmental compliance expenditures, coupled with low load growth and aging plant assets, are forcing utilities to operate existing resources in a more efficient and productive manner. PSI Energy set out in the spring of 1992 on a benchmarking mission to compare four major coal-fired plants against others of similar size and makeup, with the goal of finding the best operations in the country. Following extensive analysis of the 'Best in Class' operations, detailed goals and objectives were established for each plant in seven critical areas. Three critical processes requiring rework were identified and required an integrated effort from all plants. The Plant Improvement process has already resulted in higher operational productivity, an increased emphasis on planning, and lower costs due to effective material management. While every company seeks improvement, goals are often set in an ambiguous manner. Benchmarking aids in setting realistic goals based on others' actual accomplishments. This paper describes how the utility's short-term goals will move it toward being a lower-cost producer

  10. An experimental study on critical heat flux in a hemispherical narrow gap

    International Nuclear Information System (INIS)

    Park, R.J.; Lee, S.J.; Kang, K.H.; Kim, J.H.; Kim, S.B.; Kim, H.D.; Jeong, J.H.

    2000-01-01

    An experimental study of CHFG (Critical Heat Flux in Gap) has been performed to investigate the inherent cooling mechanism using distilled water and Freon R-113 in hemispherical narrow gaps. As a separate-effect test for the CHFG test, a CCFL (Counter-Current Flow Limit) test has also been performed to confirm the mechanism of the CHF in narrow annular gaps of large diameter. The CHFG test results have shown that an increase in the gap thickness leads to an increase in critical power. The pressure effect on the critical power was found to be much milder than predicted by the CHF correlations of other studies. In the CCFL experiment, the occurrence of CCFL was correlated with the Wallis parameter, which was assumed to correspond to the critical power in the CHFG experiment. The measured values of critical power in the CHFG tests are much lower than the CCFL experimental data and the predictions made by empirical CHF correlations. (author)
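
    The Wallis parameter mentioned above is the dimensionless superficial velocity commonly used to correlate CCFL data, j* = j * sqrt(rho) / sqrt(g * D * (rho_l - rho_g)); a minimal sketch with illustrative steam/water numbers (assumed values, not the CHFG test conditions):

```python
import math

def wallis_jstar(j, rho_phase, rho_liquid, rho_gas, diameter, g=9.81):
    """Wallis dimensionless superficial velocity j* for one phase.
    j: superficial velocity [m/s], densities [kg/m3], diameter [m]."""
    return j * math.sqrt(rho_phase) / math.sqrt(g * diameter * (rho_liquid - rho_gas))

# Illustrative: steam at ~1 bar (rho ~0.6 kg/m3) against saturated water
jg_star = wallis_jstar(j=1.5, rho_phase=0.6, rho_liquid=958.0, rho_gas=0.6,
                       diameter=0.05)
print(round(jg_star, 4))
```

Flooding correlations of the Wallis type then take the form sqrt(jg*) + m*sqrt(jf*) = C, with m and C fitted to the geometry.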

  11. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  12. Analyses and results of the OECD/NEA WPNCS EGUNF benchmark phase II. Technical report; Analysen und Ergebnisse zum OECD/NEA WPNCS EGUNF Benchmark Phase II. Technischer Bericht

    Energy Technology Data Exchange (ETDEWEB)

    Hannstein, Volker; Sommer, Fabian

    2017-05-15

    The report summarizes the studies performed and the results obtained in the frame of the Phase II benchmarks of the Expert Group on Used Nuclear Fuel (EGUNF) of the Working Party on Nuclear Criticality Safety (WPNCS) of the Nuclear Energy Agency (NEA) of the Organisation for Economic Co-operation and Development (OECD). The studies specified within the benchmarks were carried out in full. The scope of the benchmarks was the comparison of calculations for a generic BWR fuel assembly with gadolinium-bearing fuel rods, performed with several computer codes and cross-section libraries by different international working groups and institutions. The computational model used allows the accuracy of fuel rod inventory calculations to be evaluated, together with their influence on BWR burnup credit calculations.

  13. Gas cooled fast reactor benchmarks for JNC and Cea neutronic tools assessment

    International Nuclear Information System (INIS)

    Rimpault, G.; Sugino, K.; Hayashi, H.

    2005-01-01

    In order to verify the adequacy of JNC and Cea computational tools for the definition of GCFR (gas cooled fast reactor) core characteristics, GCFR neutronic benchmarks have been performed. The benchmarks have been carried out on two different cores: 1) a conventional Gas-Cooled fast Reactor (EGCR) core with pin-type fuel, and 2) an innovative He-cooled Coated-Particle Fuel (CPF) core. Core characteristics being studied include criticality (effective multiplication factor, or K-effective), instantaneous breeding gain (BG), core Doppler effect, and coolant depressurization reactivity. K-effective and coolant depressurization reactivity at the EOEC (End Of Equilibrium Cycle) state were calculated, since these values are the most critical characteristics in the core design. In order to check the influence of differences between the depletion calculation systems, a simple depletion calculation benchmark was performed, in which values such as burnup reactivity loss and the mass balance of heavy metals and fission products (FP) were calculated. The core design characteristics calculated by the JNC and Cea sides agree quite satisfactorily for the purposes of a core conceptual design study. Potential features for improving the GCFR computational tools, such as the way to calculate the breeding gain accurately, were discovered during the course of this benchmark. Different ways to improve the accuracy of the calculations have also been identified. In particular, investigation of the nuclear data for steel is important for the EGCR core, and of lumped fission products in both cores. The outcome of this benchmark is already satisfactory and will help to design GCFR cores more precisely. (authors)

  14. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

    In two articles, an overview is given of the benchmarking activities in the Dutch industry and energy sector. In benchmarking, the operational processes of competing businesses are compared in order to improve one's own performance. Benchmark covenants on energy efficiency between the Dutch government and industrial sectors are contributing to a growing number of benchmark surveys in the energy-intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies

  15. Experimental comparison of the critical ionization velocity in atomic and molecular gases

    International Nuclear Information System (INIS)

    Axnaes, I.

    1978-08-01

    The critical ionization velocity u_c of Ne, Kr, Xe, Cl2, O2, CO, CO2, NH3 and H2O is investigated experimentally in a coaxial plasma gun. Together with experimental data obtained in earlier experiments, the present results make it possible to make a systematic comparison between the critical ionization velocities of atomic and molecular gases. It is found that atomic and molecular gases tend to have critical ionization velocities which are respectively smaller and larger than the theoretical values. The current dependence of u_c is found to be different for atomic and molecular gases. A number of atomic and molecular processes relevant to the experiment are discussed
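
    The theoretical values referred to are Alfvén's critical ionization velocity, at which a neutral's kinetic energy relative to the plasma equals its ionization energy, v_c = sqrt(2*e*U_i/m); a quick check for atomic hydrogen using standard constants (not data from this experiment):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge [C]
AMU = 1.66053906660e-27      # atomic mass constant [kg]

def critical_velocity(ionization_eV, mass_amu):
    """Alfven critical ionization velocity: kinetic energy of the neutral
    equals its ionization energy, v_c = sqrt(2 e U_i / m)."""
    return math.sqrt(2 * ionization_eV * E_CHARGE / (mass_amu * AMU))

# Atomic hydrogen (U_i = 13.598 eV, m = 1.008 u): ~51 km/s, a commonly
# quoted reference value
print(round(critical_velocity(13.598, 1.008) / 1e3, 1))
```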

  16. Impact of the 235U Covariance Data in Benchmark Calculations

    International Nuclear Information System (INIS)

    Leal, Luiz C.; Mueller, D.; Arbanas, G.; Wiarda, D.; Derrien, H.

    2008-01-01

    The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems

  17. Impact of the 235U covariance data in benchmark calculations

    International Nuclear Information System (INIS)

    Leal, Luiz; Mueller, Don; Arbanas, Goran; Wiarda, Dorothea; Derrien, Herve

    2008-01-01

    The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes' method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems. (authors)
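
    The TSUNAMI-style propagation of cross-section covariance to the multiplication factor follows the standard "sandwich rule", var(k)/k^2 = S C S^T, where S holds relative sensitivities and C is the relative covariance matrix; a minimal sketch with hypothetical two-group numbers (not ENDF/B values):

```python
import math

def keff_relative_uncertainty(S, C):
    """Sandwich rule: relative 1-sigma uncertainty on k-eff from relative
    sensitivities S and relative covariance matrix C, sqrt(S C S^T)."""
    n = len(S)
    var = sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))
    return math.sqrt(var)

# Hypothetical 2-group sensitivities and covariance (illustrative only):
S = [0.30, 0.10]
C = [[4.0e-4, 1.0e-4],
     [1.0e-4, 9.0e-4]]
print(round(keff_relative_uncertainty(S, C), 4))  # relative sigma on k-eff
```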

  18. Methodological and Epistemological Criticism on Experimental Accounting Research Published in Brazil

    Directory of Open Access Journals (Sweden)

    Paulo Frederico Homero Junior

    2016-06-01

    In this article, I analyze 17 experimental studies published in Brazilian accounting journals between 2006 and 2015, in order to develop both epistemological and methodological criticism of these articles. First, I discuss the methodological characteristics of the experiments and the main validity threats they face, analyzing how the selected articles deal with these threats. Overall, this analysis shows a lack of consideration of the validity of the constructs used, difficulty in developing internally valid experiments, and an inability to express confidence in the applicability of the results to contexts other than the experimental one. Then, I compare the positivist theoretical perspective these articles have in common with constructionist conceptions of the social sciences and criticize them on the basis of these notions. I maintain that these articles are characterized by a behaviorist approach, a reified notion of subjectivity, disregard of cultural and historical specificities, and an axiological commitment to submission rather than to the emancipation of people in relation to management control. The paper contributes to the Brazilian accounting literature in two ways: raising awareness of the challenges faced in conducting appropriate experimental designs, and showing how experimental accounting research can be problematic from an epistemological point of view, aiming to promote an interparadigmatic debate that arouses greater awareness of the subject and more robust consideration of such issues by future researchers.

  19. BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Dewberry, R.; Sigg, R.; Casella, V.; Bhatt, N.

    2008-09-29

    these cases the ISOTOPIC analysis program is especially valuable because it allows a rapid, defensible, reproducible analysis of radioactive content without tedious and repetitive experimental measurement of {gamma}-ray transmission through the sample and container at multiple photon energies. The ISOTOPIC analysis technique is also especially valuable in facility holdup measurements, where the acquisition configuration does not fit the accepted generalized geometries for which detector efficiencies have been solved exactly in closed form. Generally, in facility passive {gamma}-ray holdup measurements the acquisition geometry is only approximately reproducible, and the sample (object) is an extensive glovebox or HEPA filter component. In these cases accurate analysis is rarely possible; however, demonstrating that fissile Pu and U content lies within criticality safety guidelines yields valuable operating information. Demonstrating such content can be performed with broad assumptions and within broad factors (e.g. 2-8) of conservatism. The ISOTOPIC analysis program yields rapid, defensible analyses of content within acceptable uncertainty and acceptable conservatism, without extensive repetitive experimental measurements. In addition to transmission correction determinations based on the mass and composition of objects, the ISOTOPIC program performs finite-geometry corrections based on object shape and dimensions. These geometry corrections use finite element summation to approximate the exact closed-form integral. In this report we provide several benchmark comparisons to the same technique provided by the Canberra In Situ Object Counting System (ISOCS) and to the finite-thickness calculations described by Russo in reference 10. This report describes the benchmark comparisons we have performed to demonstrate and document that the ISOTOPIC analysis program yields the results we claim to our customers.
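    The finite-element summation mentioned above, replacing an exact closed-form integral with a sum over small elements, can be illustrated on a geometry where the closed form is known: the solid angle subtended by a circular detector face from an on-axis point source. This is an independent sketch of the idea, not ISOTOPIC's actual algorithm:

```python
import math

def solid_angle_numeric(d, a, n=2000):
    """Sum dOmega = cos(theta) * dA / rho^2 over thin rings of a disk of
    radius a seen from an on-axis point at distance d (midpoint rule),
    standing in for the closed-form integral."""
    dr = a / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr              # ring mid-radius
        rho = math.hypot(d, r)          # source-to-ring distance
        ring_area = 2.0 * math.pi * r * dr
        total += (d / rho) * ring_area / rho**2
    return total

def solid_angle_exact(d, a):
    """Closed form for a disk of radius a seen from an on-axis point."""
    return 2.0 * math.pi * (1.0 - d / math.hypot(d, a))

print(f"numeric: {solid_angle_numeric(10.0, 5.0):.6f}")
print(f"exact:   {solid_angle_exact(10.0, 5.0):.6f}")
```

    The same element-by-element summation works for shapes (gloveboxes, filter housings) with no closed-form efficiency, which is the point of the approach.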

  20. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage, and the final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE, for analysis and plotting of results, is described and some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  1. Operational benchmark for VVER-1000, unit 6, Kozloduy NPP

    International Nuclear Information System (INIS)

    Apostolov, T.; Petrov, B.

    1999-01-01

    Benchmark calculations have been carried out using the 3D nodal code TRAPEZ. Global neutron-physics characteristics of the VVER-1000 core, Kozloduy NPP Unit 6, have been determined taking into account the real loading patterns and operational history of the first three cycles. The code TRLOAD has been used to perform the fuel reloading between any two cycles. The reactor and components descriptions as well as material compositions are given. The results presented include the critical boric acid concentration, the radial power distribution, the axial power distribution for the maximum overload assembly, and the burnup distribution at three different moments during each cycle. Calculated values have been compared with measured data. It is shown that the results obtained by the TRAPEZ code are in good agreement with the experimental data. The information presented could serve as a test case for validation of code packages designed for analyzing the steady-state operation of VVERs. (author)

  2. Validation of new evaluations for the main fuel nuclides using the ICSBEP handbook benchmarks

    International Nuclear Information System (INIS)

    Koscheev, V.; Manturov, G.; Rozhikhin, Y.; Tsibulya, A.

    2008-01-01

    The newest evaluations of the most important fissile isotopes ²³⁵U, ²³⁸U, and ²³⁹Pu, adopted in the ENDF/B-VII.0, JEFF-3.1, JENDL-3.3 and Russian RUSFOND nuclear data files, are compared with each other and tested against a set of integral experiments, among them the removal cross section below the fission threshold of ²³⁸U, the critical infinite medium Scherzo-556, and the ICSBEP Handbook criticality safety benchmarks. Overall, our benchmarking shows that these evaluations are in many cases very close. However, essential differences are observed in the analysis of critical systems with sufficiently high ²³⁸U content; large diversity still exists in the inelastic scattering cross sections. We note that the divergence in the ²³⁸U capture cross section that existed in previous evaluations has practically disappeared

  3. Benchmarking the cad-based attila discrete ordinates code with experimental data of fusion experiments and to the results of MCNP code in simulating ITER

    International Nuclear Information System (INIS)

    Youssef, M. Z.

    2007-01-01

    Attila is a newly developed finite element code for Sn neutron, gamma, and charged-particle transport in 3-D geometry, in which unstructured tetrahedral meshes are generated to describe complex geometry based on CAD input (SolidWorks, Pro/Engineer, etc.). In the present work we benchmark its calculation accuracy by comparing its predictions to the measured data inside two experimental mock-ups bombarded with 14 MeV neutrons. The results are also compared to those based on MCNP calculations. The experimental mock-ups simulate parts of the International Thermonuclear Experimental Reactor (ITER) in-vessel components, namely: (1) the tungsten mock-up configuration (54.3 cm x 46.8 cm x 45 cm), and (2) the ITER shielding blanket followed by the SCM region (simulated by alternating layers of SS316 and copper). In the latter configuration, a high-aspect-ratio rectangular streaming channel was introduced (to simulate streaming paths between ITER blanket modules) which ends with a rectangular cavity. These two fusion-oriented integral experiments were performed at the Frascati Neutron Generator (FNG) facility, Frascati, Italy. In addition, the nuclear performance of the ITER MCNP 'Benchmark' CAD model has been analyzed with Attila to compare its results to those obtained with the CAD-based MCNP approach developed by several ITER participants. The objective of this paper is to compare results from two distinctive 3-D calculation tools using the same nuclear data, FENDL-2.1, and the same response functions for several reaction rates measured in the ITER mock-ups, and to enhance the international neutronics community's confidence in the Attila code and its ability to quantify the nuclear field precisely in large and complex systems such as ITER. Attila has the advantage of providing full flux-mapping visualization everywhere in one run, whereby components subjected to excessive radiation levels and strong streaming paths can be identified. In addition, the

  4. Review of microscopic integral cross section data in fundamental reactor dosimetry benchmark neutron fields

    International Nuclear Information System (INIS)

    Fabry, A.; McElroy, W.N.; Kellogg, L.S.; Lippincott, E.P.; Grundl, J.A.; Gilliam, D.M.; Hansen, G.E.

    1976-01-01

    This paper is intended to review and critically discuss microscopic integral cross section measurement and calculation data for fundamental reactor dosimetry benchmark neutron fields. Specifically the review covers the following fundamental benchmarks: the spontaneous californium-252 fission neutron spectrum standard field; the thermal-neutron induced uranium-235 fission neutron spectrum standard field; the (secondary) intermediate-energy standard neutron field at the center of the Mol-ΣΣ, NISUS, and ITN-ΣΣ facilities; the reference neutron field at the center of the Coupled Fast Reactor Measurement Facility; the reference neutron field at the center of the 10% enriched uranium metal, cylindrical, fast critical; the (primary) Intermediate-Energy Standard Neutron Field

  5. Critical experiments with 4.31 wt % 235U-enriched UO2 rods in highly borated water lattices

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1982-08-01

    A series of critical experiments was performed with 4.31 wt% ²³⁵U-enriched UO₂ fuel rods immersed in water containing various concentrations of boron, ranging up to 2.55 g/l. The boron was added in the form of boric acid (H₃BO₃). Critical experimental data were obtained for two different lattice pitches, for which the water-to-uranium-oxide volume ratios were 1.59 and 1.09. The experiments provide benchmarks on heavily borated systems for use in validating calculational techniques employed in analyzing fuel shipping casks and spent fuel storage systems that may utilize boron for criticality control
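    Converting a soluble-absorber concentration such as the 2.55 g/l quoted above into the atom density a criticality code needs is a standard preprocessing step. A short sketch (natural-boron molar mass assumed; values illustrate the conversion only):

```python
AVOGADRO = 6.02214076e23      # atoms/mol
M_BORON = 10.811              # g/mol, natural boron (assumed value)

def boron_atom_density(grams_per_litre):
    """Atoms of boron per cm^3 for a given solution concentration in g/l
    (1 litre = 1000 cm^3): N = (c / M) * N_A."""
    return grams_per_litre / 1000.0 / M_BORON * AVOGADRO

n = boron_atom_density(2.55)   # the highest concentration in the series
print(f"{n:.4e} atoms/cm^3")
print(f"{n * 1.0e-24:.4e} atoms/(barn*cm)")
```

    Transport codes usually take the second form, atoms/(barn·cm), obtained by multiplying the number density by 1e-24.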

  6. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  7. Higgs Pair Production: Choosing Benchmarks With Cluster Analysis

    CERN Document Server

    Carvalho, Alexandra; Dorigo, Tommaso; Goertz, Florian; Gottardo, Carlo A.; Tosi, Mia

    2016-01-01

    New physics theories often depend on a large number of free parameters. The precise values of those parameters in some cases drastically affect the resulting phenomenology of fundamental physics processes, while in others finite variations can leave it basically invariant at the level of detail experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics of different models; a clustering algorithm using that metric may then allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmark points are then guaranteed to be sensitive to a large area of the parameter space. In this doc...
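    The clustering step described above can be sketched with a plain Lloyd's-algorithm k-means over per-point kinematic summaries, picking as benchmark the member closest to each cluster centroid. The feature values below are hypothetical, and a real analysis would use a multi-dimensional test statistic rather than Euclidean distance:

```python
def dist2(p, q):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=50):
    """Plain k-means (Lloyd's algorithm) with deterministic seeding:
    the first k points serve as initial centroids."""
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: dist2(p, centroids[c]))
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centroids[c] = [sum(x) / len(members) for x in zip(*members)]
    return labels, centroids

def benchmark_points(points, labels, centroids):
    """For each cluster, pick the member closest to the centroid as the
    benchmark parameter point."""
    picks = []
    for c, ctr in enumerate(centroids):
        members = [p for p, l in zip(points, labels) if l == c]
        picks.append(min(members, key=lambda p: dist2(p, ctr)))
    return picks

# Hypothetical 2-D kinematic summaries (e.g. mean invariant mass, mean pT)
# for six parameter-space points of a new-physics model.
feats = [(0.30, 1.0), (0.32, 1.1), (0.29, 0.9),
         (0.80, 2.0), (0.82, 2.1), (0.78, 1.9)]
labels, centroids = kmeans(feats, k=2)
print(benchmark_points(feats, labels, centroids))
```

    Searches tuned to the chosen benchmark points then retain sensitivity over the whole cluster each point represents.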

  8. Use of Sensitivity and Uncertainty Analysis to Select Benchmark Experiments for the Validation of Computer Codes and Data

    International Nuclear Information System (INIS)

    Elam, K.R.; Rearden, B.T.

    2003-01-01

    Sensitivity and uncertainty analysis methodologies under development at Oak Ridge National Laboratory were applied to determine whether existing benchmark experiments adequately cover the area of applicability for the criticality code and data validation of PuO₂ and mixed-oxide (MOX) powder systems. The study examined three PuO₂ powder systems and four MOX powder systems that would be useful for establishing mass limits for a MOX fuel fabrication facility. Using traditional methods to choose experiments for criticality analysis validation, 46 benchmark critical experiments were identified as applicable to the PuO₂ powder systems. However, only 14 experiments were thought to be within the area of applicability for dry MOX powder systems. The applicability of 318 benchmark critical experiments, including the 60 experiments initially identified, was assessed. Each benchmark and powder system was analyzed using the Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) one-dimensional (TSUNAMI-1D) or three-dimensional (TSUNAMI-3D) sensitivity analysis sequences, which will be included in the next release of the SCALE code system. These sensitivity data and cross-section uncertainty data were then processed with TSUNAMI-IP to determine the correlation of each application to each experiment in the benchmarking set. Correlation coefficients are used to assess the similarity between systems and determine the applicability of one system for the code and data validation of another. The applicability of most of the experiments identified using traditional methods was confirmed by the TSUNAMI analysis. In addition, some PuO₂ and MOX powder systems were determined to be within the area of applicability of several other benchmarks that would not have been considered using traditional methods. Therefore, the number of benchmark experiments useful for the validation of these systems exceeds the number previously expected.
The TSUNAMI analysis
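    The correlation coefficient used to judge similarity between an application and a benchmark can be sketched as follows; the sensitivity vectors and covariance matrix are purely illustrative stand-ins for the energy- and nuclide-resolved data a tool like TSUNAMI-IP actually uses:

```python
import math

def c_k(s1, s2, cov):
    """Correlation-type similarity coefficient between two systems,
    c_k = (s1^T C s2) / (sigma1 * sigma2), where s1 and s2 are
    sensitivity vectors and C is the relative covariance matrix of the
    shared nuclear data. Values near 1 indicate high similarity."""
    def quad(a, b):
        n = len(a)
        return sum(a[i] * cov[i][j] * b[j] for i in range(n) for j in range(n))
    return quad(s1, s2) / math.sqrt(quad(s1, s1) * quad(s2, s2))

# Illustrative 3-parameter sensitivities for an application and a benchmark.
app = [0.90, 0.30, -0.10]
bench = [0.85, 0.35, -0.05]
cov = [[1.0e-5, 2.0e-6, 0.0],
       [2.0e-6, 4.0e-5, 0.0],
       [0.0,    0.0,    1.6e-5]]
print(f"c_k = {c_k(app, bench, cov):.3f}")
```

    By the Cauchy-Schwarz inequality the coefficient cannot exceed 1, and a system compared with itself gives exactly 1, which makes the quantity a convenient screening metric for benchmark applicability.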

  9. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  10. Qinshan CANDU NPP outage performance improvement through benchmarking

    International Nuclear Information System (INIS)

    Jiang Fuming

    2005-01-01

    With increasingly fierce competition in the deregulated energy market, the optimization of outage duration has become one of the focal points for nuclear power plant owners around the world, and people are seeking various ways to shorten NPP outage durations. Great efforts have been made in the Light Water Reactor (LWR) family with the concepts of benchmarking and evaluation, which have greatly reduced outage durations and improved outage performance; the average capacity factor of LWRs has improved substantially over the last three decades and is now close to 90%. CANDU (Pressurized Heavy Water Reactor) stations, with their unique features of on-power refueling and of nuclear fuel remaining in the reactor throughout the planned outage, face more stringent safety requirements during planned outages. In addition, these features introduce more variation in the critical path of planned outages at different stations. In order to benchmark against the best practices in the CANDU stations, the Third Qinshan Nuclear Power Company (TQNPC) has initiated a benchmarking program among the CANDU stations aiming to standardize the outage maintenance windows and optimize the outage duration. The initial benchmarking has resulted in the optimization of outage duration at the Qinshan CANDU NPP and the formulation of its first long-term outage plan. This paper describes the benchmarking work that has proven useful for optimizing outage duration at the Qinshan CANDU NPP, and the vision of further optimizing the duration with joint effort from the CANDU community. (authors)

  11. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...

  12. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  13. Benchmarking of venous thromboembolism prophylaxis practice with ENT.UK guidelines.

    Science.gov (United States)

    Al-Qahtani, Ali S

    2017-05-01

    The aim of this study was to benchmark our guidelines for the prevention of venous thromboembolism (VTE) in the ENT surgical population against the ENT.UK guidelines, and also to encourage healthcare providers to utilize benchmarking as an effective method of improving performance. The study design is a prospective descriptive analysis. The setting is a tertiary referral centre (Assir Central Hospital, Abha, Saudi Arabia). In this study, we benchmark our practice guidelines for the prevention of VTE in the ENT surgical population against the ENT.UK guidelines in order to mitigate any gaps. The ENT.UK guidelines (2010) were downloaded from the ENT.UK website, and our guidelines were compared against them to determine whether our performance meets or falls short of them; immediate corrective action is to be taken if there is a quality chasm between the two. The ENT.UK guidelines are evidence-based and up to date, and may serve as a role model for adoption and benchmarking. Our guidelines were accordingly amended to contain all factors required to provide a quality service to ENT surgical patients. While not always given appropriate attention, benchmarking is a useful tool for improving the quality of health care. It allows learning from others' practices and experiences, and works towards closing any quality gaps. In addition, benchmarking clinical outcomes is critical for quality improvement and for informing decisions concerning service provision. It is recommended for inclusion on the list of quality improvement methods for healthcare services.

  14. Benchmarking comparison and validation of MCNP photon interaction data

    Directory of Open Access Journals (Sweden)

    Colling Bethany

    2017-01-01

    The objective of the research was to test available photoatomic data libraries for fusion-relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion-relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6 and 84p if using MCNP-5.

  15. Benchmarking comparison and validation of MCNP photon interaction data

    Science.gov (United States)

    Colling, Bethany; Kodeli, I.; Lilley, S.; Packer, L. W.

    2017-09-01

    The objective of the research was to test available photoatomic data libraries for fusion-relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion-relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6 and 84p if using MCNP-5.
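    The acceptance criterion used above, agreement within 1σ of the combined statistical uncertainty, reduces to a one-line check. A sketch with hypothetical tally values:

```python
import math

def within_1sigma(calc, calc_err, ref, ref_err):
    """True if two values agree within one combined standard deviation,
    the acceptance criterion quoted for the library comparisons."""
    return abs(calc - ref) <= math.sqrt(calc_err**2 + ref_err**2)

# Hypothetical photon-flux tallies (arbitrary units) from two libraries.
print(within_1sigma(1.052, 0.010, 1.046, 0.008))  # agrees: 0.006 < 0.0128
print(within_1sigma(1.052, 0.002, 1.046, 0.002))  # disagrees: 0.006 > 0.0028
```

    The same difference can pass or fail depending on the Monte Carlo statistics, which is why well-converged tallies are needed before attributing discrepancies to the data libraries themselves.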

  16. Benchmarking of the FENDL-3 Neutron Cross-section Data Starter Library for Fusion Applications

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, U., E-mail: ulrich.fischer@kit.edu [Association KIT-Euratom, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Angelone, M. [Associazione ENEA-Euratom, ENEA Fusion Division, Via E. Fermi 27, I-00044 Frascati (Italy); Bohm, T. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States); Kondo, K. [Association KIT-Euratom, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Konno, C. [Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195 (Japan); Sawan, M. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States); Villari, R. [Associazione ENEA-Euratom, ENEA Fusion Division, Via E. Fermi 27, I-00044 Frascati (Italy); Walker, B. [University of Wisconsin-Madison, 1500 Engineering Dr, Madison, WI 53706 (United States)

    2014-06-15

    This paper summarizes the benchmark analyses performed in a joint effort of ENEA (Italy), JAEA (Japan), KIT (Germany), and the University of Wisconsin (USA) on a computational ITER benchmark and a series of 14 MeV neutron benchmark experiments. The computational benchmark revealed a modest increase of the neutron flux levels in the deep penetration regions and a substantial increase of the gas production in steel components. The comparison to experimental results showed good agreement with no substantial differences between FENDL-3.0 and FENDL-2.1 for most of the responses. In general, FENDL-3 shows an improved performance for fusion neutronics applications.

  17. Development of an ICSBEP Benchmark Evaluation, Nearly 20 Years of Experience

    International Nuclear Information System (INIS)

    Briggs, J. Blair; Bess, John D.

    2011-01-01

    The basic structure of all ICSBEP benchmark evaluations is essentially the same and includes (1) a detailed description of the experiment; (2) an evaluation of the experiment, including an exhaustive effort to quantify the effects of uncertainties on measured quantities; (3) a concise presentation of benchmark-model specifications; (4) sample calculation results; and (5) a summary of experimental references. Computer code input listings and other relevant information are generally preserved in appendixes. The details of an ICSBEP evaluation are presented.

  18. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...... in order to obtain a unique selection...

  19. Pre-evaluation of fusion shielding benchmark experiment

    International Nuclear Information System (INIS)

    Hayashi, K.; Handa, H.; Konno, C.

    1994-01-01

    A shielding benchmark experiment is very useful for testing design codes and nuclear data for fusion devices. There are many types of benchmark experiment that should be done for fusion shielding problems, but time and budget are limited; it is therefore important to select and determine effective experimental configurations by pre-calculation before the experiment. The authors performed three types of pre-evaluation to determine the experimental assembly configurations of shielding benchmark experiments planned at FNS, JAERI. (1) Void Effect Experiment - The purpose of this experiment is to measure the local increase of dose and nuclear heating behind small void(s) in shield material. The dimensions of the voids and their arrangement were decided as follows: dose and nuclear heating were calculated both with and without void(s), and the minimum size of the void was determined so that the ratio of the two results would be larger than the error of the measurement system. (2) Auxiliary Shield Experiment - The purpose of this experiment is to measure the shielding properties of B₄C, Pb and W, and the dose around a superconducting magnet (SCM). The thicknesses of B₄C, Pb and W and their arrangement, including multilayer configurations, were determined. (3) SCM Nuclear Heating Experiment - The purpose of this experiment is to measure nuclear heating and dose distributions in SCM material. Because it is difficult to use liquid helium as part of the SCM mock-up, material compositions for the SCM mock-up were surveyed so as to have nuclear heating properties similar to those of the real SCM composition

  20. Static benchmarking of the NESTLE advanced nodal code

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1997-01-01

    Results from the NESTLE advanced nodal code are presented for multidimensional numerical benchmarks representing four different types of reactors, and predictions from NESTLE are compared with measured data from pressurized water reactors (PWRs). The numerical benchmarks include cases representative of PWRs, boiling water reactors (BWRs), CANDU heavy water reactors (HWRs), and high-temperature gas-cooled reactors (HTGRs). The measured PWR data include critical soluble boron concentrations and isothermal temperature coefficients of reactivity. The results demonstrate that NESTLE correctly solves the multigroup diffusion equations for both Cartesian and hexagonal geometries, that it reliably calculates k_eff and reactivity coefficients for PWRs, and that, subsequent to the incorporation of additional thermal-hydraulic models, it will be able to perform accurate calculations for the corresponding parameters in BWRs, HWRs, and HTGRs as well
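    Nodal codes such as NESTLE solve the multigroup neutron diffusion eigenvalue problem. The structure of that calculation can be sketched in its simplest form: a one-group, one-dimensional bare slab discretized by finite differences and solved by power iteration (illustrative cross sections, not a nodal method and not NESTLE's algorithm):

```python
import math

def solve_tridiag(a, b, c, d):
    """Thomas algorithm for a tridiagonal system (a: sub-, b: main,
    c: super-diagonal, d: right-hand side)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def keff_slab(D, sig_a, nu_sig_f, L, n=200, iters=200):
    """Power iteration for the one-group diffusion eigenvalue in a bare
    slab with zero-flux boundaries (n interior finite-difference nodes):
    -D phi'' + sig_a phi = (1/k) nu_sig_f phi."""
    h = L / (n + 1)
    a = [-D / h**2] * n
    b = [2.0 * D / h**2 + sig_a] * n
    c = [-D / h**2] * n
    phi, k = [1.0] * n, 1.0
    for _ in range(iters):
        src = [nu_sig_f * p / k for p in phi]       # fission source
        phi_new = solve_tridiag(a, b, c, src)
        k *= sum(phi_new) / sum(phi)                # eigenvalue update
        phi = phi_new
    return k

# Illustrative one-group constants (cm, 1/cm).
D, sig_a, nu_sig_f, L = 1.0, 0.070, 0.080, 100.0
k = keff_slab(D, sig_a, nu_sig_f, L)
k_analytic = nu_sig_f / (sig_a + D * (math.pi / L) ** 2)
print(f"k_eff = {k:.5f} (analytic {k_analytic:.5f})")
```

    Production codes extend this pattern to several energy groups, two or three dimensions, and coarse nodal meshes with higher-order intra-node flux shapes, but the outer power iteration on the fission source is the same idea.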

  1. Benchmarking school nursing practice: the North West Regional Benchmarking Group

    OpenAIRE

    Littler, Nadine; Mullen, Margaret; Beckett, Helen; Freshney, Alice; Pinder, Lynn

    2016-01-01

    It is essential that the quality of care is reviewed regularly through robust processes such as benchmarking to ensure all outcomes and resources are evidence-based so that children and young people’s needs are met effectively. This article provides an example of the use of benchmarking in school nursing practice. Benchmarking has been defined as a process for finding, adapting and applying best practices (Camp, 1994). This concept was first adopted in the 1970s ‘from industry where it was us...

  2. Reactor physics tests and benchmark analyses of STACY

    International Nuclear Information System (INIS)

    Miyoshi, Yoshinori; Umano, Takuya

    1996-01-01

    The Static Experiment Critical Facility (STACY) in the Nuclear Fuel Cycle Safety Engineering Research Facility (NUCEF) is a solution-type critical facility for accumulating fundamental criticality data on uranyl nitrate solution, plutonium nitrate solution and their mixture. A series of critical experiments has been performed for 10 wt% enriched uranyl nitrate solution using a cylindrical core tank. In these experiments, systematic data on the critical height, differential reactivity of the fuel solution, kinetic parameters and reactor power were measured while changing the uranium concentration of the fuel solution from 313 gU/l to 225 gU/l. Critical data from the first series of experiments on the basic core are reported in this paper for evaluating the accuracy of criticality safety calculation codes. Benchmark calculations of the neutron multiplication factor k_eff for the critical condition were made using the neutron transport code TWOTRAN in the SRAC system and the continuous-energy Monte Carlo code MCNP 4A with the Japanese evaluated nuclear data library JENDL 3.2. (J.P.N.)
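    For a cylindrical solution core like STACY's, simple one-group theory relates the critical height to the material and geometric bucklings. A sketch with hypothetical material parameters (not STACY data), useful only to show the structure of the relation:

```python
import math

def critical_height(k_inf, M2, R):
    """Critical height (cm) of a bare cylinder in one-group theory:
    the material buckling (k_inf - 1)/M2 must equal the geometric
    buckling (2.405/R)^2 + (pi/H)^2 (extrapolation lengths neglected).
    Returns None if radial leakage alone exceeds the material buckling,
    i.e. no critical height exists for this radius."""
    bm2 = (k_inf - 1.0) / M2                 # material buckling (1/cm^2)
    axial = bm2 - (2.405 / R) ** 2           # what is left for (pi/H)^2
    if axial <= 0.0:
        return None
    return math.pi / math.sqrt(axial)

# Hypothetical one-group parameters for a uranyl nitrate solution:
# k_inf = 1.30, migration area M2 = 27 cm^2, tank radius R = 30 cm.
H = critical_height(k_inf=1.30, M2=27.0, R=30.0)
print(f"critical height ~ {H:.1f} cm")
```

    In the experiments the logic runs the other way: the measured critical height at each concentration constrains the material data, which is what makes the critical-height series a benchmark for codes like TWOTRAN and MCNP.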

  3. Review of microscopic integral cross section data in fundamental reactor dosimetry benchmark neutron fields

    International Nuclear Information System (INIS)

    Fabry, A.; McElroy, W.N.; Kellogg, L.S.; Lippincott, E.P.; Grundl, J.A.; Gilliam, D.M.; Hansen, G.E.

    1976-10-01

    The paper is intended to review and critically discuss microscopic integral cross section measurement and calculation data for fundamental reactor dosimetry benchmark neutron fields. Specifically the review covers the following fundamental benchmarks: (1) the spontaneous californium-252 fission neutron spectrum standard field; (2) the thermal-neutron induced uranium-235 fission neutron spectrum standard field; (3) the (secondary) intermediate-energy standard neutron field at the center of the Mol-ΣΣ, NISUS, and ITN-ΣΣ facilities; (4) the reference neutron field at the center of the Coupled Fast Reactor Measurement Facility (CFRMF); (5) the reference neutron field at the center of the 10 percent enriched uranium metal, cylindrical, fast critical; and (6) the (primary) Intermediate-Energy Standard Neutron Field

  4. Two-dimensional benchmark calculations for PNL-30 through PNL-35

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1997-01-01

    Interest in critical experiments with lattices of mixed-oxide (MOX) fuel pins has been revived by the possibility that light water reactors will be used for disposition of weapons-grade plutonium. A series of six experiments with MOX lattices, designated PNL-30 through PNL-35, was performed at Pacific Northwest Laboratories in 1975 and 1976, and a set of benchmark specifications for these experiments subsequently was adopted by the Cross Section Evaluation Working Group (CSEWG). Although there appear to be some problems with these experiments, they remain the only CSEWG benchmarks for MOX lattices. The number of fuel pins in these experiments is relatively low, corresponding to fewer than 4 typical pressurized-water-reactor fuel assemblies. Accordingly, they are more appropriate as benchmarks for lattice-physics codes than for reactor-core simulator codes. Unfortunately, the CSEWG specifications retain the full three-dimensional (3D) detail of the experiments, while lattice-physics codes almost universally are limited to two dimensions (2D). This paper proposes an extension of the benchmark specifications to include a 2D model, and it justifies that extension by comparing results from the MCNP Monte Carlo code for the 2D and 3D specifications

  5. Computational Benchmark for Estimation of Reactivity Margin from Fission Products and Minor Actinides in PWR Burnup Credit

    International Nuclear Information System (INIS)

    Wagner, J.C.

    2001-01-01

    This report proposes and documents a computational benchmark problem for the estimation of the additional reactivity margin available in spent nuclear fuel (SNF) from fission products and minor actinides in a burnup-credit storage/transport environment, relative to SNF compositions containing only the major actinides. The benchmark problem/configuration is a generic burnup credit cask designed to hold 32 pressurized water reactor (PWR) assemblies. The purpose of this computational benchmark is to provide a reference configuration for the estimation of the additional reactivity margin, which is encouraged in the U.S. Nuclear Regulatory Commission (NRC) guidance for partial burnup credit (ISG8), and document reference estimations of the additional reactivity margin as a function of initial enrichment, burnup, and cooling time. Consequently, the geometry and material specifications are provided in sufficient detail to enable independent evaluations. Estimates of additional reactivity margin for this reference configuration may be compared to those of similar burnup-credit casks to provide an indication of the validity of design-specific estimates of fission-product margin. The reference solutions were generated with the SAS2H-depletion and CSAS25-criticality sequences of the SCALE 4.4a package. Although the SAS2H and CSAS25 sequences have been extensively validated elsewhere, the reference solutions are not directly or indirectly based on experimental results. Consequently, this computational benchmark cannot be used to satisfy the ANS 8.1 requirements for validation of calculational methods and is not intended to be used to establish biases for burnup credit analyses
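The margin the benchmark targets is a simple difference of multiplication factors between two compositions of the same cask model. A sketch of that bookkeeping, with hypothetical keff values rather than figures from the report:

```python
def additional_margin_dk(k_major: float, k_full: float) -> float:
    """Additional reactivity margin (delta-k) from fission products and minor
    actinides: keff with major actinides only, minus keff with all nuclides
    credited."""
    return k_major - k_full

# Hypothetical values for one (enrichment, burnup, cooling-time) point,
# not taken from the report:
margin = additional_margin_dk(k_major=0.9450, k_full=0.8790)
print(f"{margin:.4f}")
```

In practice this difference would be tabulated over the full grid of initial enrichment, burnup, and cooling time, as the benchmark specification describes.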

  6. Utilizing benchmark data from the ANL-ZPR diagnostic cores program

    International Nuclear Information System (INIS)

    Schaefer, R. W.; McKnight, R. D.

    2000-01-01

    The support of the criticality safety community is allowing the production of benchmark descriptions of several assemblies from the ZPR Diagnostic Cores Program. The assemblies have high sensitivities to nuclear data for a few isotopes. This can highlight limitations in nuclear data for selected nuclides or in standard methods used to treat these data. The present work extends the use of the simplified model of the U9 benchmark assembly beyond the validation of keff. Further simplifications have been made to produce a data-testing benchmark in the style of the standard CSEWG benchmark specifications. Calculations for this data-testing benchmark are compared to results obtained with more detailed models and methods to determine their biases. These biases, or correction factors, can then be applied in the use of the less refined methods and models. Data-testing results using Versions IV, V, and VI of the ENDF/B nuclear data are presented for keff, f28/f25, c28/f25, and βeff. These limited results demonstrate the importance of studying other integral parameters in addition to keff in trying to improve nuclear data and methods, and the importance of accounting for methods and/or modeling biases when using data-testing results to infer the quality of the nuclear data files.

  7. Design and Implementation of a Web-Based Reporting and Benchmarking Center for Inpatient Glucometrics

    Science.gov (United States)

    Schnipper, Jeffrey Lawrence; Messler, Jordan; Ramos, Pedro; Kulasa, Kristen; Nolan, Ann; Rogers, Kendall

    2014-01-01

    Background: Insulin is a top source of adverse drug events in the hospital, and glycemic control is a focus of improvement efforts across the country. Yet, the majority of hospitals have no data to gauge their performance on glycemic control, hypoglycemia rates, or hypoglycemic management. Current tools to outsource glucometrics reports are limited in availability or function. Methods: Society of Hospital Medicine (SHM) faculty designed and implemented a web-based data and reporting center that calculates glucometrics on blood glucose data files securely uploaded by users. Unit labels, care type (critical care, non–critical care), and unit type (eg, medical, surgical, mixed, pediatrics) are defined on upload allowing for robust, flexible reporting. Reports for any date range, care type, unit type, or any combination of units are available on demand for review or downloading into a variety of file formats. Four reports with supporting graphics depict glycemic control, hypoglycemia, and hypoglycemia management by patient day or patient stay. Benchmarking and performance ranking reports are generated periodically for all hospitals in the database. Results: In all, 76 hospitals have uploaded at least 12 months of data for non–critical care areas and 67 sites have uploaded critical care data. Critical care benchmarking reveals wide variability in performance. Some hospitals achieve top quartile performance in both glycemic control and hypoglycemia parameters. Conclusions: This new web-based glucometrics data and reporting tool allows hospitals to track their performance with a flexible reporting system, and provides them with external benchmarking. Tools like this help to establish standardized glucometrics and performance standards. PMID:24876426
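A glucometrics report of the kind described reduces raw glucose files to per-patient-day rates. Below is a simplified sketch of one such metric; the tuple layout and the 70 mg/dL threshold are assumptions for illustration, not the SHM tool's actual schema:

```python
def hypoglycemia_day_rate(readings, threshold=70.0):
    """Fraction of patient-days with at least one blood glucose value below
    `threshold` (mg/dL). `readings` is an iterable of
    (patient_id, day, glucose_mg_dl) tuples -- a simplified stand-in for the
    uploaded data files described above."""
    days = {}
    for patient, day, glucose in readings:
        key = (patient, day)
        # A patient-day counts as hypoglycemic if ANY reading is below threshold.
        days[key] = days.get(key, False) or (glucose < threshold)
    if not days:
        return 0.0
    return sum(days.values()) / len(days)

data = [("a", 1, 110), ("a", 1, 62), ("a", 2, 140), ("b", 1, 95)]
print(hypoglycemia_day_rate(data))  # 1 hypoglycemic patient-day out of 3
```

Aggregating by patient-stay instead of patient-day follows the same pattern with a different grouping key.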

  8. The Criticality Safety Information Resource Center (CSIRC) at Los Alamos National Laboratory

    International Nuclear Information System (INIS)

    Henderson, B.D.; Meade, R.A.; Pruvost, N.L.

    1999-01-01

    The Criticality Safety Information Resource Center (CSIRC) at Los Alamos National Laboratory (LANL) is a program jointly funded by the U.S. Department of Energy (DOE) and the U.S. Nuclear Regulatory Commission (NRC) in conjunction with the Defense Nuclear Facilities Safety Board (DNFSB) Recommendation 97-2. The goal of CSIRC is to preserve primary criticality safety documentation from U.S. critical experimental sites and to make this information available for the benefit of the technical community. Progress in archiving criticality safety primary documents at the LANL archives as well as efforts to make this information available to researchers are discussed. The CSIRC project has a natural linkage to the International Criticality Safety Benchmark Evaluation Project (ICSBEP). This paper raises the possibility that the CSIRC project will evolve in a fashion similar to the ICSBEP. Exploring the implications of linking the CSIRC to the international criticality safety community is the motivation for this paper

  9. Description and exploitation of benchmarks involving {sup 149}Sm, a fission product contributing to the burnup credit in spent fuels

    Energy Technology Data Exchange (ETDEWEB)

    Anno, J.; Poullot, G. [CEA Centre d`Etudes de Fontenay-aux-Roses, 92 (France). Inst. de Protection et de Surete Nucleaire; Fouillaud, P.; Grivot, P. [CEA Centre d`Etudes de Valduc, 21 - Is-sur-Tille (France)

    1995-12-31

    Up to now, there has been no benchmark to validate fission product (FP) cross sections in criticality safety calculations. The Institute for Protection and Nuclear Safety (IPSN) has begun an experimental program on 6 FPs ({sup 103}Rh, {sup 133}Cs, {sup 143}Nd, {sup 149}Sm, {sup 152}Sm, and {sup 155}Gd, daughter of {sup 155}Eu) which alone account for a decrease of reactivity equal to half that of all FPs in spent fuels (except Xe and I). Presented here are the experiments with {sup 149}Sm and the results obtained with the APOLLO I-MORET III calculation codes. 11 experiments were carried out in a 3.5 l zircaloy tank containing slightly nitric acid solutions of samarium (96.9 wt% {sup 149}Sm) at concentrations of 0.1048, 0.2148 and 0.6262 g/l. The tank was placed in the middle of arrays of UO{sub 2} rods (4.742 wt% {sup 235}U) at a square pitch of 13 mm. The underwater height of the rods is the critical parameter. In addition, 7 experiments were performed with the same apparatus with water and boron, proving good experimental representativeness and good accuracy of the calculations. As the reactivity worth of the Sm tank is between 2000 and 6000x10{sup -5}, the benchmarks are well representative, and the cumulative absorption ratios show that {sup 149}Sm is well qualified below 1 eV. (authors). 8 refs., 7 figs., 6 tabs.

  10. Comparative sensitivity study of some criticality safety benchmark experiments using JEFF-3.1.2, JEFF-3.2T and ENDF/B-VII.1

    International Nuclear Information System (INIS)

    Kooyman, Timothee; Messaoudia, Nadia

    2014-01-01

    A sensitivity study on a set of evaluated criticality benchmarks was performed with MCNP(X) 2.6.0 using two versions of the JEFF nuclear data library, namely JEFF-3.1.2 and JEFF-3.2T, and ENDF/B-VII.1. As these benchmarks serve to estimate the upper safety limit for criticality risk analysis at SCK•CEN, the sensitivity of their results to nuclear data is an important parameter to assess. Several nuclides were identified as being responsible for a noticeable change in the effective multiplication factor keff: 235U, 239Pu, 240Pu, 54Fe, 56Fe, 57Fe and 208Pb. A high sensitivity was found to the fission cross-sections of all the fissile materials in the study. Additionally, a smaller sensitivity to the inelastic and capture cross-sections of 235U and 240Pu was also found. Sensitivity to the scattering law for non-fissile materials was postulated. The biggest change in keff due to non-fissile material came from the 208Pb evaluation (±700 pcm), followed by 56Fe (±360 pcm), for both versions of the JEFF library. Changes due to 235U (±300 pcm) and the Pu isotopes (±120 pcm for 239Pu and ±80 pcm for 240Pu) were found only with JEFF-3.1.2. 238U was found to have no effect on keff. Significant improvements were identified between the two versions of the JEFF library. No further differences were found between the JEFF-3.2T and ENDF/B-VII.1 calculations involving 235U or Pu. (authors)
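If the per-nuclide keff spreads quoted in the abstract were treated as statistically independent, a rough total nuclear-data spread could be formed in quadrature. The independence assumption is ours for illustration, not something the study asserts:

```python
import math

def total_spread_pcm(changes_pcm):
    """Combine per-nuclide keff changes (in pcm) in quadrature.
    Treating the contributions as independent is an assumption made here
    for illustration, not a claim from the study above."""
    return math.sqrt(sum(c * c for c in changes_pcm))

# Per-nuclide spreads quoted in the abstract (208Pb, 56Fe, 235U, 239Pu, 240Pu):
spread = total_spread_pcm([700, 360, 300, 120, 80])
print(round(spread))  # -> 855
```

Correlated nuclear-data effects would require a full covariance treatment rather than this simple quadrature sum.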

  11. Report on the benchmark of products & processes and ranking of cruciality and criticity

    DEFF Research Database (Denmark)

    Islam, Aminul

    The objective of this deliverable is to present the results of benchmarking activities for each COTECH demonstrator and its planned production process. Each section is dedicated to a demonstrator mentioned below: Section 1 Innovative accommodable intra-ocular lens (BI) Section 2 Cheap substrat...... Micro socket for signal carriage of hearing aid instruments (SONION) Section 8 Micro-cooling of electronic components (ATHERM)...

  12. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.

  13. Benchmark testing calculations for 232Th

    International Nuclear Information System (INIS)

    Liu Ping

    2003-01-01

    The cross sections of 232Th from CNDC and JENDL-3.3 were processed with the NJOY97.45 code into the ACE format for the continuous-energy Monte Carlo code MCNP4C. The keff values and central reaction rates based on CENDL-3.0, JENDL-3.3 and ENDF/B-6.2 were calculated with MCNP4C for a benchmark assembly, and comparisons with experimental results are given. (author)

  14. Looking Past Primary Productivity: Benchmarking System Processes that Drive Ecosystem Level Responses in Models

    Science.gov (United States)

    Cowdery, E.; Dietze, M.

    2017-12-01

    As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentration are highly variable and contain a considerable amount of uncertainty. Benchmarking model predictions against data is necessary to assess their ability to replicate observed patterns, but also to identify and evaluate the assumptions causing inter-model differences. We have implemented a novel benchmarking workflow as part of the Predictive Ecosystem Analyzer (PEcAn) that is automated, repeatable, and generalized to incorporate different sites and ecological models. Building on the recent Free-Air CO2 Enrichment Model Data Synthesis (FACE-MDS) project, we used observational data from the FACE experiments to test this flexible, extensible benchmarking approach, aimed at providing repeatable tests of model process representation that can be performed quickly and frequently. Model performance assessments are often limited to traditional residual error analysis; however, this can result in a loss of critical information. Models that fail tests of relative measures of fit may still perform well under measures of absolute fit and mathematical similarity. This implies that models discounted as poor predictors of ecological productivity may still be capturing important patterns. Conversely, models that have been found to be good predictors of productivity may be hiding error in their sub-processes that results in the right answers for the wrong reasons. Our suite of tests has not only highlighted process-based sources of uncertainty in model productivity calculations, it has also quantified the patterns and scale of this error. Combining these findings with PEcAn's model sensitivity analysis and variance decomposition strengthens our ability to identify which processes
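The distinction between relative and absolute fit can be made concrete with two plain metrics: a model offset from the observations by a constant scores poorly on RMSE (absolute fit) yet perfectly on correlation (pattern match). A self-contained sketch, not PEcAn code:

```python
def rmse(obs, pred):
    """Root-mean-square error: an absolute measure of fit."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def pearson_r(obs, pred):
    """Pearson correlation: a relative measure of pattern agreement."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = sum((o - mo) ** 2 for o in obs) ** 0.5
    sp = sum((p - mp) ** 2 for p in pred) ** 0.5
    return cov / (so * sp)

# A model with a constant +2 offset: RMSE is 2.0, yet r is (numerically) 1.0,
# so the model captures the observed pattern despite the poor absolute fit.
obs = [1.0, 2.0, 3.0, 4.0]
pred = [3.0, 4.0, 5.0, 6.0]
print(rmse(obs, pred), pearson_r(obs, pred))
```

Judging this model on RMSE alone would discard the information that its dynamics track the observations exactly.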

  15. Integral Full Core Multi-Physics PWR Benchmark with Measured Data

    Energy Technology Data Exchange (ETDEWEB)

    Forget, Benoit; Smith, Kord; Kumar, Shikhar; Rathbun, Miriam; Liang, Jingang

    2018-04-11

    In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering in the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation are essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g. critical experiments, flow loops, etc.), and there is a lack of the relevant multi-physics benchmark measurements necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.

  16. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  17. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  18. Thermal lattice benchmarks for testing basic evaluated data files, developed with MCNP4B

    International Nuclear Information System (INIS)

    Maucec, M.; Glumac, B.

    1996-01-01

    The development of unit cell and full reactor core models of the DIMPLE S01A and TRX-1 and TRX-2 benchmark experiments, using the Monte Carlo computer code MCNP4B, is presented. Nuclear data from the ENDF/B-V and ENDF/B-VI versions of the cross-section library were used in the calculations. In addition, a comparison is presented to results obtained with similar models and cross-section data from the EJ2-MCNPlib library (based upon the JEF-2.2 evaluation) developed at IRC Petten, Netherlands. The results of the criticality calculation with the ENDF/B-VI data library, and a comparison to results obtained using the JEF-2.2 evaluation, confirm the MCNP4B full core model of the DIMPLE reactor as a good benchmark for testing basic evaluated data files. On the other hand, the criticality calculation results obtained using the TRX full core models show less agreement with experiment. It is obvious that without additional data about the TRX geometry, our TRX models are not suitable as Monte Carlo benchmarks. (author)

  19. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  20. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  1. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  2. Summary of ORSphere critical and reactor physics measurements

    Directory of Open Access Journals (Sweden)

    Marshall Margaret A.

    2017-01-01

    Full Text Available In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. This critical configuration has been evaluated. Preliminary results were presented at ND2013. Since then, the evaluation was finalized and judged to be an acceptable benchmark experiment for the International Criticality Safety Benchmark Experiment Project (ICSBEP). Additionally, reactor physics measurements were performed to determine surface button worths, central void worth, delayed neutron fraction, prompt neutron decay constant, fission density and neutron importance. These measurements have been evaluated and found to be acceptable experiments and are discussed in full detail in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. The purpose of this paper is to summarize all the evaluated critical and reactor physics measurement evaluations.

  3. Summary of ORSphere critical and reactor physics measurements

    Science.gov (United States)

    Marshall, Margaret A.; Bess, John D.

    2017-09-01

    In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. This critical configuration has been evaluated. Preliminary results were presented at ND2013. Since then, the evaluation was finalized and judged to be an acceptable benchmark experiment for the International Criticality Safety Benchmark Experiment Project (ICSBEP). Additionally, reactor physics measurements were performed to determine surface button worths, central void worth, delayed neutron fraction, prompt neutron decay constant, fission density and neutron importance. These measurements have been evaluated and found to be acceptable experiments and are discussed in full detail in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. The purpose of this paper is to summarize all the evaluated critical and reactor physics measurements evaluations.
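The density correction mentioned above can be illustrated with the textbook scaling for a bare metal sphere, in which critical mass varies roughly as the inverse square of density. The numbers below are illustrative placeholders, and this rule of thumb is not the detailed correction procedure used in the ORSphere evaluation:

```python
def density_corrected_mass(m_ref_kg, rho_ref, rho_new):
    """Textbook scaling for a bare metal sphere: critical mass varies roughly
    as 1/density^2. An illustrative rule of thumb only, not the detailed
    correction procedure used in the ORSphere evaluation."""
    return m_ref_kg * (rho_ref / rho_new) ** 2

# Hypothetical numbers: a 52 kg bare HEU sphere at 18.74 g/cm^3,
# corrected to 18.80 g/cm^3.
print(round(density_corrected_mass(52.0, 18.74, 18.80), 2))
```

The same form shows why small density differences between an as-built assembly and an idealized benchmark sphere translate into measurable critical-mass corrections.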

  4. EA-MC Neutronic Calculations on IAEA ADS Benchmark 3.2

    Energy Technology Data Exchange (ETDEWEB)

    Dahlfors, Marcus [Uppsala Univ. (Sweden). Dept. of Radiation Sciences; Kadi, Yacine [CERN, Geneva (Switzerland). Emerging Energy Technologies

    2006-01-15

    The neutronics and transmutation properties of the IAEA ADS benchmark 3.2 setup, the 'Yalina' experiment or ISTC project B-70, have been studied through an extensive set of 3-D Monte Carlo calculations at CERN. The simulations were performed with the state-of-the-art computer code package EA-MC, developed at CERN. The calculational approach is outlined and the results are presented in accordance with the guidelines given in the benchmark description. A variety of experimental conditions and parameters are examined; three different fuel rod configurations and three types of neutron sources are applied to the system. Reactivity change effects introduced by removal of fuel rods in both central and peripheral positions are also computed. Irradiation samples located in a total of 8 geometrical positions are examined. Calculations of capture reaction rates in {sup 129}I, {sup 237}Np and {sup 243}Am samples and of fission reaction rates in {sup 235}U, {sup 237}Np and {sup 243}Am samples are presented. Simulated neutron flux densities and energy spectra as well as spectral indices inside experimental channels are also given according to the benchmark specifications. Two different nuclear data libraries, JAR-95 and JENDL-3.2, are applied in the calculations.

  5. Benchmark problem for IAEA coordinated research program (CRP-3) on GCR afterheat removal. 1

    International Nuclear Information System (INIS)

    Takada, Shoji; Shiina, Yasuaki; Inagaki, Yoshiyuki; Hishida, Makoto; Sudo, Yukio

    1995-08-01

    This report describes the detailed data needed for the benchmark analysis of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP-3) on 'Heat Transport and Afterheat Removal for Gas-cooled Reactors under Accident Conditions': the configuration and dimensions of the cooling panel test apparatus, the experimental data, and the thermal properties. The test section of the apparatus is composed of a pressure vessel (max. 450degC) containing an electric heater (max. 100kW, 600degC) and cooling panels surrounding the pressure vessel. Gas pressure inside the pressure vessel is varied from vacuum to 1.0MPa. Two experimental cases are selected as benchmark problems on afterheat removal of the HTGR: vacuum inside the pressure vessel with heater output 13.14kW, and helium gas pressure of 0.73MPa inside the pressure vessel with heater output 28.79kW. The benchmark problems are to calculate the temperature distribution on the outer surface of the pressure vessel and the heat transferred to the cooling panel using the experimental data. With the JAERI computational code THANPACST2, the calculated temperature distribution on the pressure vessel was within +38degC/-29degC of the experimental data, and the calculated heat transferred from the surface of the pressure vessel to the cooling panel was within max. -11.4% of the experimental result. (author)
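In the vacuum case, the dominant heat path from vessel surface to cooling panel is thermal radiation, so a first-order estimate follows the grey-body exchange law q = eps_eff * sigma * A * (T1^4 - T2^4). The area, effective emissivity, and temperatures below are assumed values for illustration, not the apparatus specification:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_exchange_w(area_m2, eps_eff, t_hot_c, t_cold_c):
    """Grey-body radiative exchange between the vessel surface and the cooling
    panel. eps_eff lumps the surface emissivities and view factor into one
    assumed coefficient; a first-order sketch of the vacuum-case heat path."""
    t1 = t_hot_c + 273.15  # convert degC to K
    t2 = t_cold_c + 273.15
    return eps_eff * SIGMA * area_m2 * (t1 ** 4 - t2 ** 4)

# Assumed numbers, not the benchmark conditions:
q = radiative_exchange_w(10.0, 0.8, 400.0, 40.0)
print(f"{q / 1000:.1f} kW")
```

At atmospheric or elevated helium pressure, natural convection and conduction through the gas add to this radiative term, which is why the two benchmark cases bracket different heat-transfer regimes.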

  6. Design of a pre-collimator system for neutronics benchmark experiment

    International Nuclear Information System (INIS)

    Cai Xinggang; Liu Jiantao; Nie Yangbo; Bao Jie; Ruan Xichao; Lu Yanxia

    2013-01-01

    Benchmark experiments are an important means of checking the reliability and accuracy of evaluated nuclear data, and effect/background ratios are the key parameters for weighing the quality of experimental data. In order to obtain higher effect/background ratios, a pre-collimator system was designed for the benchmark experiment. The system mainly consists of a pre-collimator and a shadow cone. The MCNP-4C code was used to simulate the background spectra under various conditions; the results show that the pre-collimator system yields a marked improvement in the effect/background ratios. (authors)

  7. The First Benchmarking of ITER BR Nb3Sn Strand of CNDA

    International Nuclear Information System (INIS)

    Long Feng; Liu Fang; Wu Yu; Ni Zhipeng

    2012-01-01

    According to the International Thermonuclear Experimental Reactor (ITER) Procurement Arrangement (PA) of Cable-In-Conduit Conductor (CICC) unit lengths for the Toroidal Field (TF) and Poloidal Field (PF) magnet systems of ITER, at the start of process qualification the Domestic Agency (DA) shall be required to conduct a benchmarking of the room and low temperature acceptance tests carried out at the Strand Suppliers and/or at its Reference Laboratories designated by the ITER Organization (IO). The first benchmarking was carried out successfully in 2009. Nineteen participants from six DAs (China, European Union, Japan, South Korea, Russia, and the United States) participated in the first benchmarking. Bronze-route (BR) Nb3Sn strand and samples prepared by the ITER reference lab (CERN) were sent out to each participant by CERN. In this paper, the test facility and test results of the first benchmarking by the Chinese DA (CNDA) are presented.

  8. Characterization of the dynamic friction of woven fabrics: Experimental methods and benchmark results

    NARCIS (Netherlands)

    Sachs, Ulrich; Akkerman, Remko; Fetfatsidis, K.; Vidal-Sallé, E.; Schumacher, J.; Ziegmann, G.; Allaoui, S.; Hivet, G.; Maron, B.; Vanclooster, K.; Lomov, S.V.

    2014-01-01

    A benchmark exercise was conducted to compare various friction test set-ups with respect to the measured coefficients of friction. The friction was determined between Twintex®PP, a fabric of commingled yarns of glass and polypropylene filaments, and a metal surface. The same material was supplied to

  9. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The appli......Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators....... The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...
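As a loose illustration of frontier-based efficiency scoring, consider a single-input, single-output case where each operator is scored against the best observed output/input ratio. Real regulatory DEA models are multi-input, multi-output linear programs; this sketch only conveys the idea:

```python
def frontier_efficiency(firms):
    """Single-input, single-output efficiency relative to the best observed
    output/input ratio -- a drastic simplification of the DEA models used by
    regulators, for illustration only. `firms` maps name -> (input, output)."""
    best = max(out / inp for inp, out in firms.values())
    return {name: (out / inp) / best for name, (inp, out) in firms.items()}

# Hypothetical operators: (operating cost, energy delivered), arbitrary units.
firms = {"A": (10.0, 20.0), "B": (10.0, 15.0), "C": (5.0, 12.0)}
print(frontier_efficiency(firms))  # C defines the frontier with ratio 2.4
```

A regulator would then cap revenues so that inefficient operators (scores below 1.0) face pressure to move toward the frontier; the outlier-detection and model-specification steps the abstract mentions guard against an unrepresentative firm defining that frontier.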


  11. Re-evaluation of the criticality experiments of the ''Otto Hahn Nuclear Ship'' reactor

    Energy Technology Data Exchange (ETDEWEB)

    Lengar, I.; Snoj, L.; Rogan, P.; Ravnik, M. [Jozef Stefan Institute, Ljubljana (Slovenia)

    2008-11-15

    Several series of experiments with an FDR reactor (advanced pressurized light water reactor) were performed in 1972 in the Geesthacht critical facility ANEX. The experiments were performed to test the core prior to its use for the propulsion of the first German nuclear merchant ship, the ''Otto Hahn''. In the present paper a calculational re-evaluation of the experiments is described, using up-to-date computer codes (the Monte Carlo code MCNP5) and nuclear data (ENDF/B-VI release 6). It focuses on the determination of uncertainties in the benchmark model of the experimental set-up, originating mainly from the limited set of information still available about the experiments. Effects of the identified uncertainties on the multiplication factor were studied. The sensitivity studies include parametric variation of material composition and geometry. With the combined total uncertainty found to be 0.0050 in k{sub eff}, the experiments qualify as criticality safety benchmark experiments. (orig.)

  12. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mausolff, Zander [Univ. of Florida, Gainesville, FL (United States); Weems, Zach [Univ. of Florida, Gainesville, FL (United States); Popp, Dustin [Univ. of Florida, Gainesville, FL (United States); Smith, Kristin [Univ. of Florida, Gainesville, FL (United States); Shriver, Forrest [Univ. of Florida, Gainesville, FL (United States); Goluoglu, Sedat [Univ. of Florida, Gainesville, FL (United States); Prince, Zachary [Texas A & M Univ., College Station, TX (United States); Ragusa, Jean [Texas A & M Univ., College Station, TX (United States)

    2016-08-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validating the transient solution of Rattlesnake against other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considers both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.

  13. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Mausolff, Zander; Weems, Zach; Popp, Dustin; Smith, Kristin; Shriver, Forrest; Goluoglu, Sedat; Prince, Zachary; Ragusa, Jean

    2016-01-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validating the transient solution of Rattlesnake against other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considers both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.

  14. Critical experiments supporting close proximity water storage of power reactor fuel. Technical progress report, July 1, 1978-September 30, 1978

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, M.N.; Hoovler, G.S.; Eng, R.L.; Welfare, F.G.

    1978-11-01

    Experimental measurements are being taken on critical configurations of clusters of fuel rods mocking up LWR-type fuel elements in close proximity water storage. The results will serve to benchmark the computer codes used in designing nuclear power reactor fuel storage racks. KENO calculations of Cores I to VI are within two standard deviations of the measured k/sub eff/ values.

  15. Standard problem exercise to validate criticality codes for spent LWR fuel transport container calculations

    International Nuclear Information System (INIS)

    Whitesides, G.H.; Stephens, M.E.

    1984-01-01

    During the past two years, a Working Group established by the Organization for Economic Co-Operation and Development's Nuclear Energy Agency (OECD-NEA) has been developing a set of criticality benchmark problems which could be used to help establish the validity of criticality safety computer programs and their associated nuclear data for calculation of ksub(eff) for spent light water reactor (LWR) fuel transport containers. The basic goal of this effort was to identify a set of actual critical experiments which would contain the various material and geometric properties present in spent LWR transport containers. These data, when used by the various computational methods, are intended to demonstrate the ability of each method to accurately reproduce the experimentally measured ksub(eff) for the parameters under consideration

  16. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  17. Multiscale benchmarking of drug delivery vectors.

    Science.gov (United States)

    Summers, Huw D; Ware, Matthew J; Majithia, Ravish; Meissner, Kenith E; Godin, Biana; Rees, Paul

    2016-10-01

    Cross-system comparisons of drug delivery vectors are essential to ensure optimal design. An in-vitro experimental protocol is presented that separates the role of the delivery vector from that of its cargo in determining the cell response, thus allowing quantitative comparison of different systems. The technique is validated through benchmarking of the dose-response of human fibroblast cells exposed to the cationic molecule, polyethylene imine (PEI); delivered as a free molecule and as a cargo on the surface of CdSe nanoparticles and Silica microparticles. The exposure metrics are converted to a delivered dose with the transport properties of the different scale systems characterized by a delivery time, τ. The benchmarking highlights an agglomeration of the free PEI molecules into micron sized clusters and identifies the metric determining cell death as the total number of PEI molecules presented to cells, determined by the delivery vector dose and the surface density of the cargo. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  19. Analytical and experimental analysis of YALINA-Booster and YALINA-Thermal assemblies

    International Nuclear Information System (INIS)

    Kiyavitskaya, H.; Bournos, V.; Mazanik, S.; Khilmanovich, A.; Martsinkevich, B.; Routkovskaya, Ch.; Edchik, I.; Fokov, Y.; Sadovich, S.; Fedorenko, A.; Gohar, Y.; Talamo, A.

    2010-01-01

    Full text: Accelerator Driven Systems (ADS) may play an important role in future nuclear fuel cycles to reduce the long-term radiotoxicity and volume of spent nuclear fuel. It is proposed that ADS will produce energy and incinerate radioactive waste. This technology was called Accelerator Driven Transmutation Technology (ADTT). The most important problems of this technology include on-line monitoring of the reactivity level and the choice of a neutron spectrum appropriate for incineration of Minor Actinides (MA) and transmutation of Long-Lived Fission Products (LLFP). Before designing and constructing an installation, it is necessary to carry out R and D to validate codes, nuclear data libraries and other instrumentation. The YALINA facility is designed to study ADS physics and to investigate the transmutation reaction rates of MA and LLFP. The main objective of the YALINA benchmark is to compare the results from different calculation methods with each other and with experimental data. The benchmark is based on the current YALINA facility configuration, which provides the opportunity to verify the prediction capability of the different methods. The experimental data have been obtained in the frame of the ISTC Projects B1341 'Analytical and experimental evaluation of the possibility to create a universal volume source of neutrons in the sub-critical booster assembly with low enrichment uranium fuel driven by a neutron generator' and B1732P 'Analytical and experimental evaluating the possibility of creation of universal volume source of neutrons in the sub-critical booster assembly with low enriched uranium fuel driven by the neutron generator'. In this paper, a comparison of the experimental and calculated data is presented for the YALINA-Booster subcritical assembly with fuel of different enrichments and for YALINA-Thermal with a different number of control rods (216, 245 and 280).

  20. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...... applications. The present study analyses voluntary benchmarking in a public setting that is oriented towards learning. The study contributes by showing how benchmarking can be mobilised for learning and offers evidence of the effects of such benchmarking for performance outcomes. It concludes that benchmarking...... can enable learning in public settings but that this requires actors to invest in ensuring that benchmark data are directed towards improvement....

  1. Development of the criticality accident analysis code, AGNES

    International Nuclear Information System (INIS)

    Nakajima, Ken

    1989-01-01

    In design work for facilities which handle nuclear fuel, the evaluation of criticality accidents cannot be avoided even if their probability is negligibly small. In particular, in systems using solution fuel such as uranyl nitrate, the solution can easily assume a dangerous configuration, and all past criticality accidents have occurred in solution systems; therefore, the evaluation of criticality accidents is the most important item of safety analysis. When a criticality accident occurs in a solution fuel system, power oscillations and pressure pulses are observed, driven by the generation and movement of radiolysis gas voids. In order to evaluate the effects of criticality accidents, these power oscillations and pressure pulses must be calculated accurately. For this purpose, the dynamics code AGNES (Accidentally Generated Nuclear Excursion Simulation code) was developed. AGNES is a reactor dynamics code with two independent void models, a modified energy model and a pressure model. As a benchmark calculation of the AGNES code, the results of an analysis of the CRAC experiments are reported. (K.I.)

  2. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  3. IAEA consultants' meeting on benchmark validation of FENDL-1. Summary report

    International Nuclear Information System (INIS)

    Pashchenko, A.B.

    1996-01-01

    The present report contains the summary of the IAEA Consultants' Meeting on ''Benchmark Validation of FENDL-1'', held at Karlsruhe, Germany, from 17 to 19 October 1995. The meeting was organized by the IAEA Nuclear Data Section (NDS) with the co-operation and assistance of local organizers at the Forschungszentrum Karlsruhe, Germany. Summarized are the conclusions and the main results of extensive benchmarking of FENDL-1, obtained by comparing experimental data from a large number of fusion integral experiments to analytical predictions based on discrete ordinates as well as Monte Carlo calculations. (author). 4 refs

  4. SUMMARY OF GENERAL WORKING GROUP A+B+D: CODES BENCHMARKING.

    Energy Technology Data Exchange (ETDEWEB)

    WEI, J.; SHAPOSHNIKOVA, E.; ZIMMERMANN, F.; HOFMANN, I.

    2006-05-29

    Computer simulation is an indispensable tool in assisting the design, construction, and operation of accelerators. In particular, computer simulation complements analytical theories and experimental observations in understanding beam dynamics in accelerators. The ultimate function of computer simulation is to study mechanisms that limit the performance of frontier accelerators. There are four goals for the benchmarking of computer simulation codes, namely debugging, validation, comparison and verification: (1) Debugging--codes should calculate what they are supposed to calculate; (2) Validation--results generated by the codes should agree with established analytical results for specific cases; (3) Comparison--results from two sets of codes should agree with each other if the models used are the same; and (4) Verification--results from the codes should agree with experimental measurements. This is the summary of the joint session among working groups A, B, and D of the HI32006 Workshop on computer codes benchmarking.

  5. Critical issues and experimental examination on sawtooth and disruption physics

    International Nuclear Information System (INIS)

    Itoh, K.; Itoh, S.; Fukuyama, A.; Tsuji, S.

    1992-06-01

    The catastrophic phenomena which are associated with the major disruption and sawtooth contain three key processes: (1) sudden acceleration of the growth of the helical deformation, (2) central electron temperature crash, and (3) rearrangement of the plasma current. Based on the theoretical model that magnetic stochasticity plays a key role in these processes, the critical issues and possible experimental tests are proposed. Present experimental observations would be sufficient to study the detailed sequences and causes. Though the models may not be complete, comparison with experiments improves understanding. (author)

  6. Shape memory alloys applied to improve rotor-bearing system dynamics - an experimental investigation

    DEFF Research Database (Denmark)

    Enemark, Søren; Santos, Ilmar; Savi, Marcelo A.

    2015-01-01

    passing through critical speeds. In this work, the feasibility of applying shape memory alloys to a rotating system is experimentally investigated. Shape memory alloys can change their stiffness with temperature variations and thus they may change system dynamics. Shape memory alloys also exhibit...... perturbations and mass imbalance responses of the rotor-bearing system at different temperatures and excitation frequencies are carried out to determine the dynamic behaviour of the system. The behaviour and the performance in terms of vibration reduction and system adaptability are compared against a benchmark...... configuration comprised by the same system having steel springs instead of shape memory alloy springs. The experimental results clearly show that the stiffness changes and hysteretic behaviour of the shape memory alloys springs alter system dynamics both in terms of critical speeds and mode shapes. Vibration...

  7. Benchmarking reference services: an introduction.

    Science.gov (United States)

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  8. Benchmark calculation for the steady-state temperature distribution of the HTR-10 under full-power operation

    International Nuclear Information System (INIS)

    Chen Fubing; Dong Yujie; Zheng Yanhua; Shi Lei; Zhang Zuoyi

    2009-01-01

    Within the framework of a Coordinated Research Project on Evaluation of High Temperature Gas-Cooled Reactor Performance (CRP-5) initiated by the International Atomic Energy Agency (IAEA), the calculation of the steady-state temperature distribution of the 10 MW High Temperature Gas-Cooled Reactor-Test Module (HTR-10) under its initial full power experimental operation has been defined as one of the benchmark problems. This paper gives the investigation results obtained by the different countries participating in solving this benchmark problem. The validation work on the THERMIX code used by the Institute of Nuclear and New Energy Technology (INET) is also presented. For the benchmark items defined in this CRP, the various calculation results correspond well with each other and basically agree with the experimental results. Discrepancies existing among the various code results are preliminarily attributed to different methods, models, material properties, and so on used in the computations. Temperatures calculated by THERMIX for the measuring points in the reactor internals agree well with the experimental values. The maximum fuel center temperatures calculated by the participants are much lower than the limiting value of 1,230°C. According to the comparison results of code-to-code as well as code-to-experiment, THERMIX is considered to reproduce relatively satisfactory results for the CRP-5 benchmark problem. (author)

  9. Completion of the first approach to critical for the seven percent critical experiment

    International Nuclear Information System (INIS)

    Barber, A. D.; Harms, G. A.

    2009-01-01

    The first approach-to-critical experiment in the Seven Percent Critical Experiment series was recently completed at Sandia. This experiment is part of the Seven Percent Critical Experiment which will provide new critical and reactor physics benchmarks for fuel enrichments greater than five weight percent. The inverse multiplication method was used to determine the state of the system during the course of the experiment. Using the inverse multiplication method, it was determined that the critical experiment went slightly supercritical with 1148 fuel elements in the fuel array. The experiment is described and the results of the experiment are presented. (authors)
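
    In the inverse multiplication method referenced above, the inverse of the measured neutron multiplication, 1/M (the ratio of the source-only detector count rate to the rate at the current loading), is plotted against fuel loading; 1/M falls toward zero as the array approaches critical, and a linear extrapolation to 1/M = 0 predicts the critical loading. A minimal sketch with invented count rates (not data from the Sandia experiment):

```python
def inverse_multiplication(ref_rate, rate):
    """1/M: ratio of the source-only count rate to the measured count rate."""
    return ref_rate / rate

def predicted_critical_loading(n1, m1, n2, m2):
    """Linearly extrapolate 1/M through (n1, m1) and (n2, m2) to 1/M = 0."""
    slope = (m2 - m1) / (n2 - n1)
    return n1 - m1 / slope

# Hypothetical detector count rates (counts/s) at two fuel-element loadings.
ref = 100.0                      # source-only (reference) rate
loadings = [900, 1100]           # number of fuel elements in the array
rates = [250.0, 1000.0]          # measured rates at those loadings
m = [inverse_multiplication(ref, r) for r in rates]
print(predicted_critical_loading(loadings[0], m[0], loadings[1], m[1]))
```

In practice several loadings are used and each new loading step is chosen conservatively, staying below the extrapolated critical point.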

  10. BUGLE-93 (ENDF/B-VI) cross-section library data testing using shielding benchmarks

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; White, J.E.

    1994-01-01

    Several integral shielding benchmarks were selected to perform data testing for new multigroup cross-section libraries compiled from ENDF/B-VI data for light water reactor (LWR) shielding and dosimetry. The new multigroup libraries, BUGLE-93 and VITAMIN-B6, were studied to establish their reliability and response to the benchmark measurements through use of the radiation transport codes ANISN and DORT. Also, direct comparisons of BUGLE-93 and VITAMIN-B6 to BUGLE-80 (ENDF/B-IV) and VITAMIN-E (ENDF/B-V) were performed. Some benchmarks involved the nuclides used in LWR shielding and dosimetry applications, and some were sensitive to specific nuclear data, e.g. iron, due to its dominant use in nuclear reactor systems and its complex set of cross-section resonances. Five shielding benchmarks (four experimental and one calculational) are described and results are presented

  11. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  12. New Improved Nuclear Data for Nuclear Criticality and Safety

    International Nuclear Information System (INIS)

    Guber, Klaus H.; Leal, Luiz C.; Lampoudis, C.; Kopecky, S.; Schillebeeckx, P.; Emiliani, F.; Wynants, R.; Siegler, P.

    2011-01-01

    The Geel Electron Linear Accelerator (GELINA) was used to measure neutron total and capture cross sections of 182,183,184,186 W and 63,65 Cu in the energy range from 100 eV to ∼200 keV using the time-of-flight method. GELINA is the only high-power white neutron source with excellent timing resolution and is ideally suited for these experiments. The measurements were initiated at the EC-JRC-IRMM to support the Nuclear Criticality Safety Program. Concerns about data deficiencies in some existing cross-section evaluations in libraries such as ENDF/B, JEFF, or JENDL were the prime motivator for the new measurements: over the past years many problems with existing nuclear data have emerged, such as improper normalization, neutron sensitivity backgrounds, poorly characterized samples, and use of improper pulse-height weighting functions. These deficiencies may occur in the resolved- and unresolved-resonance regions and may lead to erroneous nuclear criticality calculations. An example is the evaluated neutron cross-section data for tungsten, which exhibit discrepancies in criticality safety benchmark calculations and show the need for reliable covariance data. The new measurements will help to improve the representation of the cross sections, since most of the available evaluated data rely on old measurements, usually made with poor experimental resolution or over a very limited energy range, which is insufficient for the current application.
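
    In the time-of-flight method, the neutron energy is recovered from the flight path length and the measured flight time via the (non-relativistic, adequate at these energies) relation E = ½ m (L/t)². A minimal sketch, assuming a hypothetical 30 m flight path rather than any specific GELINA station:

```python
NEUTRON_MASS_KG = 1.674927e-27      # CODATA neutron mass
JOULES_PER_EV = 1.602177e-19

def tof_to_energy_ev(flight_path_m, time_s):
    """Non-relativistic neutron kinetic energy from flight path and time."""
    v = flight_path_m / time_s                      # neutron speed, m/s
    return 0.5 * NEUTRON_MASS_KG * v * v / JOULES_PER_EV

# Microsecond-scale flight times over 30 m span roughly the eV-keV
# range probed in these measurements (times here are illustrative).
for t_us in (5.0, 50.0, 150.0):
    e = tof_to_energy_ev(30.0, t_us * 1e-6)
    print(f"t = {t_us:6.1f} us -> E = {e:12.1f} eV")
```

At the keV energies involved, relativistic corrections to this formula are below the experimental resolution; precision evaluations nevertheless apply them.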

  13. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
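
    The tier-1 screening step described above is a simple threshold comparison: any analyte whose measured concentration exceeds its NOAEL-based benchmark is retained as a COPC, and the rest are screened out. A minimal sketch (the chemical names and concentrations below are invented for illustration, not values from the report):

```python
def screen_contaminants(measured, benchmarks):
    """Tier-1 screening: retain as COPCs the analytes whose measured
    concentration exceeds their NOAEL-based benchmark; analytes with
    no benchmark are conservatively retained."""
    return [name for name, conc in measured.items()
            if conc > benchmarks.get(name, 0.0)]

# Hypothetical media concentrations and benchmarks (mg/kg), illustrative only.
measured = {"cadmium": 2.1, "zinc": 40.0, "lead": 15.0}
benchmarks = {"cadmium": 0.5, "zinc": 120.0, "lead": 30.0}
print(screen_contaminants(measured, benchmarks))  # -> ['cadmium']
```

Retaining analytes that lack a benchmark (rather than silently dropping them) mirrors the conservative intent of a screening assessment.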

  14. Assessment of CANDU physics codes using experimental data - II: CANDU core physics measurements

    International Nuclear Information System (INIS)

    Roh, Gyu Hong; Jeong, Chang Joon; Choi, Hang Bok

    2001-11-01

    Benchmark calculations of the advanced CANDU reactor analysis tools (WIMS-AECL, SHETAN and RFSP) and the Monte Carlo code MCNP-4B have been performed using Wolsong Units 2 and 3 Phase-B measurement data. In this study, the benchmark calculations have been done for the criticality, boron worth, reactivity device worth, reactivity coefficient, and flux scan. For the validation of the WIMS-AECL/SHETAN/RFSP code system, the lattice parameters of the fuel channel were generated by the WIMS-AECL code, and incremental cross sections of reactivity devices and structural material were generated by the SHETAN code. The results have shown that the criticality is under-predicted by 4 mk. The reactivity device worths are generally consistent with the measured data except for strong absorbers such as the shutoff rod and mechanical control absorber. The heat transport system temperature coefficient and flux distributions are in good agreement with the measured data. However, the moderator temperature coefficient has shown a relatively large error, which could be caused by the incremental cross-section generation methodology for the reactivity device. For the MCNP-4B benchmark calculation, cross-section libraries were newly generated from ENDF/B-VI release 3 through the NJOY97.114 data processing system and a three-dimensional full-core model was developed. The simulation results have shown that the criticality is estimated within 4 mk and the estimated reactivity worths of the control devices are generally consistent with the measurement data, which implies that the MCNP code is valid for CANDU core analysis. In the future, therefore, the MCNP code could be used as a reference tool to benchmark design and analysis codes for advanced fuels for which experimental data are not available

  15. Experimental Investigation of Burnup Credit for Safe Transport, Storage, and Disposal of Spent Nuclear Fuel

    International Nuclear Information System (INIS)

    Harms, Gary A.; Helmick, Paul H.; Ford, John T.; Walker, Sharon A.; Berry, Donald T.; Pickard, Paul S.

    2004-01-01

    This report describes criticality benchmark experiments containing rhodium that were conducted as part of a Department of Energy Nuclear Energy Research Initiative project. Rhodium is an important fission product absorber. A capability to perform critical experiments with low-enriched uranium fuel was established as part of the project. Ten critical experiments, some containing rhodium and others without, were conducted. The experiments were performed in such a way that the effects of the rhodium could be accurately isolated. The use of the experimental results to test neutronics codes is demonstrated by example for two Monte Carlo codes. These comparisons indicate that the codes predict the behavior of the rhodium in the critical systems within the experimental uncertainties. The results from this project, coupled with the results of follow-on experiments that investigate other fission products, can be used to quantify and reduce the conservatism of spent nuclear fuel safety analyses while still providing the necessary level of safety

  16. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    This paper discusses benchmarking in academic pharmacy and offers recommendations for the potential uses of benchmarking in academic pharmacy departments. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  17. Toward Automated Benchmarking of Atomistic Force Fields: Neat Liquid Densities and Static Dielectric Constants from the ThermoML Data Archive.

    Science.gov (United States)

    Beauchamp, Kyle A; Behr, Julie M; Rustenburg, Ariën S; Bayly, Christopher I; Kroenlein, Kenneth; Chodera, John D

    2015-10-08

    Atomistic molecular simulations are a powerful way to make quantitative predictions, but the accuracy of these predictions depends entirely on the quality of the force field employed. Although experimental measurements of fundamental physical properties offer a straightforward approach for evaluating force field quality, the bulk of this information has been tied up in formats that are not machine-readable. Compiling benchmark data sets of physical properties from non-machine-readable sources requires substantial human effort and is prone to the accumulation of human errors, hindering the development of reproducible benchmarks of force-field accuracy. Here, we examine the feasibility of benchmarking atomistic force fields against the NIST ThermoML data archive of physicochemical measurements, which aggregates thousands of experimental measurements in a portable, machine-readable, self-annotating IUPAC-standard format. As a proof of concept, we present a detailed benchmark of the generalized Amber small-molecule force field (GAFF) using the AM1-BCC charge model against experimental measurements (specifically, bulk liquid densities and static dielectric constants at ambient pressure) automatically extracted from the archive, and discuss the extent of data available for use in larger-scale (or continuously performed) benchmarks. The results of even this limited initial benchmark highlight a general problem with fixed-charge force fields in the representation of low-dielectric environments, such as those seen in binding cavities or biological membranes.
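The comparison this abstract describes reduces, at its core, to scoring simulated bulk properties against the experimental reference values extracted from ThermoML. A minimal sketch of that scoring step, with invented compound names and density values (not data from the paper):

```python
# Hypothetical sketch: scoring simulated neat-liquid densities against
# experimental reference values, as in a force-field benchmark.
# All numbers below are illustrative, not taken from the paper.
import math

# (compound, simulated density, experimental density) in g/cm^3
records = [
    ("ethanol",     0.805, 0.789),
    ("toluene",     0.873, 0.867),
    ("cyclohexane", 0.779, 0.774),
]

def relative_errors(data):
    """Per-compound signed relative error of simulation vs. experiment."""
    return {name: (sim - exp) / exp for name, sim, exp in data}

def rms_relative_error(data):
    """Root-mean-square relative error over the whole benchmark set."""
    errs = [(sim - exp) / exp for _, sim, exp in data]
    return math.sqrt(sum(e * e for e in errs) / len(errs))

print(relative_errors(records))
print(f"RMS relative error: {rms_relative_error(records):.3%}")
```

A real benchmark of this kind would loop over thousands of archive entries and also propagate the experimental uncertainties; the aggregate statistic is the same idea.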

  18. Experimental study and technique for calculation of critical heat fluxes in helium boiling in tubes

    International Nuclear Information System (INIS)

    Arkhipov, V.V.; Kvasnyuk, S.V.; Deev, V.I.; Andreev, V.K.

    1979-01-01

    Studied is the effect of regime parameters on critical heat loads for helium boiling in a vertical tube, over a range of mass velocities beginning at 80 and pressures of 100<=p<=200 kPa, for the vapor-content range corresponding to the heat-exchange crisis of the first kind. A method for calculating critical heat fluxes is proposed that describes the experimental data with an error of less than ±15%. Within the investigated ranges of regime parameters, the critical heat loads for helium boiling in tubes decrease with growing pressure and vapor content. Both positive and negative effects of the mass velocity on the critical heat flux are observed. The proposed calculation method describes the experimental data satisfactorily.
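The ±15% figure quoted in the abstract is an error band between correlation and measurement. A minimal sketch of such a check, with invented heat-flux values (not the authors' correlation or data):

```python
# Illustrative sketch: testing whether a candidate critical-heat-flux
# correlation reproduces measured values within a +-15% band.
# The measured and predicted values below are invented.
measured  = [1.20, 0.95, 0.80, 0.60]  # kW/m^2, hypothetical
predicted = [1.10, 1.00, 0.85, 0.55]  # kW/m^2, hypothetical

def within_band(meas, pred, tol=0.15):
    """True if every prediction lies within +-tol of its measurement."""
    return all(abs(p - m) / m <= tol for m, p in zip(meas, pred))

print(within_band(measured, predicted))
```

Each hypothetical pair above deviates by less than 15%, so the check passes; a single point outside the band would fail it.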

  19. Benchmark calculations for critical experiments at FKBN-M facility with uranium-plutonium-polyethylene systems using JENDL-3.2 and MVP Monte-Carlo code

    International Nuclear Information System (INIS)

    Obara, Toru; Morozov, A.G.; Kevrolev, V.V.; Kuznetsov, V.V.; Treschalin, S.A.; Lukin, A.V.; Terekhin, V.A.; Sokolov, Yu.A.; Kravchenko, V.G.

    2000-01-01

    Benchmark calculations were performed for critical experiments at FKBN-M facility in RFNC-VNIITF, Russia using JENDL-3.2 nuclear data library and continuous energy Monte-Carlo code MVP. The fissile materials were high-enriched uranium and plutonium. Polyethylene was used as moderator. The neutron spectrum was changed by changing the geometry. Calculation results by MVP showed some errors. Discussion was made by reaction rates and η values obtained by MVP. It showed the possibility that cross sections of U-235 had different trend of error in fast and thermal energy region respectively. It also showed the possibility of some error of cross section of Pu-239 in high energy region. (author)

  20. Improved experimental determination of critical-point data for tungsten

    International Nuclear Information System (INIS)

    Fucke, W.; Seydel, U.

    1980-01-01

    It is shown that under certain conditions in resistive pulse-heating experiments, refractory liquid metals can be heated up to the limit of thermodynamic stability (spinodal) of the superheated liquid. Here, an explosion-like decomposition takes place which is directly monitored by measurements of expansion, surface radiation, and electric resistivity, thus allowing the determination of the temperature-pressure dependence of the spinodal transition. A comparison of the spinodal equation obtained this way with theoretical models yields the critical temperature T_c, pressure p_c, and volume v_c. A completely experimentally determined set of the critical parameters for tungsten is presented: T_c = (13400 ± 1400) K, p_c = (3370 ± 850) bar, v_c = (43 ± 4) cm³ mol⁻¹. (author)

  1. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations for moving the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Benchmark for evaluation and validation of reactor simulations (BEAVRS)

    Energy Technology Data Exchange (ETDEWEB)

    Horelik, N.; Herman, B.; Forget, B.; Smith, K. [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States)

    2013-07-01

    Advances in parallel computing have made possible the development of high-fidelity tools for the design and analysis of nuclear reactor cores, and such tools require extensive verification and validation. This paper introduces BEAVRS, a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading patterns, and numerous in-vessel components. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from fifty-eight instrumented assemblies. Initial comparisons between calculations performed with MIT's OpenMC Monte Carlo neutron transport code and measured cycle 1 HZP test data are presented, and these results display an average deviation of approximately 100 pcm for the various critical configurations and control rod worth measurements. Computed HZP radial fission detector flux maps also agree reasonably well with the available measured data. All results indicate that this benchmark will be extremely useful in validation of coupled-physics codes and uncertainty quantification of in-core physics computational predictions. The detailed BEAVRS specification and its associated data package is hosted online at the MIT Computational Reactor Physics Group web site (http://crpg.mit.edu/), where future revisions and refinements to the benchmark specification will be made publicly available. (authors)
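The "approximately 100 pcm" deviation quoted above is a difference between calculated and measured eigenvalues expressed in pcm (1 pcm = 1e-5 in k-effective). A minimal sketch of that metric, using one common convention (a simple difference in k) and invented k values, not BEAVRS results:

```python
# Sketch of the pcm deviation metric for criticality benchmarks.
# Convention assumed here: simple difference in k-effective times 1e5.
# The k values below are invented, not BEAVRS data.
def deviation_pcm(k_calc, k_meas=1.0):
    """Deviation of a calculated k-eff from measurement, in pcm."""
    return (k_calc - k_meas) * 1e5

configs = {"ARO": 1.00082, "rod bank D in": 0.99917}
for name, k in configs.items():
    print(f"{name}: {deviation_pcm(k):+.0f} pcm")
```

For a measured critical configuration k_meas is 1 by definition, so a calculated k-eff of 1.00100 corresponds to a +100 pcm deviation.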

  3. OECD/NEA International Benchmark exercises: Validation of CFD codes applied to the nuclear industry

    Energy Technology Data Exchange (ETDEWEB)

    Pena-Monferrer, C.; Miquel veyrat, A.; Munoz-Cobo, J. L.; Chiva Vicent, S.

    2016-08-01

    In recent years, owing in part to the slowdown of the nuclear industry, investment in the development and validation of CFD codes applied specifically to the problems of the nuclear industry has been seriously hampered. The International Benchmark Exercises (IBE) sponsored by the OECD/NEA have therefore been fundamental to analyzing the use of CFD codes in the nuclear industry, because although these codes are mature in many fields, doubts remain about them in critical aspects of thermal-hydraulic calculations, even in single-phase scenarios. The Polytechnic University of Valencia (UPV) and the Universitat Jaume I (UJI), sponsored by the Nuclear Safety Council (CSN), have actively participated in all benchmarks proposed by the NEA, as well as in the expert meetings. This paper summarizes that participation in the various IBEs, describing each benchmark itself, the CFD model created for it, and the main conclusions. (Author)

  4. World-Wide Benchmarking of ITER Nb$_{3}$Sn Strand Test Facilities

    CERN Document Server

    Jewell, MC; Takahashi, Yoshikazu; Shikov, Alexander; Devred, Arnaud; Vostner, Alexander; Liu, Fang; Wu, Yu; Jewell, Matthew C; Boutboul, Thierry; Bessette, Denis; Park, Soo-Hyeon; Isono, Takaaki; Vorobieva, Alexandra; Martovetsky, Nicolai; Seo, Kazutaka

    2010-01-01

    The world-wide procurement of Nb$_{3}$Sn and NbTi for the ITER superconducting magnet systems will involve eight to ten strand suppliers from six Domestic Agencies (DAs) on three continents. To ensure accurate and consistent measurement of the physical and superconducting properties of the composite strand, a strand test facility benchmarking effort was initiated in August 2008. The objectives of this effort are to assess and improve the superconducting strand test and sample preparation technologies at each DA and supplier, in preparation for the more than ten thousand samples that will be tested during ITER procurement. The present benchmarking includes tests for critical current (I-c), n-index, hysteresis loss (Q(hys)), residual resistivity ratio (RRR), strand diameter, Cu fraction, twist pitch, twist direction, and metal plating thickness (Cr or Ni). Nineteen participants from six parties (China, EU, Japan, South Korea, Russia, and the United States) have participated in the benchmarking. This round, cond...

  5. Validation of flexible multibody dynamics beam formulations using benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Bauchau, Olivier A., E-mail: obauchau@umd.edu [University of Maryland (United States); Betsch, Peter [Karlsruhe Institute of Technology (Germany); Cardona, Alberto [CIMEC (UNL/Conicet) (Argentina); Gerstmayr, Johannes [Leopold-Franzens Universität Innsbruck (Austria); Jonker, Ben [University of Twente (Netherlands); Masarati, Pierangelo [Politecnico di Milano (Italy); Sonneville, Valentin [Université de Liège (Belgium)

    2016-05-15

    As the need to model flexibility arose in multibody dynamics, the floating frame of reference formulation was developed, but this approach can yield inaccurate results when elastic displacements become large. While the use of three-dimensional finite element formulations overcomes this problem, the associated computational cost is overwhelming. Consequently, beam models, which are one-dimensional approximations of three-dimensional elasticity, have become the workhorse of many flexible multibody dynamics codes. Numerous beam formulations have been proposed, such as the geometrically exact beam formulation or the absolute nodal coordinate formulation, to name just two. New solution strategies have been investigated as well, including the intrinsic beam formulation and the DAE approach. This paper provides a systematic comparison of these various approaches, which are assessed by comparing their predictions for four benchmark problems. The first problem is the Princeton beam experiment, a study of the static large-displacement and rotation behavior of a simple cantilevered beam under a gravity tip load. The second problem, the four-bar mechanism, focuses on a flexible mechanism involving beams and revolute joints. The third problem investigates the behavior of a beam bent in its plane of greatest flexural rigidity, resulting in lateral buckling when a critical value of the transverse load is reached. The last problem investigates the dynamic stability of a rotating shaft. The predictions of eight independent codes are compared for these four benchmark problems and are found to be in close agreement with each other and with experimental measurements, when available.

  6. Practice benchmarking in the age of targeted auditing.

    Science.gov (United States)

    Langdale, Ryan P; Holland, Ben F

    2012-11-01

    The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists.

  7. Critical experiment program of heterogeneous core composed for LWR fuel rods and low enriched uranyl nitrate solution

    International Nuclear Information System (INIS)

    Miyoshi, Yoshinori; Yamamoto, Toshihiro; Watanabe, Shouichi; Nakamura, Takemi

    2003-01-01

    In order to simulate the criticality characteristics of a dissolver in a reprocessing plant, a critical experiment program on heterogeneous cores is ongoing at the Static Critical Experimental Facility (STACY) of the Japan Atomic Energy Research Institute (JAERI). The experimental system is composed of a 5 wt% enriched PWR-type fuel rod array immersed in 6 wt% enriched uranyl nitrate solution. The first series of experiments comprises basic benchmark experiments on fundamental critical data, intended to validate criticality calculation codes for the 'general-form system' classified in the Japanese Criticality Safety Handbook (JCSHB). The second series of experiments concerns the neutron-absorber effects of fission products related to burn-up credit Level 2. To demonstrate the reactivity effects of fission products, the reactivity effects of natural elements such as Sm, Nd and Eu, and of Rh-103 and Cs-133, dissolved in the nitrate solution are to be measured. The objective of the third series of experiments is to validate the effect of gadolinium as a soluble neutron poison. Temperature coefficients and kinetic parameters are also studied, since these parameters are important for evaluating the transient behavior of a criticality accident. (author)

  8. Inelastic finite element analysis of a pipe-elbow assembly (benchmark problem 2)

    Energy Technology Data Exchange (ETDEWEB)

    Knapp, H P [Internationale Atomreaktorbau GmbH (INTERATOM) Bergisch Gladbach (Germany); Prij, J [Netherlands Energy Research Foundation (ECN) Petten (Netherlands)

    1979-06-01

    In the scope of the international benchmark problem effort on piping systems, benchmark problem 2 consisting of a pipe elbow assembly, subjected to a time dependent in-plane bending moment, was analysed using the finite element program MARC. Numerical results are presented and a comparison with experimental results is made. It is concluded that the main reason for the deviation between the calculated and measured values is due to the fact that creep-plasticity interaction is not taken into account in the analysis. (author)

  9. Systematic Benchmarking of Diagnostic Technologies for an Electrical Power System

    Science.gov (United States)

    Kurtoglu, Tolga; Jensen, David; Poll, Scott

    2009-01-01

    Automated health management is a critical functionality for complex aerospace systems. A wide variety of diagnostic algorithms have been developed to address this technical challenge. Unfortunately, the lack of support to perform large-scale V&V (verification and validation) of diagnostic technologies continues to create barriers to effective development and deployment of such algorithms for aerospace vehicles. In this paper, we describe a formal framework developed for benchmarking of diagnostic technologies. The diagnosed system is the Advanced Diagnostics and Prognostics Testbed (ADAPT), a real-world electrical power system (EPS), developed and maintained at the NASA Ames Research Center. The benchmarking approach provides a systematic, empirical basis to the testing of diagnostic software and is used to provide performance assessment for different diagnostic algorithms.
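The empirical scoring such a benchmarking framework performs amounts to comparing each algorithm's diagnoses against the faults actually injected into the testbed and tallying standard metrics. A minimal sketch with invented scenario names (not the ADAPT scenario set or its actual metric definitions):

```python
# Hypothetical sketch: scoring a diagnostic algorithm against known
# injected faults, as a benchmarking framework for diagnosis might do.
# Scenario names and outcomes are invented for illustration.
scenarios = [
    ("relay_stuck", "relay_stuck"),  # (injected fault, reported diagnosis)
    ("sensor_bias", "sensor_bias"),
    ("nominal",     "relay_stuck"),  # false alarm: fault reported, none present
    ("fan_failure", None),           # missed detection: fault present, none reported
]

def score(results):
    """Return (correct isolations, false alarms, missed detections)."""
    correct = sum(1 for truth, diag in results
                  if truth != "nominal" and diag == truth)
    false_alarms = sum(1 for truth, diag in results
                       if truth == "nominal" and diag is not None)
    missed = sum(1 for truth, diag in results
                 if truth != "nominal" and diag is None)
    return correct, false_alarms, missed

print(score(scenarios))
```

Real frameworks of this kind also time-stamp detections to score detection latency; the tallying logic above is the simplest core of the idea.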

  10. Criticality Experiments Performed in Saclay and Valduc Centers, France (1958-2002)

    International Nuclear Information System (INIS)

    Barbry, F.; Grivot, P.; Girault, E.; Fouillaud, P.; Cousinou, P.; Poullot, G.; Anno, J.; Bordy, J.M.; Doutriaux, D.

    2003-01-01

    Since 1958, the Commissariat a l'Energie Atomique and then the Institut de Radioprotection et de Surete Nucleaire (previously the Institut de Protection et de Surete Nucleaire) have carried out criticality experiments first in Saclay and then in the Valduc criticality laboratory. This paper is a survey of the programs conducted during the last 45 yr with the different apparatuses. This paper also gives information about plans for the future. Programs are presented following the chronology and the International Criticality Safety Benchmark Evaluation Project classification. Among the numerous series of experiments, now 22 series (corresponding to 407 configurations) have been included in the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'

  11. A Cloud-Based Platform for Democratizing and Socializing the Benchmarking Process

    OpenAIRE

    Fuad Bajaber; Amin Shafaat; Omar Batarfi; Radwa Elshawi; Abdulrahman Altalhi; Ahmed Barnawi; Sherif Sakr

    2016-01-01

    Performance evaluation, benchmarking and reproducibility represent significant aspects for evaluating the practical impact of scientific research outcomes in the Computer Science field. In spite of all the benefits (e.g., increasing visibility, boosting impact, improving research quality) which can be obtained from conducting comprehensive and extensive experimental evaluations or providing reproducible software artifacts and detailed descriptions of the experimental setup, the required eff...

  12. Summary of ORSphere Critical and Reactor Physics Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Marshall, Margaret A.; Bess, John D.

    2016-09-01

    In the early 1970s Dr. John T. Mihalczo (team leader), J. J. Lynn, and J. R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. This critical configuration has been evaluated. Preliminary results were presented at ND2013. Since then, the evaluation was finalized and judged to be an acceptable benchmark experiment for the International Criticality Safety Benchmark Experiment Project (ICSBEP). Additionally, reactor physics measurements were performed to determine surface button worths, central void worth, delayed neutron fraction, prompt neutron decay constant, fission density and neutron importance. These measurements have been evaluated, found to be acceptable experiments, and are discussed in full detail in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. The purpose of this paper is to summarize all of the critical and reactor-physics measurement evaluations and, where possible, to compare them with the GODIVA experiment results.

  13. Development of common user data model for APOLLO3 and MARBLE and application to benchmark problems

    International Nuclear Information System (INIS)

    Yokoyama, Kenji

    2009-07-01

    A Common User Data Model, CUDM, has been developed for the purpose of benchmark calculations between the APOLLO3 and MARBLE code systems. The current version of CUDM was designed for core calculation benchmark problems with 3-dimensional Cartesian (3-D XYZ) geometry. CUDM is able to manage all input/output data such as 3-D XYZ geometry, effective macroscopic cross sections, effective multiplication factor and neutron flux. In addition, visualization tools for geometry and neutron flux were included. CUDM was designed with object-oriented techniques and implemented in the Python programming language. Based on CUDM, a prototype system for benchmark calculations, CUDM-benchmark, was also developed. CUDM-benchmark supports input/output data conversion for the IDT solver in APOLLO3, and the TRITAC and SNT solvers in MARBLE. In order to evaluate the pertinence of CUDM, CUDM-benchmark was applied to benchmark problems proposed by T. Takeda, G. Chiba and I. Zmijarevic. It was verified that CUDM-benchmark successfully reproduced the results calculated with reference input data files, and provided consistent results among all the solvers using one common input data set defined by CUDM. In addition, a detailed benchmark calculation for the Chiba benchmark was performed using CUDM-benchmark. The Chiba benchmark is a neutron transport benchmark problem for a fast criticality assembly without homogenization. This benchmark problem consists of 4 core configurations which have different sodium void regions, and each core configuration is defined by more than 5,000 fuel/material cells. In this application, it was found that the results by the IDT and SNT solvers agreed well with the reference results by a Monte Carlo code. In addition, model effects such as the quadrature set effect, SN order effect and mesh size effect were systematically evaluated and summarized in this report. (author)
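The core idea of a common user data model is one neutral container for benchmark inputs and outputs that per-code adapters read and write. A minimal sketch of that pattern in Python (the language the abstract names); the field and function names are invented, and the real CUDM is far richer:

```python
# Minimal sketch of a "common user data model" pattern: a neutral core
# description shared between solver adapters. Field names are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CoreModel:
    """Neutral 3-D XYZ core description shared between solvers."""
    mesh: tuple                       # (nx, ny, nz) Cartesian mesh dimensions
    materials: dict                   # cell id -> cross-section set label
    k_eff: Optional[float] = None     # filled in by a solver adapter
    flux: list = field(default_factory=list)  # per-cell neutron flux

def to_solver_input(model: CoreModel) -> dict:
    """Hypothetical adapter: serialize the neutral model for one solver."""
    return {"mesh": model.mesh, "materials": model.materials}

core = CoreModel(mesh=(10, 10, 10), materials={1: "fuel", 2: "sodium"})
print(to_solver_input(core)["mesh"])
```

With one such neutral model, each solver needs only a pair of conversion routines, and all solvers are guaranteed to start from identical input data, which is exactly what makes code-to-code benchmark comparisons meaningful.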

  14. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. The paper also evaluates optimization methods on recent RISC/UNIX systems, such as those from IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When a particular compiler option and math library were included in the evaluation process, systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of so-called mainframes from IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99 MHz) was defined to be the unit of performance, the EGS4 Unit. The EGS4 Benchmark Suite was also run on various PCs, such as Pentium, i486 and DEC Alpha machines, and the performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have also been evaluated for correlation with industry benchmark programs, namely SPECmark. (author)
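Defining one machine's runtime as the unit, as the abstract does with the HP9000/735, turns every other machine's measurement into a simple ratio. A minimal sketch with invented timings (not the paper's measurements):

```python
# Sketch of runtime normalization against a reference machine, in the
# spirit of the "EGS4 Unit" described above. All timings are invented.
reference_time = 120.0  # seconds for the suite on the reference machine

def relative_units(runtime_s):
    """Performance relative to the reference: >1 means faster."""
    return reference_time / runtime_s

timings = {"reference host": 120.0, "older PC": 180.0, "newer RISC": 60.0}
for host, t in timings.items():
    print(f"{host}: {relative_units(t):.2f} units")
```

The ratio form makes results portable: any lab can rank its own machines on the same scale without rerunning the suite on the reference hardware.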

  15. Benchmarking study and its application for shielding analysis of large accelerator facilities

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hee-Seock; Kim, Dong-hyun; Oranj, Leila Mokhtari; Oh, Joo-Hee; Lee, Arim; Jung, Nam-Suk [POSTECH, Pohang (Korea, Republic of)

    2015-10-15

    Shielding analysis is one of the subjects indispensable to constructing a large accelerator facility. Several methods, such as Monte Carlo, discrete ordinates, and simplified calculations, have been used for this purpose. Calculation precision can be improved by increasing the number of trials (histories), but accuracy remains a major issue in shielding analysis. To secure accuracy in Monte Carlo calculations, benchmarking against experimental data and code-to-code comparison are fundamental. In this paper, benchmarking results for electrons, protons, and heavy ions are presented, and the proper application of the results is discussed. The benchmarking calculations, which are indispensable in shielding analysis, were performed for different particles: protons, heavy ions and electrons. Four multi-particle Monte Carlo codes, MCNPX, FLUKA, PHITS, and MARS, were examined over the higher energy range relevant to large accelerator facilities. The degree of agreement between the experimental data, including the SINBAD database, and the calculated results was estimated in terms of secondary neutron production and attenuation through concrete and iron shields. The degree of discrepancy and the features of the Monte Carlo codes were investigated, and the application of the benchmarking results is discussed with a view to safety margins and to selecting a code for shielding analysis. In most cases, the tested Monte Carlo codes give credible results, apart from a few limitations of each code.

  16. Benchmarking study and its application for shielding analysis of large accelerator facilities

    International Nuclear Information System (INIS)

    Lee, Hee-Seock; Kim, Dong-hyun; Oranj, Leila Mokhtari; Oh, Joo-Hee; Lee, Arim; Jung, Nam-Suk

    2015-01-01

    Shielding analysis is one of the subjects indispensable to constructing a large accelerator facility. Several methods, such as Monte Carlo, discrete ordinates, and simplified calculations, have been used for this purpose. Calculation precision can be improved by increasing the number of trials (histories), but accuracy remains a major issue in shielding analysis. To secure accuracy in Monte Carlo calculations, benchmarking against experimental data and code-to-code comparison are fundamental. In this paper, benchmarking results for electrons, protons, and heavy ions are presented, and the proper application of the results is discussed. The benchmarking calculations, which are indispensable in shielding analysis, were performed for different particles: protons, heavy ions and electrons. Four multi-particle Monte Carlo codes, MCNPX, FLUKA, PHITS, and MARS, were examined over the higher energy range relevant to large accelerator facilities. The degree of agreement between the experimental data, including the SINBAD database, and the calculated results was estimated in terms of secondary neutron production and attenuation through concrete and iron shields. The degree of discrepancy and the features of the Monte Carlo codes were investigated, and the application of the benchmarking results is discussed with a view to safety margins and to selecting a code for shielding analysis. In most cases, the tested Monte Carlo codes give credible results, apart from a few limitations of each code.

  17. CEC thermal-hydraulic benchmark exercise on Fiploc verification experiment F2 in Battelle model containment. Experimental phases 2, 3 and 4. Results of comparisons

    International Nuclear Information System (INIS)

    Fischer, K.; Schall, M.; Wolf, L.

    1993-01-01

    The present final report comprises the major results of Phase II of the CEC thermal-hydraulic benchmark exercise on Fiploc verification experiment F2 in the Battelle model containment, experimental phases 2, 3 and 4, which was organized and sponsored by the Commission of the European Communities to further the understanding and analysis of long-term thermal-hydraulic phenomena inside containments during and after severe core accidents. This benchmark exercise received high European attention, with eight organizations from six countries participating with eight computer codes during phase 2. Altogether, 18 sets of computer code results were supplied by the participants; they constitute the basis for the comparisons with the experimental data contained in this publication, reflecting both the high technical interest in, and the complexity of, this CEC exercise. Major comparison results between computations and data are reported for all important quantities relevant to containment analyses during long-term transients. These comparisons comprise pressure, steam and air content, velocities and their directions, heat transfer coefficients, and saturation ratios. Agreements and disagreements are discussed for each participating code/institution, conclusions are drawn, and recommendations are provided. The phase 2 CEC benchmark exercise provided an up-to-date, state-of-the-art review of the thermal-hydraulic capabilities of present computer codes for containment analyses. This exercise has shown that all of the participating codes can simulate the important global features of the experiment correctly, such as temperature stratification, pressure and leakage, heat transfer to structures, relative humidity, and collection of sump water. Several weaknesses of individual codes were identified, and this may help to promote their development. As a general conclusion it may be said that while there is still a wide area of necessary extensions and improvements, the

  18. SCALE-4 analysis of pressurized water reactor critical configurations. Volume 1: Summary

    International Nuclear Information System (INIS)

    DeHart, M.D.

    1995-03-01

    The requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. If credit is to be taken for the reduced reactivity of burned or spent fuel relative to its original fresh composition, it is necessary to benchmark computational methods used in determining such reactivity worth against spent fuel reactivity measurements. This report summarizes a portion of the ongoing effort to benchmark away-from-reactor criticality analysis methods using critical configurations from commercial pressurized water reactors (PWR). The analysis methodology utilized for all calculations in this report is based on the modules and data associated with the SCALE-4 code system. Each of the five volumes comprising this report provides an overview of the methodology applied. Subsequent volumes also describe in detail the approach taken in performing criticality calculations for these PWR configurations: Volume 2 describes criticality calculations for the Tennessee Valley Authority's Sequoyah Unit 2 reactor for Cycle 3; Volume 3 documents the analysis of Virginia Power's Surry Unit 1 reactor for the Cycle 2 core; Volume 4 documents the calculations performed based on GPU Nuclear Corporation's Three Mile Island Unit 1 Cycle 5 core; and, lastly, Volume 5 describes the analysis of Virginia Power's North Anna Unit 1 Cycle 5 core. Each of the reactor-specific volumes provides the details of calculations performed to determine the effective multiplication factor for each reactor core for one or more critical configurations using the SCALE-4 system; these results are summarized in this volume. Differences between the core designs and their possible impact on the criticality calculations are also discussed. Finally, results are presented for additional analyses performed to verify that solutions were sufficiently converged
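The validation step the report describes boils down to collecting calculated k-eff values for configurations known to be critical (k = 1 by definition) and characterizing the bias and its spread. A minimal sketch with invented k-eff values (not SCALE-4 results):

```python
# Illustrative sketch: estimating the bias of a criticality code from a
# set of calculated k-eff values for known-critical configurations.
# The k-eff values below are invented for illustration.
import statistics

k_calc = [0.9991, 1.0003, 0.9987, 0.9995]  # hypothetical calculated k-eff

bias = statistics.mean(k_calc) - 1.0  # mean deviation from criticality (k = 1)
spread = statistics.stdev(k_calc)     # sample standard deviation
print(f"bias = {bias:+.5f}, spread = {spread:.5f}")
```

In actual criticality safety practice the bias and its uncertainty are then folded into the subcritical margin applied to storage and transport analyses.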

  19. Reactor based plutonium disposition - physics and fuel behaviour benchmark studies of an OECD/NEA experts group

    International Nuclear Information System (INIS)

    D'Hondt, P.; Gehin, J.; Na, B.C.; Sartori, E.; Wiesenack, W.

    2001-01-01

    One of the options envisaged for disposing of weapons-grade plutonium, declared surplus to national defence needs in the Russian Federation and the USA, is to burn it in nuclear power reactors. The scientific and technical know-how accumulated in the use of MOX as a fuel for electricity generation is of great relevance to the plutonium disposition programmes. An Expert Group of the OECD/NEA is carrying out a series of benchmarks with the aim of facilitating the use of this know-how for meeting this objective. This paper describes the background that led to establishing the Expert Group and the present status of results from these benchmarks. The benchmark studies cover a theoretical reactor physics benchmark on a VVER-1000 core loaded with MOX, two experimental benchmarks on MOX lattices, and a benchmark concerned with MOX fuel behaviour for both solid and hollow pellets. First conclusions are outlined, as well as future work. (author)

  20. Burn-up Credit Criticality Safety Benchmark-Phase II-E. Impact of Isotopic Inventory Changes due to Control Rod Insertions on Reactivity and the End Effect in PWR UO2 Fuel Assemblies

    International Nuclear Information System (INIS)

    Neuber, Jens Christian; Tippl, Wolfgang; Hemptinne, Gwendoline de; Maes, Philippe; Ranta-aho, Anssu; Peneliau, Yannick; Jutier, Ludyvine; Tardy, Marcel; Reiche, Ingo; Kroeger, Helge; Nakata, Tetsuo; Armishaw, Malcom; Miller, Thomas M.

    2015-01-01

    This report describes the final results of the Phase II-E Burn-up Credit Criticality Benchmark conducted by the Expert Group on Burn-up Credit Criticality Safety. The objective of Phase II of the Burn-up Credit Criticality Safety programme is to study the impact of axial burn-up profiles of PWR UO2 spent fuel assemblies on the reactivity of PWR UO2 spent fuel assembly configurations. The objective of the Phase II-E benchmark was to study the impact of changes in the spent-fuel isotopic composition, caused by control rod insertion during depletion, on the reactivity and the end effect of spent fuel assemblies with realistic axial burn-up profiles, for control rod insertion depths ranging from 0 cm (no insertion) to full insertion (i.e. the case in which the fuel assemblies were exposed to control rod insertion over their full active length). For this purpose, two axial burn-up profiles were extracted from an AREVA-NP-GmbH-owned 17x17-(24+1) PWR UO2 spent fuel assembly burn-up profile database. One profile has an average burn-up of 30 MWd/kg U; the other is related to an average burn-up of 50 MWd/kg U. Two profiles with different average burn-up values were selected because the shape of the burn-up profile is affected by the average burn-up, and the end effect depends on the average burn-up of the fuel. The Phase II-E benchmark exercise complements the Phase II-C and Phase II-D exercises. In Phase II-D, different irradiation histories were analysed, using different control rod insertion histories during depletion as well as irradiation histories without control rod insertion; but in all the histories analysed, a uniform distribution of the burn-up, and hence of the isotopic composition, was assumed, and in all the histories involving control rods, full insertion was assumed. In Phase II-C the impact of the asymmetry of axial burn-up profiles on the reactivity and the end effect of

  1. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine whether a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1, Web-based Benchmarking, was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While several other benchmarking tools were available to California consumers prior to the development of Cal-Arch, none were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate region and end use. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
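
    The core operation described above, ranking a building's energy use intensity (EUI) against a peer distribution, can be sketched in a few lines of Python. The function and the sample peer data below are hypothetical illustrations, not Cal-Arch code:

```python
# Percentile benchmarking of a building's energy use intensity (EUI)
# against a peer group, as in whole-building benchmarking tools.

def benchmark_percentile(eui, peer_euis):
    """Return the fraction of peer buildings that use less energy than `eui`."""
    below = sum(1 for p in peer_euis if p < eui)
    return below / len(peer_euis)

# Hypothetical peer EUIs (kBtu/sqft/yr) for similar offices in one climate zone
peers = [45, 52, 60, 63, 70, 75, 81, 90, 102, 120]

rank = benchmark_percentile(68, peers)
print(f"This building uses more energy than {rank:.0%} of its peers")
```

    A real tool would additionally filter the peer group by building type, size, and climate region before computing the rank.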

  2. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...

  3. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    Science.gov (United States)

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.
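
    The turnaround-time figures quoted above can be checked with simple arithmetic:

```python
baseline = 19.9   # minutes, initial turnaround time at St. Joseph's
improved = 16.3   # minutes, average after the team's solutions
benchmark = 13.5  # minutes, 1992 national benchmark

improvement = (baseline - improved) / baseline
gap = improved - benchmark
print(f"{improvement:.0%} improvement")          # matches the 18% reported
print(f"{gap:.1f} min above the national benchmark")
```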

  4. Neutronic computational modeling of the ASTRA critical facility using MCNPX

    International Nuclear Information System (INIS)

    Rodriguez, L. P.; Garcia, C. R.; Milian, D.; Milian, E. E.; Brayner, C.

    2015-01-01

    The Pebble Bed Very High Temperature Reactor is considered a prominent candidate among Generation IV nuclear energy systems. Nevertheless, it faces an important challenge due to the insufficient validation of the computer codes currently available for use in its design and safety analysis. In this paper, a detailed IAEA computational benchmark, announced in IAEA-TECDOC-1694 in the framework of the Coordinated Research Project 'Evaluation of High Temperature Gas Cooled Reactor (HTGR) Performance', was solved in support of the Generation IV computer code validation effort, using the MCNPX ver. 2.6e computational code. IAEA-TECDOC-1694 summarizes a set of four calculational benchmark problems based on experiments performed at the ASTRA critical facility. The benchmark problems include criticality experiments, control rod worth measurements and reactivity measurements. The ASTRA critical facility at the Kurchatov Institute in Moscow was used to simulate the neutronic behavior of nuclear pebble bed reactors. (Author)

  5. Experimental and Numerical Analysis of S-CO2 Critical Flow for SFR Recovery System Design

    International Nuclear Information System (INIS)

    Kim, Min Seok; Jung, Hwa-Young; Ahn, Yoonhan; Lee, Jekyoung; Lee, Jeong Ik

    2016-01-01

    This paper presents both numerical and experimental studies of the critical flow of S-CO2, with special attention given to turbo-machinery seal design. A computational critical flow model is described first. Experiments were then conducted to validate the critical flow model; various conditions were tested to study the flow characteristics and provide validation data for the model. A comparison of the numerical and experimental results for S-CO2 critical flow is presented. To eliminate the sodium-water reaction (SWR), a concept coupling the supercritical CO2 (S-CO2) cycle with the SFR has been proposed. It is known that, for a closed system, controlling the inventory is important for stable operation and for achieving high efficiency. Since the S-CO2 power cycle is a highly pressurized system, a certain amount of leakage flow through the seals of the rotating turbo-machinery is inevitable. To simulate the CO2 leak flow in a turbo-machinery with higher accuracy in the future, the real gas effect and friction factor will be considered in the CO2 critical flow model. Moreover, the experimentally obtained temperatures differed somewhat from the numerically obtained temperatures, due to insufficient insulation and the large thermal inertia of the CO2 critical flow facility. Insulation on the connecting pipes and the low-pressure tank will be added and additional tests will be conducted
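
    The paper's model is for real-gas S-CO2, but the ideal-gas choked-nozzle limit gives a useful first estimate of the critical (choked) mass flux through a seal. The sketch below uses only the standard ideal-gas relation; the stated conditions and properties are illustrative assumptions, not values from the study:

```python
import math

def ideal_gas_critical_mass_flux(p0, t0, gamma, r_specific):
    """Choked mass flux G [kg/(m^2 s)] for an ideal gas expanding from
    stagnation pressure p0 [Pa] and temperature t0 [K] through a nozzle."""
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return p0 * math.sqrt(gamma / (r_specific * t0)) * term

# CO2: specific gas constant ~188.9 J/(kg K); gamma ~1.3 (ideal-gas estimate)
G = ideal_gas_critical_mass_flux(p0=8.0e6, t0=320.0, gamma=1.3, r_specific=188.9)
print(f"estimated critical mass flux: {G:,.0f} kg/(m2 s)")
```

    Near the critical point the real-gas corrections the authors discuss are large, so this estimate only sets the scale of the leakage flow.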

  6. Simple mathematical law benchmarks human confrontations

    Science.gov (United States)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-01-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another – from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a ‘lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds. PMID:24322528
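
    The "simple mathematical law" of the paper is a power-law escalation of the time between successive events, tau_n = tau_1 * n^(-b). A minimal sketch of recovering b by least squares in log-log space, run on synthetic data rather than the study's datasets:

```python
import math

def fit_escalation_exponent(taus):
    """Fit tau_n = tau_1 * n**(-b): linear regression of log(tau_n) on log(n)."""
    xs = [math.log(n) for n in range(1, len(taus) + 1)]
    ys = [math.log(t) for t in taus]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # b is minus the log-log slope

# Synthetic inter-event times generated with tau_1 = 100 days, b = 0.7
taus = [100.0 * n ** -0.7 for n in range(1, 21)]
b = fit_escalation_exponent(taus)
print(round(b, 3))  # recovers 0.7
```

    On real event data the fit is done the same way, with the exponent b then compared across domains.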

  7. Simple mathematical law benchmarks human confrontations

    Science.gov (United States)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.

  8. Analysis of the BFS-62 critical experiment. A report produced for BNFL (Joint European contribution)

    International Nuclear Information System (INIS)

    Newton, T.D.; Hosking, J.G.; Smith, P.J.

    2004-01-01

    A benchmark analysis for a hybrid UOX/MOX fuelled core of the BN-600 reactor was proposed during the first Research Co-ordination Meeting of the IAEA Co-ordinated Research Project 'Updated Codes and Methods to Reduce Calculational Uncertainties of LMFR Reactivity Effects'. Phase 5 of the benchmark focuses on validation of calculated sodium void coefficient distributions and integral reactivity coefficients by comparison with experimental measurements made in the critical facility BFS-62. The European participation in Phase 5 of the benchmark analyses consists of a joint contribution from France (CEA Cadarache) and the UK (Serco Assurance Winfrith, sponsored by BNFL). Calculations have been performed using the ERANOS code and data system, which has been developed in the framework of the European collaboration on fast reactors. Results are presented in this paper for the sodium void reactivity effect based on calculated values of the absolute core reactivity. The spatial distribution of the void effect, determined using first-order perturbation theory with the diffusion theory approximation, is also presented

  9. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then describe in more detail what benchmarking is, on the basis of four different applications of benchmarking. The regulation of utility companies is then treated, after which...

  10. Experimental studies of the critical scattering of neutrons for large scattering vectors

    International Nuclear Information System (INIS)

    Ciszewski, R.

    1972-01-01

    The most recent results concerned with the critical scattering of neutrons are reviewed. The emphasis is on the so-called thermal shift, that is the shift of the main maximum in the intensity of critically scattered neutrons with temperature changes. Four theories of this phenomenon are described and their shortcomings are shown. It has been concluded that the situation is involved at present and needs further theoretical and experimental study. (S.B.)

  11. High Energy Physics (HEP) benchmark program

    International Nuclear Information System (INIS)

    Yasu, Yoshiji; Ichii, Shingo; Yashiro, Shigeo; Hirayama, Hideo; Kokufuda, Akihiro; Suzuki, Eishin.

    1993-01-01

    High Energy Physics (HEP) benchmark programs are indispensable tools for selecting a suitable computer for a HEP application system. Industry-standard benchmark programs cannot be used for this particular kind of selection. The CERN and SSC benchmark suites are well-known HEP benchmark programs for this purpose. The CERN suite includes event reconstruction and event generator programs, while the SSC suite includes event generators. In this paper, we find that the results from these two suites are not consistent with each other, and that the result from the industry benchmark agrees with neither. We also compare benchmark results obtained with the EGS4 Monte Carlo simulation program against those from the two HEP benchmark suites, and find that the EGS4 results are not consistent with either. Industry-standard SPECmark values on various computer systems are not consistent with the EGS4 results either. Because of these inconsistencies, we point out the necessity of standardizing HEP benchmark suites. An EGS4 benchmark suite should also be developed for users of applications in fields such as medical science, nuclear power plants, nuclear physics and high energy physics. (author)

  12. Melcor benchmarking against integral severe fuel damage tests

    Energy Technology Data Exchange (ETDEWEB)

    Madni, I.K. [Brookhaven National Lab., Upton, NY (United States)

    1995-09-01

    MELCOR is a fully integrated computer code that models all phases of the progression of severe accidents in light water reactor nuclear power plants, and is being developed for the U.S. Nuclear Regulatory Commission (NRC) by Sandia National Laboratories (SNL). Brookhaven National Laboratory (BNL) has a program with the NRC to provide independent assessment of MELCOR, and a very important part of this program is to benchmark MELCOR against experimental data from integral severe fuel damage tests and against predictions of those data from more mechanistic codes such as SCDAP or SCDAP/RELAP5. Benchmarking analyses with MELCOR have been carried out at BNL for five integral severe fuel damage tests, including PBF SFD 1-1, SFD 1-4, and NRU FLHT-2. This report summarizes these analyses and their role in identifying areas of modeling strengths and weaknesses in MELCOR.

  13. PREMIUM - Benchmark on the quantification of the uncertainty of the physical models in the system thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Skorek, Tomasz; Crecy, Agnes de

    2013-01-01

    PREMIUM (Post-BEMUSE Reflood Models Input Uncertainty Methods) is an activity launched with the aim of advancing the methods used to quantify physical model uncertainties in thermal-hydraulic codes. It is endorsed by OECD/NEA/CSNI/WGAMA. The PREMIUM benchmark is addressed to all who apply uncertainty evaluation methods based on the quantification and propagation of input uncertainties. The benchmark is based on a selected case of uncertainty analysis applied to the simulation of quench front propagation in an experimental test facility. Application to an experiment enables evaluation and confirmation of the quantified probability distribution functions on the basis of experimental data. The scope of the benchmark comprises a review of the existing methods, selection of potentially important uncertain input parameters, preliminary quantification of the ranges and distributions of the identified parameters, evaluation of the probability density functions using experimental results of tests performed on the FEBA test facility, and confirmation/validation of the quantification on the basis of a blind calculation of the Reflood 2-D PERICLES experiment. (authors)
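
    The propagation step at the heart of PREMIUM can be sketched generically: draw each uncertain model parameter from its quantified distribution, run the code, and examine the output spread. Here the "code run" is a hypothetical stand-in function, and the input ranges are illustrative, not the benchmark's quantified values:

```python
import random
import statistics

def reflood_code_run(htc_mult, cond_mult):
    """Hypothetical stand-in for a thermal-hydraulic code run:
    returns a quench time [s] as a function of two model multipliers."""
    return 300.0 / (htc_mult * cond_mult ** 0.5)

random.seed(1)
quench_times = []
for _ in range(1000):
    htc = random.uniform(0.8, 1.2)          # heat-transfer coefficient multiplier
    cond = random.normalvariate(1.0, 0.05)  # conductivity multiplier
    quench_times.append(reflood_code_run(htc, cond))

print(f"mean quench time: {statistics.mean(quench_times):.1f} s, "
      f"spread (1 sigma): {statistics.stdev(quench_times):.1f} s")
```

    In the benchmark itself the input distributions are first quantified against FEBA data and then confirmed by blind prediction of the PERICLES tests; the sampling and propagation mechanics, however, look much like the above.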

  14. Benchmarking Tool Kit.

    Science.gov (United States)

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  15. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, RJ

    2001-02-02

    The Task Force on Reactor-Based Plutonium Disposition, now an Expert Group, was set up through the Organization for Economic Cooperation and Development/Nuclear Energy Agency to facilitate technical assessments of burning weapons-grade plutonium mixed-oxide (MOX) fuel in U.S. pressurized-water reactors and Russian VVER nuclear reactors. More than ten countries participated to advance the work of the Task Force in a major initiative, which was a blind benchmark study to compare code benchmark calculations against experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At the Oak Ridge National Laboratory, the HELIOS-1.4 code was used to perform a comprehensive study of pin-cell and core calculations for the VENUS-2 benchmark.

  16. Validation of SCALE-4 criticality sequences using ENDF/B-V data

    International Nuclear Information System (INIS)

    Bowman, S.M.; Wright, R.Q.; DeHart, M.D.; Taniuchi, H.

    1993-01-01

    The SCALE code system developed at Oak Ridge National Laboratory contains criticality safety analysis sequences that include the KENO V.a Monte Carlo code for calculation of the effective multiplication factor. These sequences are widely used for criticality safety analyses performed both in the United States and abroad. The purpose of the current work is to validate the SCALE-4 criticality sequences with an ENDF/B-V cross-section library for future distribution with SCALE-4. The library used for this validation is a broad-group library (44 groups) collapsed from the 238-group SCALE library. Extensive data testing of both the 238-group and the 44-group libraries included 10 fast and 18 thermal CSEWG benchmarks and 5 other fast benchmarks. Both libraries contain approximately 300 nuclides and are therefore capable of modeling most systems, including those containing spent fuel or radioactive waste. The validation of the broad-group library used 93 critical experiments as benchmarks. The experiments included 60 light-water-reactor fuel rod lattices, 13 mixed-oxide fuel rod lattices, and 15 other low- and high-enriched uranium critical assemblies
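
    In outline, such a validation reduces to computing statistics of the calculated k-eff over the benchmark set, since each experiment is critical (expected k = 1). The sketch below uses hypothetical k-eff values; a real validation per ANSI/ANS-8.1 adds trend analysis and statistical tolerance limits to set the upper subcritical limit:

```python
import statistics

# Hypothetical Monte Carlo k-eff results for critical benchmark experiments
calc_keff = [0.9962, 1.0011, 0.9987, 1.0024, 0.9941, 0.9998, 1.0005, 0.9979]

bias = statistics.mean(calc_keff) - 1.0  # mean deviation from critical
sigma = statistics.stdev(calc_keff)     # spread over the benchmark set
print(f"bias = {bias:+.4f}, sigma = {sigma:.4f}")
```

    A negative bias means the library tends to under-predict criticality, which must be accounted for when setting subcritical margins.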

  17. Recommendations for Benchmarking Preclinical Studies of Nanomedicines.

    Science.gov (United States)

    Dawidczyk, Charlene M; Russell, Luisa M; Searson, Peter C

    2015-10-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small-molecule drug therapy for cancer and to achieve both therapeutic and diagnostic functions in the same platform. Preclinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of preclinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of preclinical trials and propose a protocol for benchmarking that we recommend be included in in vivo preclinical studies of drug-delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies. ©2015 American Association for Cancer Research.

  18. Role of (n, xn) reactions in ADS, IAEA-benchmark and the Dubna ...

    Indian Academy of Sciences (India)

    The Dubna Cascade Code (version 2004) has been used for the Monte Carlo simulation of a 1500 MW accelerator-driven sub-critical system (ADS) with 233U + 232Th fuel, using the IAEA benchmark. Neutron spectra, cross-sections of (n, xn) reactions, isotopic yields, heat spectra, etc. are simulated. Many of these results ...

  19. Summary of the First Workshop on OECD/NRC boiling water reactor turbine trip benchmark

    International Nuclear Information System (INIS)

    2000-11-01

    The reference problem chosen for simulation in a BWR is a Turbine Trip transient, which begins with a sudden Turbine Stop Valve (TSV) closure. The pressure oscillation generated in the main steam piping propagates with relatively little attenuation into the reactor core. The induced core pressure oscillation results in dramatic changes of the core void distribution and fluid flow. The magnitude of the neutron flux transient taking place in the BWR core is strongly affected by the initial rate of pressure rise caused by pressure oscillation and has a strong spatial variation. The correct simulation of the power response to the pressure pulse and subsequent void collapse requires a 3-D core modeling supplemented by 1-D simulation of the remainder of the reactor coolant system. A BWR TT benchmark exercise, based on a well-defined problem with complete set of input specifications and reference experimental data, has been proposed for qualification of the coupled 3-D neutron kinetics/thermal-hydraulic system transient codes. Since this kind of transient is a dynamically complex event with reactor variables changing very rapidly, it constitutes a good benchmark problem to test the coupled codes on both levels: neutronics/thermal-hydraulic coupling and core/plant system coupling. Subsequently, the objectives of the proposed benchmark are: comprehensive feedback testing and examination of the capability of coupled codes to analyze complex transients with coupled core/plant interactions by comparison with actual experimental data. The benchmark consists of three separate exercises: Exercise 1 - Power vs. Time Plant System Simulation with Fixed Axial Power Profile Table (Obtained from Experimental Data). Exercise 2 - Coupled 3-D Kinetics/Core Thermal-Hydraulic BC Model and/or 1-D Kinetics Plant System Simulation. Exercise 3 - Best-Estimate Coupled 3-D Core/Thermal-Hydraulic System Modeling. 
This first workshop was focused on technical issues connected with the first draft of

  20. Critical considerations when planning experimental in vivo studies in dental traumatology.

    Science.gov (United States)

    Andreasen, Jens O; Andersson, Lars

    2011-08-01

    In vivo studies are sometimes needed to understand healing processes after trauma. For several reasons, not the least ethical, such studies have to be carefully planned and important considerations have to be taken into account about suitability of the experimental model, sample size and optimizing the accuracy of the analysis. Several manuscripts of in vivo studies are submitted for publication to Dental Traumatology and rejected because of inadequate design, methodology or insufficient documentation of the results. The authors have substantial experience in experimental in vivo studies of tissue healing in dental traumatology and share their knowledge regarding critical considerations when planning experimental in vivo studies. © 2011 John Wiley & Sons A/S.

  1. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Sitnikov, Catalina; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and experience of others to improve the enterprise. Starting from an analysis of performance, highlighting the strengths and weaknesses of the enterprise, it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how the processes in the value chain are performed. The approach based on the vision “from the whole towards the parts” (a fragmented image of the enterprise’s value chain) redu...

  2. Benchmark study of some thermal and structural computer codes for nuclear shipping casks

    International Nuclear Information System (INIS)

    Ikushima, Takeshi; Kanae, Yoshioki; Shimada, Hirohisa; Shimoda, Atsumu; Halliquist, J.O.

    1984-01-01

    There are many computer codes which could be applied to the design and analysis of nuclear material shipping casks. One of the problems which the designer of a shipping cask faces is the decision regarding the choice of the computer codes to be used. To address this, thermal and structural benchmark tests for nuclear shipping casks were carried out to clarify the adequacy of the calculated results. The calculated results are compared with experimental ones. This report describes the results and discussion of the benchmark tests. (author)

  3. Development and experimental qualification of the new safety-criticality CRISTAL package

    International Nuclear Information System (INIS)

    Mattera, Ch.

    1998-11-01

    This thesis is concerned with criticality-safety studies related to the French nuclear fuel cycle. We first describe the steps in the nuclear fuel cycle and the specific characteristics of these studies compared with those performed in reactor physics. In order to respond to the future requirements of the French nuclear programme, we have developed a new package, CRISTAL, based on a recent cross-section library (CEA 93) and the newest accurate codes (APOLLO 2, MORET 4, TRIPOLI 4). The CRISTAL system includes two calculation routes: a design route, which will be used by French industry (COGEMA/SGN), and a reference route. To transfer this package to French industry, we have elaborated calculation schemes for fissile solutions, dissolver media, transport casks and storage pools. These schemes have then been used for the experimental validation of CRISTAL. We have also contributed to the CRISTAL experimental database by re-evaluating a French storage pool experiment, the CRISTO II experiment. This re-evaluation has been submitted to the OECD working group so that the experiment can be used by international criticality safety engineers to validate calculation methods. This work represents a large contribution to the recommendation of accurate calculation schemes and to the experimental validation of the CRISTAL package. These studies met the expectations of French industry. (author)

  4. Development and experimental testing of the new safety-criticality Cristal package

    International Nuclear Information System (INIS)

    Mattera, Ch.

    1998-01-01

    This thesis is concerned with criticality-safety studies related to the French nuclear fuel cycle. We first describe the steps in the nuclear fuel cycle and the specific characteristics of these studies compared with those performed in reactor physics. In order to respond to the future requirements of the French nuclear programme, we have developed a new package, CRISTAL, based on a recent cross-section library (CEA93) and the newest accurate codes (APOLLO2, MORET4, TRIPOLI4). The CRISTAL system includes two calculation routes: a design route, which will be used by French industry (COGEMA/SGN), and a reference route. To transfer this package to French industry, we have elaborated calculation schemes for fissile solutions, dissolver media, transport casks and storage pools. These schemes have then been used for the experimental validation of CRISTAL. We have also contributed to the CRISTAL experimental database by re-evaluating a French storage pool experiment, the CRISTO II experiment. This re-evaluation has been submitted to the OECD working group so that the experiment can be used by international criticality safety engineers to validate calculation methods. This work represents a large contribution to the recommendation of accurate calculation schemes and to the experimental validation of the CRISTAL package. These studies met the expectations of French industry. (author)

  5. Critical Assessment of Metagenome Interpretation

    DEFF Research Database (Denmark)

    Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter

    2017-01-01

    Methods for assembly, taxonomic profiling and binning are key to interpreting metagenome data, but a lack of consensus about benchmarking complicates performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchma...

  6. Experimental critical loadings and control rod worths in LWR-PROTEUS configurations compared with MCNPX results

    International Nuclear Information System (INIS)

    Plaschy, M.; Murphy, M.; Jatuff, F.; Seiler, R.; Chawla, R.

    2006-01-01

    The PROTEUS research reactor at the Paul Scherrer Institute (PSI) has been operating since the 1960s and, owing to its high flexibility, has already permitted the investigation of a wide range of very different nuclear systems. The ongoing experimental programme, LWR-PROTEUS, was started in 1997 and concerns large-scale investigations of advanced light water reactor (LWR) fuels. To date, the different LWR-PROTEUS phases have permitted the study of more than fifteen different configurations, each of which had to be demonstrated to be operationally safe, in particular to the Swiss safety authorities. In this context, recent developments of the PSI computing capabilities have made possible the use of full-scale 3D-heterogeneous MCNPX models to accurately calculate different safety-related parameters (e.g. the critical driver loading and the shutdown rod worth). The current paper presents the MCNPX predictions of these operational characteristics for seven different LWR-PROTEUS configurations using a large number of nuclear data libraries. More specifically, this significant benchmarking exercise is based on the ENDF/B6v2, ENDF/B6v8, JEF2.2, JEFF3.0, JENDL3.2, and JENDL3.3 libraries. The results highlight certain library-specific trends in the prediction of the multiplication factor k-eff (e.g. the systematically larger reactivity calculated with JEF2.2 and the smaller reactivity associated with JEFF3.0). They also confirm the satisfactory determination of reactivity variations, for instance due to the introduction of a safety rod pair, by all calculational schemes, these calculations having been compared with experiments. (authors)

  7. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of the activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. It is the second supplement to the original benchmark book, which was first published in February 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, published in December 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing those objectives are outlined in the original edition of the benchmark book (ANL-7416, February 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  8. Synthesis of the OECD/NEA-PSI CFD benchmark exercise

    Energy Technology Data Exchange (ETDEWEB)

    Andreani, Michele, E-mail: Michele.andreani@psi.ch; Badillo, Arnoldo; Kapulla, Ralf

    2016-04-01

    Highlights: • A benchmark exercise on stratification erosion in containment was conducted using a test in the PANDA facility. • Blind calculations were provided by nineteen participants. • Results were compared with experimental data. • A ranking was made. • A large spread of results was observed, with very few simulations providing accurate results for the most important variables, though not for velocities. - Abstract: The third International Benchmark Exercise (IBE-3) conducted under the auspices of OECD/NEA is based on the comparison of blind CFD simulations with experimental data addressing the erosion of a stratified layer by an off-axis buoyant jet in a large vessel. The numerical benchmark exercise is based on a dedicated experiment in the PANDA facility conducted at the Paul Scherrer Institut (PSI) in Switzerland, using only one vessel. The use of non-prototypical fluids (i.e. helium as simulant for hydrogen, and air as simulant for steam), and the consequent absence of the complex physical effects produced by steam condensation enhanced the suitability of the data for CFD validation purposes. The test started with a helium–air layer at the top of the vessel and air in the lower part. The helium-rich layer was gradually eroded by a low-momentum air/helium jet emerging at a lower elevation. Blind calculation results were submitted by nineteen participants, and the calculation results have been compared with the PANDA data. This report, adopting the format of the reports for the two previous exercises, includes a ranking of the contributions, where the largest weight is given to the time progression of the erosion of the helium-rich layer. In accordance with the limited scope of the benchmark exercise, this report is more a collection of comparisons between calculated results and data than a synthesis. Therefore, the few conclusions are based on the mere observation of the agreement of the various submissions with the test result, and do not

  9. Validation of the ABBN/CONSYST constants system. Part 1: Validation through the critical experiments on compact metallic cores

    International Nuclear Information System (INIS)

    Ivanova, T.T.; Manturov, G.N.; Nikolaev, M.N.; Rozhikhin, E.V.; Semenov, M.Yu.; Tsiboulia, A.M.

    1999-01-01

    The worldwide compilation of criticality safety benchmark experiments evaluated through the activity of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) opens new possibilities for validation of the ABBN-93.1 cross-section library for criticality safety analysis. Results of calculations of small assemblies with metal-fuelled cores are presented in this paper. It is concluded that ABBN-93.1 predicts the criticality of such systems with the required accuracy.

  10. Summary of the OECD/NRC Boiling Water Reactor Turbine Trip Benchmark - Fifth Workshop (BWR-TT5)

    International Nuclear Information System (INIS)

    2003-01-01

    The reference problem chosen for simulation in a BWR is a turbine trip transient, which begins with a sudden turbine stop valve (TSV) closure. The pressure oscillation generated in the main steam piping propagates with relatively little attenuation into the reactor core. The induced core pressure oscillation results in dramatic changes of the core void distribution and fluid flow. The magnitude of the neutron flux transient taking place in the BWR core is strongly affected by the initial rate of pressure rise caused by the pressure oscillation, and has a strong spatial variation. The correct simulation of the power response to the pressure pulse and subsequent void collapse requires 3-D core modelling supplemented by 1-D simulation of the remainder of the reactor coolant system. A BWR TT benchmark exercise, based on a well-defined problem with a complete set of input specifications and reference experimental data, has been proposed for qualification of coupled 3-D neutron kinetics/thermal-hydraulic system transient codes. Since this kind of transient is a dynamically complex event with reactor variables changing very rapidly, it constitutes a good benchmark problem for testing the coupled codes on both levels: neutronics/thermal-hydraulics coupling and core/plant system coupling. Accordingly, the objectives of the proposed benchmark are comprehensive feedback testing and examination of the capability of coupled codes to analyze complex transients with coupled core/plant interactions by comparison with actual experimental data. The benchmark consists of three separate exercises: Exercise 1 - Power vs. Time Plant System Simulation with Fixed Axial Power Profile Table (Obtained from Experimental Data); Exercise 2 - Coupled 3-D Kinetics/Core Thermal-Hydraulic BC Model and/or 1-D Kinetics Plant System Simulation; Exercise 3 - Best-Estimate Coupled 3-D Core/Thermal-Hydraulic System Modeling. The purpose of this fifth workshop was to discuss the results from Phase III (best

  11. LHC benchmark scenarios for the real Higgs singlet extension of the standard model

    International Nuclear Information System (INIS)

    Robens, Tania; Stefaniak, Tim

    2016-01-01

    We present benchmark scenarios for searches for an additional Higgs state in the real Higgs singlet extension of the Standard Model in Run 2 of the LHC. The scenarios are selected such that they fulfill all relevant current theoretical and experimental constraints, but can potentially be discovered at the current LHC run. We take into account the results presented in earlier work and update the experimental constraints from relevant LHC Higgs searches and signal rate measurements. The benchmark scenarios are given separately for the low-mass and high-mass region, i.e. the mass range where the additional Higgs state is lighter or heavier than the discovered Higgs state at around 125 GeV. They have also been presented in the framework of the LHC Higgs Cross Section Working Group. (orig.)

  12. Benchmarking in Czech Higher Education

    Directory of Open Access Journals (Sweden)

    Plaček Michal

    2015-12-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used at this level today, but most actors show some interest in its introduction. The expressed need for it, and the importance of benchmarking as a very suitable performance-management tool in less developed countries, are the impetus for the second part of our article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as on a comparison with international experience, recommendations for public policy are made; these lie in the design of a model of collaborative benchmarking for Czech economics and management higher-education programs. Because the fully complex model cannot be implemented immediately (which is also confirmed by structured interviews with academics who have practical experience with benchmarking), the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.

  13. SU-F-T-152: Experimental Validation and Calculation Benchmark for a Commercial Monte Carlo Pencil BeamScanning Proton Therapy Treatment Planning System in Heterogeneous Media

    Energy Technology Data Exchange (ETDEWEB)

    Lin, L; Huang, S; Kang, M; Ainsley, C; Simone, C; McDonough, J; Solberg, T [University of Pennsylvania, Philadelphia, PA (United States)

    2016-06-15

    Purpose: Eclipse AcurosPT 13.7, the first commercial Monte Carlo pencil beam scanning (PBS) proton therapy treatment planning system (TPS), was experimentally validated for an IBA dedicated PBS nozzle in the CIRS 002LFC thoracic phantom. Methods: A two-stage procedure involving the use of TOPAS 1.3 simulations was performed. First, Geant4-based TOPAS simulations in this phantom were experimentally validated for single- and multi-spot profiles at several depths for 100, 115, 150, 180, 210 and 225 MeV proton beams, using the combination of a Lynx scintillation detector and a MatriXXPT ionization chamber array. Second, benchmark calculations were performed with both AcurosPT and TOPAS in a phantom identical to the CIRS 002LFC, with the exception that the CIRS bone/mediastinum/lung tissues were replaced with similar tissues predefined in AcurosPT (a limitation of this system which necessitates the two-stage procedure). Results: Spot sigmas measured in tissue agreed within 0.2 mm with TOPAS simulations for all six energies, while AcurosPT was consistently found to have a larger spot sigma (by <0.7 mm) than TOPAS. Using absolute dose calibration by MatriXXPT, the agreement between profile measurements, TOPAS simulations and the calculation benchmarks exceeds 97%, except near the end of range, using 2 mm/2% gamma criteria. Overdosing and underdosing were observed on the low- and high-density sides of tissue interfaces, respectively, and these increased with increasing depth and decreasing energy. Near the mediastinum/lung interface, the magnitude can exceed 5 mm/10%. Furthermore, we observed a >5% quenching effect in the conversion of Lynx measurements to dose. Conclusion: We recommend the use of an ionization chamber array in combination with the scintillation detector to measure absolute dose and relative PBS spot characteristics. We also recommend the use of an independent Monte Carlo calculation benchmark for the commissioning of a commercial TPS. Partially
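
    The 2 mm/2% gamma criterion quoted above combines a distance-to-agreement tolerance with a dose-difference tolerance. The following is a minimal 1-D sketch of a global gamma comparison (the toy Gaussian profiles and function names are illustrative assumptions, not the clinical implementation used in the paper):

```python
import numpy as np

def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose, dta_mm=2.0, dd_frac=0.02):
    # Global 1-D gamma: for each reference point, take the minimum over all
    # evaluated points of sqrt((dx/DTA)^2 + (dD/(dd * Dmax))^2).
    norm = ref_dose.max()  # global normalisation to the reference maximum
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        dist2 = ((eval_pos - rp) / dta_mm) ** 2
        dose2 = ((eval_dose - rd) / (dd_frac * norm)) ** 2
        gammas.append(np.sqrt(dist2 + dose2).min())
    return np.array(gammas)

# Toy profiles: the "evaluated" curve is the reference shifted by 0.5 mm
pos = np.linspace(0.0, 100.0, 201)            # mm, 0.5 mm spacing
ref = np.exp(-(((pos - 50.0) / 20.0) ** 2))   # arbitrary smooth profile
ev = np.exp(-(((pos - 50.5) / 20.0) ** 2))
g = gamma_index(pos, ref, pos, ev)
pass_rate = float((g <= 1.0).mean())          # fraction of points with gamma <= 1
```

    A point passes when its gamma value is at most 1; a 0.5 mm shift is well inside a 2 mm DTA, so the pass rate here is essentially 100%.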

  14. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms

    DEFF Research Database (Denmark)

    Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander

    2017-01-01

    This paper describes ANN-Benchmarks, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms. It provides a standard interface for measuring the performance and quality achieved by nearest neighbor algorithms on different standard data sets. It supports several... visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different...
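
    The central quality metric reported by such a benchmark is recall against brute-force ground truth. A small sketch of recall@k (synthetic data; the deliberately crude "approximate" index that only searches part of the data set is a stand-in assumption, not an ANN-Benchmarks algorithm):

```python
import numpy as np

def recall_at_k(true_nn, approx_nn, k):
    # Average fraction of the true k nearest neighbours that the
    # approximate method also returned.
    hits = [len(set(t[:k]) & set(a[:k])) for t, a in zip(true_nn, approx_nn)]
    return sum(hits) / (k * len(true_nn))

rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 16))
queries = rng.standard_normal((20, 16))
k = 10

# Brute-force ground truth by exhaustive squared Euclidean distance
d = ((queries[:, None, :] - data[None, :, :]) ** 2).sum(axis=-1)
truth = np.argsort(d, axis=1)[:, :k]
# Crude stand-in for an ANN index: search only the first 80% of the points
approx = np.argsort(d[:, :800], axis=1)[:, :k]

recall = recall_at_k(truth, approx, k)  # below 1.0 when true neighbours were skipped
```

    Real tools plot this recall against queries per second to expose each algorithm's speed/quality trade-off curve.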

  15. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    Science.gov (United States)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely on the basis of simulated and/or experimentally measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the nonlinear element in the problem with a priori knowledge about its position.

  16. Reevaluation of the Jezebel Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-10

    Every nuclear engineering student is familiar with Jezebel, the homogeneous bare sphere of plutonium first assembled at Los Alamos in 1954-1955. The actual Jezebel assembly was neither homogeneous, nor bare, nor spherical; nor was it singular – there were hundreds of Jezebel configurations assembled. The Jezebel benchmark has been reevaluated for the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Logbooks, original drawings, mass accountability statements, internal reports, and published reports have been used to model four actual three-dimensional Jezebel assemblies with high fidelity. Because the documentation available today is often inconsistent, three major assumptions were made regarding plutonium part masses and dimensions. The first was that the assembly masses given in Los Alamos report LA-4208 (1969) were correct, and the second was that the original drawing dimension for the polar height of a certain major part was correct. The third assumption was that a change notice indicated on the original drawing was not actually implemented. This talk will describe these assumptions, the alternatives, and the implications. Since the publication of the 2013 ICSBEP Handbook, the actual masses of the major components have turned up. Our assumption regarding the assembly masses was proven correct, but we had the mass distribution incorrect. Work to incorporate the new information is ongoing, and this talk will describe the latest assessment.

  17. Pescara benchmark: overview of modelling, testing and identification

    International Nuclear Information System (INIS)

    Bellino, A; Garibaldi, L; Marchesiello, S; Brancaleoni, F; Gabriele, S; Spina, D; Bregant, L; Carminelli, A; Catania, G; Sorrentino, S; Di Evangelista, A; Valente, C; Zuccarino, L

    2011-01-01

    The 'Pescara benchmark' is part of the national research project 'BriViDi' (BRIdge VIbrations and DIagnosis) supported by the Italian Ministero dell'Universita e Ricerca. The project is aimed at developing an integrated methodology for the structural health evaluation of railway r/c and p/c bridges. The methodology should provide for applicability in operating conditions, easy data acquisition through common industrial instrumentation, and robustness and reliability against structural and environmental uncertainties. The Pescara benchmark consisted of lab tests, to build a consistent and large experimental database, and of subsequent data processing. Special tests were devised to simulate the effects of train transit in actual field conditions. Prestressed concrete beams of current industrial production, both sound and damaged at various corrosion severity levels, were tested. The results were collected both in a deterministic setting and in a form suitable for dealing with experimental uncertainties. Damage identification was split into two approaches: with or without a reference model. In the first case, FE models were used in conjunction with non-conventional updating techniques. In the second case, specialized output-only identification techniques capable of dealing with time-variant and possibly nonlinear systems were developed. The lab tests allowed validation of the above approaches and of the performance of classical modal-based damage indicators.

  18. Benchmarking Swiss electricity grids

    International Nuclear Information System (INIS)

    Walti, N.O.; Weber, Ch.

    2001-01-01

    This extensive article describes a pilot benchmarking project initiated by the Swiss Association of Electricity Enterprises that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis included data on technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed, and the results of the pilot study are presented; these are to form the basis of benchmarking procedures for the grid regulation authorities under Switzerland's planned electricity market law. Examples of the practical use of the benchmarking methods are given, and cost-efficiency questions still open in the areas of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article.

  19. Role of experimental resolution in measurements of critical layer thickness for strained-layer epitaxy

    International Nuclear Information System (INIS)

    Fritz, I.J.

    1987-01-01

    Experimental measurements of critical layer thicknesses (CLTs) in strained-layer epitaxy are considered. Finite experimental resolution can have a major effect on measured CLTs and can easily lead to spurious results. The theoretical approach to critical layer thicknesses of J. W. Matthews [J. Vac. Sci. Technol. 12, 126 (1975)] has been modified in a straightforward way to predict the apparent critical thickness for an experiment with finite resolution in lattice parameter. The theory has also been modified to account for the general empirical result that fewer misfit dislocations are generated than predicted by equilibrium calculations. The resulting expression is fit to recent x-ray diffraction data on InGaAs/GaAs and SiGe/Si. The results suggest that CLTs in these systems may not be significantly larger than predicted by equilibrium theory, in agreement with high-resolution measurements.

  20. Space Weather Action Plan Ionizing Radiation Benchmarks: Phase 1 update and plans for Phase 2

    Science.gov (United States)

    Talaat, E. R.; Kozyra, J.; Onsager, T. G.; Posner, A.; Allen, J. E., Jr.; Black, C.; Christian, E. R.; Copeland, K.; Fry, D. J.; Johnston, W. R.; Kanekal, S. G.; Mertens, C. J.; Minow, J. I.; Pierson, J.; Rutledge, R.; Semones, E.; Sibeck, D. G.; St Cyr, O. C.; Xapsos, M.

    2017-12-01

    Changes in the near-Earth radiation environment can affect satellite operations, astronauts in space, commercial space activities, and the radiation environment on aircraft at relevant latitudes and altitudes. Understanding the diverse effects of increased radiation is challenging, but producing ionizing radiation benchmarks will help address these effects. The following areas have been considered in addressing the near-Earth radiation environment: the Earth's trapped radiation belts, the galactic cosmic ray background, and solar energetic-particle events. The radiation benchmarks attempt to account for any change in the near-Earth radiation environment which, under extreme cases, could present a significant risk to critical infrastructure operations or human health. These ionizing radiation benchmarks, with associated confidence levels, will define at least the radiation intensity as a function of time, particle type, and energy, both for an occurrence frequency of 1 in 100 years and for an intensity level at the theoretical maximum for the event. In this paper, we present the benchmarks that address radiation levels at all applicable altitudes and latitudes in the near-Earth environment, the assumptions made and the associated uncertainties, and the next steps planned for updating the benchmarks.

  1. Dry critical experiments and analyses performed in support of the Topaz-2 Safety Program

    International Nuclear Information System (INIS)

    Pelowitz, D.B.; Sapir, J.; Glushkov, E.S.; Ponomarev-Stepnoi, N.N.; Bubelev, V.G.; Kompanietz, G.B.; Krutov, A.M.; Polyakov, D.N.; Loynstev, V.A.

    1994-01-01

    In December 1991, the Strategic Defense Initiative Organization decided to investigate the possibility of launching a Russian Topaz-2 space nuclear power system. Functional safety requirements developed for the Topaz mission mandated that the reactor remain subcritical when flooded and immersed in water. Initial experiments and analyses performed in Russia and the United States indicated that the reactor could potentially become supercritical in several water- or sand-immersion scenarios. Consequently, a series of critical experiments was performed on the Narciss M-II facility at the Kurchatov Institute to measure the reactivity effects of water and sand immersion, to quantify the effectiveness of reactor modifications proposed to preclude criticality, and to benchmark the calculational methods and nuclear data used in the Topaz-2 safety analyses. In this paper we describe the Narciss M-II experimental configurations along with the associated calculational models and methods. We also present and compare the measured and calculated results for the dry experimental configurations

  2. Dry critical experiments and analyses performed in support of the TOPAZ-2 safety program

    International Nuclear Information System (INIS)

    Pelowitz, D.B.; Sapir, J.; Glushkov, E.S.; Ponomarev-Stepnoi, N.N.; Bubelev, V.G.; Kompanietz, G.B.; Krutov, A.M.; Polyakov, D.N.; Lobynstev, V.A.

    1995-01-01

    In December 1991, the Strategic Defense Initiative Organization decided to investigate the possibility of launching a Russian Topaz-2 space nuclear power system. Functional safety requirements developed for the Topaz mission mandated that the reactor remain subcritical when flooded and immersed in water. Initial experiments and analyses performed in Russia and the United States indicated that the reactor could potentially become supercritical in several water- or sand-immersion scenarios. Consequently, a series of critical experiments was performed on the Narciss M-II facility at the Kurchatov Institute to measure the reactivity effects of water and sand immersion, to quantify the effectiveness of reactor modifications proposed to preclude criticality, and to benchmark the calculational methods and nuclear data used in the Topaz-2 safety analyses. In this paper we describe the Narciss M-II experimental configurations along with the associated calculational models and methods. We also present and compare the measured and calculated results for the dry experimental configurations. copyright 1995 American Institute of Physics

  3. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    International Nuclear Information System (INIS)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-01-01

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptops to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction, such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  4. Monte Carlo burnup simulation of the TAKAHAMA-3 benchmark experiment

    International Nuclear Information System (INIS)

    Dalle, Hugo M.

    2009-01-01

    High-burnup PWR fuel is currently being studied at CDTN/CNEN-MG. The Monte Carlo burnup code system MONTEBURNS is used to characterize the neutronic behavior of the fuel. In order to validate the code system and calculation methodology to be used in this study, the Japanese Takahama-3 benchmark was chosen, as it is the only freely available burnup benchmark experimental data set that partially reproduces the conditions of the fuel under evaluation. The burnup of the three PWR fuel rods of the Takahama-3 burnup benchmark was calculated by MONTEBURNS using both the simplest infinite fuel pin cell model and a more complex representation of an infinite lattice of heterogeneous fuel pin cells. Calculated masses of most isotopes of uranium, neptunium, plutonium, americium and curium, and of some fission products commonly used as burnup monitors, were compared with the Post Irradiation Examination (PIE) values for all three fuel rods. The results showed some sensitivity to the MCNP neutron cross-section data libraries, and were particularly affected by the temperature at which the evaluated nuclear data files were processed. (author)
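
    Comparisons of this kind are conventionally reported as calculated-to-experimental (C/E) ratios per nuclide. A trivial sketch of that bookkeeping (the masses below are made-up placeholder values, not the Takahama-3 data):

```python
# Hypothetical calculated and measured (PIE) nuclide masses, in mg per g of
# initial uranium -- placeholder values for illustration only.
measured = {"U-235": 8.47, "Pu-239": 5.11, "Cs-137": 1.52}
calculated = {"U-235": 8.31, "Pu-239": 5.26, "Cs-137": 1.49}

# C/E ratio per nuclide; C/E = 1 means perfect agreement with experiment
ce = {nuc: calculated[nuc] / measured[nuc] for nuc in measured}
for nuc in sorted(ce):
    print(f"{nuc}: C/E = {ce[nuc]:.3f} ({(ce[nuc] - 1.0) * 100.0:+.1f}%)")
```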

  5. Benchmark Analysis of Subcritical Noise Measurements on a Nickel-Reflected Plutonium Metal Sphere

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Jesson Hutchinson

    2009-09-01

    Subcritical experiments using californium source-driven noise analysis (CSDNA) and Feynman variance-to-mean methods were performed with an alpha-phase plutonium sphere reflected by nickel shells, up to a maximum thickness of 7.62 cm. Both methods provide a means of determining the subcritical multiplication of a system containing nuclear material. A benchmark analysis of the experiments was performed for inclusion in the 2010 edition of the International Handbook of Evaluated Criticality Safety Benchmark Experiments, and benchmark models have been developed that represent these subcritical experiments. An analysis of the computed eigenvalues and of the uncertainty in the experiment and methods was performed. The eigenvalues computed using the CSDNA method were very close to those calculated using MCNP5; however, computed eigenvalues are used in the analysis of the CSDNA method. Independent calculations using KENO-VI provided eigenvalues similar to those determined using the CSDNA method and MCNP5. A slight trend with increasing nickel-reflector thickness was seen when comparing MCNP5 and KENO-VI results: for the 1.27-cm-thick configuration the MCNP eigenvalue was approximately 300 pcm greater, for the 7.62-cm-thick configuration the KENO eigenvalue was about 300 pcm greater, and for a 5-cm-thick shell the calculated results were approximately the same. The eigenvalues determined using the Feynman method are up to approximately 2.5% lower than those determined using either the CSDNA method or the Monte Carlo codes. The uncertainty in the results from either method is not large enough to account for the bias between the two experimental methods. An ongoing investigation is being performed to assess what potential uncertainties and/or biases exist that have yet to be properly accounted for. The dominant uncertainty in the CSDNA analysis was the uncertainty in selecting a neutron cross-section library for performing the analysis of the data. The uncertainty in the
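
    In its simplest form, the Feynman variance-to-mean method reduces to the statistic Y = variance/mean - 1, computed over a sequence of equal counting gates. A minimal sketch (synthetic Poisson counts stand in for detector data, so Y should come out near zero; a multiplying system would give Y > 0):

```python
import numpy as np

def feynman_y(counts):
    # Feynman-Y = variance/mean - 1. Zero for an uncorrelated (Poisson)
    # source; positive when fission chains correlate the counts.
    c = np.asarray(counts, dtype=float)
    return c.var(ddof=1) / c.mean() - 1.0

# Synthetic stand-in for gated detector counts from a non-multiplying source
rng = np.random.default_rng(42)
gates = rng.poisson(lam=20.0, size=100_000)  # counts per gate width
y = feynman_y(gates)
```

    In practice Y is evaluated as a function of gate width and fitted to extract the prompt-neutron decay constant, from which the subcritical multiplication is inferred.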

  6. One dimensional benchmark calculations using diffusion theory

    International Nuclear Information System (INIS)

    Ustun, G.; Turgut, M.H.

    1986-01-01

    This is a comparative study using different one-dimensional diffusion codes which are available at our Nuclear Engineering Department. Some modifications have been made to the codes to fit the problems. One of the codes, DIFFUSE, solves the neutron diffusion equation in slab, cylindrical and spherical geometries using the forward-elimination/backward-substitution technique. The DIFFUSE code calculates criticality, critical dimensions, critical material concentrations and adjoint fluxes as well. It is used for the space- and energy-dependent neutron flux distribution. The whole scattering matrix can be used if desired. Normalisation of the relative flux distributions to the reactor power, plotting of the flux distributions, and leakage terms for the other two dimensions have been added. Some modifications have also been made to the code output. Two benchmark problems have been calculated with the modified version, and the results are compared with those of the BBD code, which is available at our department and uses the same calculation techniques. Agreement is quite good in results such as k-eff and the flux distributions for the two case studies. (author)
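
    The kind of calculation such a code performs can be sketched as a one-group finite-difference eigenvalue problem: build the tridiagonal diffusion operator, then power-iterate on the fission source until k-eff converges. The cross sections below are illustrative assumptions, not from any evaluated library:

```python
import numpy as np

# One-group bare slab, zero-flux boundaries; illustrative constants (cm, 1/cm)
D, siga, nusigf = 1.0, 0.07, 0.08
a, n = 60.0, 200                 # slab width and number of interior mesh points
h = a / (n + 1)

# Tridiagonal operator for -D*phi'' + siga*phi on the interior mesh
A = np.diag(np.full(n, 2.0 * D / h**2 + siga))
A += np.diag(np.full(n - 1, -D / h**2), 1) + np.diag(np.full(n - 1, -D / h**2), -1)

# Power iteration on the fission source: A*phi = (1/k) * nusigf * phi
phi, k = np.ones(n), 1.0
for _ in range(200):
    psi = np.linalg.solve(A, nusigf * phi / k)       # flux from previous source
    k *= (nusigf * psi).sum() / (nusigf * phi).sum() # update the eigenvalue
    phi = psi / np.abs(psi).max()                    # renormalise the flux

# Fundamental-mode estimate k = nu*Sigma_f / (Sigma_a + D*B^2), B = pi/a
k_analytic = nusigf / (siga + D * (np.pi / a) ** 2)
```

    The converged k agrees with the analytic buckling formula to well under 0.1% at this mesh resolution; a production code would use a dedicated tridiagonal (forward-elimination/backward-substitution) solve instead of the dense `np.linalg.solve`.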

  7. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, Ankestyrelsen (the Danish National Social Appeals Board) is to carry out benchmarking of the quality of municipal casework. The purpose of the benchmarking is to develop the design of the practice reviews with a view to better follow-up, and to improve the municipalities' casework. This working paper discusses methods for benchmarking...

  8. Preparation of data for criticality safety evaluation of nuclear fuel cycle facilities

    International Nuclear Information System (INIS)

    Okuno, Hiroshi; Suyama, Kenya; Yoshiyama, Hiroshi; Tonoike, Kotaro; Miyoshi, Yoshinori

    2005-01-01

    Nuclear Criticality Safety Handbook/Data Collection, Version 2 was submitted to the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan as a contract report. This paper presents its outline and related recent work. After an introduction in Chapter 1, useful information for obtaining the atomic number densities was collected in Chapter 2. The nuclear characteristic parameters for 11 nuclear fuels were provided in Chapter 3, and subcriticality judgment graphs were given in Chapter 4. The estimated critical and estimated lower-limit critical values were supplied for the 11 nuclear fuels as results of calculations using the Japanese Evaluated Nuclear Data Library, JENDL-3.2, and the continuous-energy Monte Carlo neutron transport code MVP in Chapter 5. The results of benchmark calculations based on the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook were summarized into six fuel categories in Chapter 6. As for recent work, subcriticality judgment graphs for U-SiO2 and Pu-SiO2 were obtained. Benchmark calculations were made with the combination of the latest version of the library, JENDL-3.3, and the MVP code for a series of STACY experiments, and the estimated critical and estimated lower-limit critical values of 10 wt%-enriched uranium nitrate solutions were calculated. (author)

  9. Derivation of the critical effect size/benchmark response for the dose-response analysis of the uptake of radioactive iodine in the human thyroid.

    Science.gov (United States)

    Weterings, Peter J J M; Loftus, Christine; Lewandowski, Thomas A

    2016-08-22

    Potential adverse effects of chemical substances on thyroid function are usually examined by measuring serum levels of thyroid-related hormones. Instead, recent risk assessments for thyroid-active chemicals have focussed on iodine uptake inhibition, an upstream event that by itself is not necessarily adverse. Establishing the extent of uptake inhibition that can be considered de minimis, the chosen benchmark response (BMR), is therefore critical. The BMR values selected by two international advisory bodies were 5% and 50%, a difference that had correspondingly large impacts on the estimated risks and health-based guidance values that were established. Potential treatment-related inhibition of thyroidal iodine uptake is usually determined by comparing thyroidal uptake of radioactive iodine (RAIU) during treatment with a single pre-treatment RAIU value. In the present study it is demonstrated that the physiological intra-individual variation in iodine uptake is much larger than 5%. Consequently, in-treatment RAIU values, expressed as a percentage of the pre-treatment value, have an inherent variation that needs to be considered when conducting dose-response analyses. Based on statistical and biological considerations, a BMR of 20% is proposed for benchmark dose analysis of human thyroidal iodine uptake data, to take the inherent variation in relative RAIU data into account. Implications for the tolerated daily intakes for perchlorate and chlorate, recently established by the European Food Safety Authority (EFSA), are discussed. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.

  10. Parallel Ada benchmarks for the SVMS

    Science.gov (United States)

    Collard, Philippe E.

    1990-01-01

    The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through its tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed to measure Ada tasking efficiency on parallel architectures and to determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools for the development of the SVMS architecture.

  11. Results of LWR core transient benchmarks

    International Nuclear Information System (INIS)

    Finnemann, H.; Bauer, H.; Galati, A.; Martinelli, R.

    1993-10-01

    LWR core transient (LWRCT) benchmarks, based on well defined problems with a complete set of input data, are used to assess the discrepancies between three-dimensional space-time kinetics codes in transient calculations. The PWR problem chosen is the ejection of a control assembly from an initially critical core at hot zero power or at full power, each for three different geometrical configurations. The set of problems offers a variety of reactivity excursions which efficiently test the coupled neutronic/thermal-hydraulic models of the codes. The 63 sets of submitted solutions are analyzed by comparison with a nodal reference solution defined by using a finer spatial and temporal resolution than in standard calculations. The BWR problems considered are reactivity excursions caused by cold water injection and pressurization events. In the present paper, only the cold water injection event is discussed and evaluated in some detail. Lacking a reference solution the evaluation of the 8 sets of BWR contributions relies on a synthetic comparative discussion. The results of this first phase of LWRCT benchmark calculations are quite satisfactory, though there remain some unresolved issues. It is therefore concluded that even more challenging problems can be successfully tackled in a suggested second test phase. (authors). 46 figs., 21 tabs., 3 refs

  12. Disaster metrics: quantitative benchmarking of hospital surge capacity in trauma-related multiple casualty events.

    Science.gov (United States)

    Bayram, Jamil D; Zuabi, Shawki; Subbarao, Italo

    2011-06-01

    Hospital surge capacity in multiple casualty events (MCE) is the core of hospital medical response, and an integral part of the total medical capacity of the community affected. To date, however, there has been no consensus regarding the definition or quantification of hospital surge capacity. The first objective of this study was to quantitatively benchmark the various components of hospital surge capacity pertaining to the care of critically and moderately injured patients in trauma-related MCE. The second objective was to illustrate the applications of those quantitative parameters in local, regional, national, and international disaster planning; in the distribution of patients to various hospitals by prehospital medical services; and in the decision-making process for ambulance diversion. A 2-step approach was adopted in the methodology of this study. First, an extensive literature search was performed, followed by mathematical modeling. Quantitative studies on hospital surge capacity for trauma injuries were used as the framework for our model. The North Atlantic Treaty Organization triage categories (T1-T4) were used in the modeling process for simplicity purposes. Hospital Acute Care Surge Capacity (HACSC) was defined as the maximum number of critical (T1) and moderate (T2) casualties a hospital can adequately care for per hour, after recruiting all possible additional medical assets. HACSC was modeled to be equal to the number of emergency department beds (#EDB), divided by the emergency department time (EDT); HACSC = #EDB/EDT. In trauma-related MCE, the EDT was quantitatively benchmarked to be 2.5 (hours). Because most of the critical and moderate casualties arrive at hospitals within a 6-hour period requiring admission (by definition), the hospital bed surge capacity must match the HACSC at 6 hours to ensure coordinated care, and it was mathematically benchmarked to be 18% of the staffed hospital bed capacity. 
Defining and quantitatively benchmarking the
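
    The relations quoted in the abstract (HACSC = #EDB/EDT with EDT benchmarked at 2.5 hours, and a hospital bed surge capacity of 18% of staffed beds over the 6-hour arrival window) can be checked with a worked example; the hospital below (30 ED beds, 400 staffed inpatient beds) is hypothetical, invented purely for illustration.

```python
def hacsc(ed_beds, ed_time_hours=2.5):
    """Hospital Acute Care Surge Capacity (T1 + T2 casualties per hour),
    using the abstract's relation HACSC = #EDB / EDT with EDT = 2.5 h."""
    return ed_beds / ed_time_hours

# Hypothetical hospital: 30 ED beds and 400 staffed inpatient beds.
rate = hacsc(30)                    # casualties the ED can absorb per hour
admissions_6h = 6 * rate            # casualties arriving over the 6-hour window
bed_fraction = admissions_6h / 400  # fraction of staffed beds they occupy
```

    For these numbers the rate is 12 casualties per hour, 72 over six hours, and 72/400 = 0.18, reproducing the 18% bed surge benchmark stated in the abstract.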

  13. Validating analysis methodologies used in burnup credit criticality calculations

    International Nuclear Information System (INIS)

    Brady, M.C.; Napolitano, D.G.

    1992-01-01

    The concept of allowing reactivity credit for the depleted (or burned) state of pressurized water reactor fuel in the licensing of spent fuel facilities introduces a new challenge to members of the nuclear criticality community. The primary difference in this analysis approach is the technical ability to calculate spent fuel compositions (or inventories) and to predict their effect on the system multiplication factor. Isotopic prediction codes are used routinely for in-core physics calculations and the prediction of radiation source terms for both thermal and shielding analyses, but represent an innovation for criticality specialists. This paper discusses two methodologies currently being developed to specifically evaluate isotopic composition and reactivity for the burnup credit concept. A comprehensive approach to benchmarking and validating the methods is also presented. This approach involves the analysis of commercial reactor critical data, fuel storage critical experiments, chemical assay isotopic data, and numerical benchmark calculations

  14. Achieving palliative care research efficiency through defining and benchmarking performance metrics.

    Science.gov (United States)

    Lodato, Jordan E; Aziz, Noreen; Bennett, Rachael E; Abernethy, Amy P; Kutner, Jean S

    2012-12-01

    Research efficiency is gaining increasing attention in the research enterprise, including palliative care research. The importance of generating meaningful findings and translating these scientific advances to improved patient care creates urgency in the field to address well documented system inefficiencies. The Palliative Care Research Cooperative Group (PCRC) provides useful examples for ensuring research efficiency in palliative care. Literature on maximizing research efficiency focuses on the importance of clearly delineated process maps, working instructions, and standard operating procedures in creating synchronicity in expectations across research sites. Examples from the PCRC support these objectives and suggest that early creation and employment of performance metrics aligned with these processes are essential to generate clear expectations and identify benchmarks. These benchmarks are critical in effective monitoring and ultimately the generation of high-quality findings that are translatable to clinical populations. Prioritization of measurable goals and tasks to ensure that activities align with programmatic aims is critical. Examples from the PCRC affirm and expand the existing literature on research efficiency, providing a palliative care focus. Operating procedures, performance metrics, prioritization, and monitoring for success should all be informed by and inform the process map to achieve maximum research efficiency.

  15. Automated benchmarking of peptide-MHC class I binding predictions

    Science.gov (United States)

    Trolle, Thomas; Metushi, Imir G.; Greenbaum, Jason A.; Kim, Yohan; Sidney, John; Lund, Ole; Sette, Alessandro; Peters, Bjoern; Nielsen, Morten

    2015-01-01

    Motivation: Numerous in silico methods predicting peptide binding to major histocompatibility complex (MHC) class I molecules have been developed over the last decades. However, the multitude of available prediction tools makes it non-trivial for the end-user to select which tool to use for a given task. To provide a solid basis on which to compare different prediction tools, we here describe a framework for the automated benchmarking of peptide-MHC class I binding prediction tools. The framework runs weekly benchmarks on data that are newly entered into the Immune Epitope Database (IEDB), giving the public access to frequent, up-to-date performance evaluations of all participating tools. To overcome potential selection bias in the data included in the IEDB, a strategy was implemented that suggests a set of peptides for which different prediction methods give divergent predictions as to their binding capability. Upon experimental binding validation, these peptides entered the benchmark study. Results: The benchmark has run for 15 weeks and includes evaluation of 44 datasets covering 17 MHC alleles and more than 4000 peptide-MHC binding measurements. Inspection of the results allows the end-user to make educated selections between participating tools. Of the four participating servers, NetMHCpan performed the best, followed by ANN, SMM and finally ARB. Availability and implementation: Up-to-date performance evaluations of each server can be found online at http://tools.iedb.org/auto_bench/mhci/weekly. All prediction tool developers are invited to participate in the benchmark. Sign-up instructions are available at http://tools.iedb.org/auto_bench/mhci/join. Contact: mniel@cbs.dtu.dk or bpeters@liai.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25717196
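
    The selection strategy described above, suggesting for experimental validation those peptides on which the tools disagree, can be sketched as follows. The scoring rule (spread = max - min across tools) and the peptide names are illustrative assumptions, not the IEDB implementation.

```python
def most_divergent(predictions, k):
    """Rank peptides by the spread (max - min) of binding scores assigned
    by different prediction tools and return the k most divergent, i.e.
    the candidates whose experimental validation is most informative."""
    spread = {p: max(scores) - min(scores) for p, scores in predictions.items()}
    return sorted(spread, key=spread.get, reverse=True)[:k]

# Synthetic scores from three hypothetical tools (higher = stronger binder).
preds = {
    "PEPTIDE-A": [0.90, 0.85, 0.88],  # tools agree: confident binder
    "PEPTIDE-B": [0.10, 0.90, 0.50],  # tools disagree: worth validating
    "PEPTIDE-C": [0.20, 0.25, 0.30],  # tools agree: confident non-binder
}
```

    Validating high-spread peptides counters selection bias because easy consensus cases add little information about which tool is better.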

  16. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) develop and implement programs to simulate MFTF usage of the data base

  17. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    textabstractData Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  18. Comparison of the results of the fifth dynamic AER benchmark-a benchmark for coupled thermohydraulic system/three-dimensional hexagonal kinetic core models

    International Nuclear Information System (INIS)

    Kliem, S.

    1998-01-01

    The fifth dynamic benchmark was defined at the seventh AER Symposium, held in Hoernitz, Germany, in 1997. It is the first benchmark for coupled thermohydraulic system/three-dimensional hexagonal neutron kinetic core models. In this benchmark the interaction between the components of a WWER-440 NPP and the reactor core has been investigated. The initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle under hot shutdown conditions with one control rod group stuck. This break causes an overcooling of the primary circuit. During this overcooling the scram reactivity is compensated and the scrammed reactor becomes recritical. The calculation was continued until the highly borated water from the high-pressure injection system terminated the power excursion. Each participant used their own best-estimate nuclear cross-section data; only the initial subcriticality at the beginning of the transient was given. Solutions were received from the Kurchatov Institute, Russia, with the code BIPR8/ATHLET; VTT Energy, Finland, with HEXTRAN/SMABRE; NRI Rez, Czech Republic, with DYN3D/ATHLET; KFKI Budapest, Hungary, with KIKO3D/ATHLET; and FZR, Germany, with the code DYN3D/ATHLET. In this paper the results are compared. Besides the comparison of global results, the behaviour of several thermohydraulic and neutron kinetic parameters is presented to discuss the differences revealed between the solutions. (Authors)

  19. Experiments for IFR fuel criticality in ZPPR-21

    International Nuclear Information System (INIS)

    Olsen, D.N.; Collins, P.J.; Carpenter, S.G.

    1991-01-01

    A series of benchmark measurements was made in ZPPR-21 to validate criticality calculations for fuel processing operations for Argonne's Integral Fast Reactor program. Six different mixtures of Pu/U/Zr fuel with a graphite reflector were built and criticality was determined by period measurements. The assemblies were isolated from room return neutrons by a lithium hydride shield. Analysis was done using a fully-detailed model with the VIM Monte Carlo code and ENDF/B-V.2 data. Sensitivity analysis was used to validate the measurements against other benchmark data. A simple RZ model was defined and used with the KENO code. Corrections to the RZ model were provided by the VIM calculations with low statistical uncertainty. (Author)

  20. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...

  1. Theoretical and experimental studies on critical heat flux in subcooled boiling and vertical flow geometry

    International Nuclear Information System (INIS)

    Staron, E.

    1996-01-01

    Critical Heat Flux is a very important subject of interest due to design, operation and safety analysis of nuclear power plants. Every new design of the core must be thoroughly checked. Experimental studies have been performed using freon as a working fluid. The possibility of transferring of results into water equivalents has been proved. The experimental study covers vertical flow, annular geometry over a wide range of pressure, mass flow and temperature at inlet of test section. Theoretical models of Critical Heat Flux have been presented but only those which cover DNB. Computer programs allowing for numerical calculations using theoretical models have been developed. A validation of the theoretical models has been performed in accordance with experimental results. (author). 83 refs, 32 figs, 4 tabs

  2. The VENUS-7 benchmarks. Results from state-of-the-art transport codes and nuclear data

    International Nuclear Information System (INIS)

    Zwermann, Winfried; Pautz, Andreas; Timm, Wolf

    2010-01-01

    For the validation of both nuclear data and computational methods, comparisons with experimental data are necessary. Most advantageous are assemblies where not only the multiplication factors or critical parameters were measured, but also additional quantities like reactivity differences or pin-wise fission rate distributions have been assessed. Currently there is a comprehensive activity to evaluate such measurements and incorporate them in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. A large number of such experiments was performed at the VENUS zero power reactor at SCK/CEN in Belgium in the sixties and seventies. The VENUS-7 series was specified as an international benchmark within the OECD/NEA Working Party on Scientific Issues of Reactor Systems (WPRS), and results obtained with various codes and nuclear data evaluations were summarized. In the present paper, results of high-accuracy transport codes with full spatial resolution and up-to-date nuclear data libraries from the JEFF and ENDF/B evaluations are presented. The comparisons of the results, both code-to-code and with the measured data, are augmented by uncertainty and sensitivity analyses with respect to nuclear data uncertainties. For the multiplication factors, these are performed with the TSUNAMI-3D code from the SCALE system. In addition, uncertainties in the reactivity differences are analyzed with the TSAR code which is available from the current SCALE-6 version. (orig.)

  3. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in future efforts to... It is a contribution to the discussions within the EU-sponsored BEST Thematic Network (Benchmarking European Sustainable Transport), which ran from 2000 to 2003...

  4. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to a governmental initiative to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking as it is presently carried out in the Danish construction sector. Many different perceptions of benchmarking, and of the nature of the construction sector, lead to uncertainty in how to perceive and use benchmarking, hence generating uncertainty in understanding the effects of benchmarking. Two perceptions of benchmarking are presented: public benchmarking and best-practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to examine which effects, possibilities and challenges follow in the wake of using this kind... This paper addresses...

  5. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  6. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  7. Benchmarking the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William; Hui, Y.V.; Lam, Y. Miu

    2006-01-01

    Benchmarking energy efficiency is an important tool to promote the efficient use of energy in commercial buildings. Benchmarking models are mostly constructed as a simple benchmark table (percentile table) of energy use, which is normalized with floor area and temperature. This paper describes a benchmarking process for energy efficiency by means of multiple regression analysis, where the relationship between energy-use intensities (EUIs) and the explanatory factors (e.g., operating hours) is developed. Using the resulting regression model, these EUIs are then normalized by removing the effect of deviations in the significant explanatory factors. The empirical cumulative distribution of the normalized EUI gives a benchmark table (or percentile table of EUI) for benchmarking an observed EUI. The advantage of this approach is that the benchmark table represents a normalized distribution of EUI, taking into account all the significant explanatory factors that affect energy consumption. An application to supermarkets is presented to illustrate the development and the use of the benchmarking method
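
    The normalization step described in the abstract can be sketched as follows: fit EUI against the explanatory factors by least squares, subtract each building's predicted deviation from the factor means, and read the observed building's percentile off the empirical distribution of normalized EUIs. The function name and the synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def benchmark_percentile(eui, factors, observed_eui, observed_factors):
    """Regression-based EUI benchmarking: fit EUI on explanatory factors,
    remove each building's predicted deviation from the factor means, and
    return the observed building's percentile rank in the empirical
    distribution of normalized EUIs."""
    eui = np.asarray(eui, dtype=float)
    X = np.column_stack([np.ones(len(eui)), np.asarray(factors, dtype=float)])
    beta, *_ = np.linalg.lstsq(X, eui, rcond=None)
    mean_row = X.mean(axis=0)
    normalized = eui - (X - mean_row) @ beta        # factor-adjusted EUIs
    x_obs = np.concatenate([[1.0], np.asarray(observed_factors, dtype=float)])
    norm_obs = observed_eui - (x_obs - mean_row) @ beta
    return float((normalized <= norm_obs).mean() * 100.0)

# Synthetic sample: EUI rises with weekly operating hours.
sample_eui = [104.0, 108.0, 112.0, 116.0]       # kWh/m2, invented
sample_hours = [[2.0], [4.0], [6.0], [8.0]]     # operating hours, invented
```

    Normalizing first means a building is judged against peers with comparable operating conditions rather than against the raw distribution.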

  8. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    Full Text Available The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  9. Solution of the fifth dynamic Atomic Energy Research benchmark problem using the coupled code DYN3D/ATHLET

    International Nuclear Information System (INIS)

    Kliem, S.

    1998-01-01

    The fifth dynamic benchmark is the first benchmark for coupled thermohydraulic system/three-dimensional hexagonal neutron kinetic core models. In this benchmark the interaction between the components of a WWER-440 NPP and the reactor core has been investigated. The initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle under hot shutdown conditions with one control rod group stuck. This break causes an overcooling of the primary circuit. During this overcooling the scram reactivity is compensated and the scrammed reactor becomes recritical. The calculation was continued until the highly borated water from the high-pressure injection system terminated the power excursion. Several aspects of this very complex and complicated benchmark problem are analyzed in detail. Sensitivity studies with different hydraulic parameters are made. The influence on the course of the transient and on the solution is discussed. (Author)

  10. SKaMPI: A Comprehensive Benchmark for Public Benchmarking of MPI

    Directory of Open Access Journals (Sweden)

    Ralf Reussner

    2002-01-01

    Full Text Available The main objective of the MPI communication library is to enable portable parallel programming with high performance within the message-passing paradigm. Since the MPI standard has no associated performance model, and makes no performance guarantees, comprehensive, detailed and accurate performance figures for different hardware platforms and MPI implementations are important for the application programmer, both for understanding and possibly improving the behavior of a given program on a given platform, as well as for assuring a degree of predictable behavior when switching to another hardware platform and/or MPI implementation. We term this latter goal performance portability, and address the problem of attaining performance portability by benchmarking. We describe the SKaMPI benchmark which covers a large fraction of MPI, and incorporates well-accepted mechanisms for ensuring accuracy and reliability. SKaMPI is distinguished among other MPI benchmarks by an effort to maintain a public performance database with performance data from different hardware platforms and MPI implementations.

  11. Uncertainties in criticality analysis which affect the storage and transportation of LWR fuel

    International Nuclear Information System (INIS)

    Napolitani, D.G.

    1989-01-01

    Satisfying the design criteria for subcriticality with uncertainties affects: the capacity of LWR storage arrays, maximum allowable enrichment, minimum allowable burnup and economics of various storage options. There are uncertainties due to: calculational method, data libraries, geometric limitations, modelling bias, the number and quality of benchmarks performed and mechanical uncertainties in the array. Yankee Atomic Electric Co. (YAEC) has developed and benchmarked methods to handle: high density storage rack designs, pin consolidation, low density moderation and burnup credit. The uncertainties associated with such criticality analysis are quantified on the basis of clean criticals, power reactor criticals and intercomparison of independent analysis methods

  12. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of Denton (1971) method and the growth

  13. Evaluation and validation of criticality codes for fuel dissolver calculations

    International Nuclear Information System (INIS)

    Santamarina, A.; Smith, H.J.; Whitesides, G.E.

    1991-01-01

    During the past ten years an OECD/NEA Criticality Working Group has examined the validity of criticality safety computational methods. International calculation tools which were shown to be valid in systems for which experimental data existed were demonstrated to be inadequate when extrapolated to fuel dissolver media. A theoretical study of the main physical parameters involved in fuel dissolution calculations was performed, i.e. range of moderation, variation of pellet size and the fuel double heterogeneity effect. The APOLLO/PIC method developed to treat this latter effect permits us to supply the actual reactivity variation with pellet dissolution and to propose international reference values. The disagreement among contributors' calculations was analyzed through a neutron balance breakdown, based on three-group microscopic reaction rates. The results pointed out that fast and resonance nuclear data in criticality codes are not sufficiently reliable. Moreover the neutron balance analysis emphasized the inadequacy of the standard self-shielding formalism to account for 238U resonance mutual self-shielding in the pellet-fissile liquor interaction. The benchmark exercise has resolved a potentially dangerous inadequacy in dissolver calculations. (author)

  14. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by the Working Group on Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan were compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Fourteen new shielding benchmark problems are presented in addition to the twenty-one problems proposed previously, for evaluating the calculational algorithms and accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. The present benchmark problems principally address the backscattering and streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  15. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and to evaluate energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after school activities into account. Benchmarks can be used to make green accounts and work as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)

  16. Benchmark Modeling of the Near-Field and Far-Field Wave Effects of Wave Energy Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Rhinefrank, Kenneth E; Haller, Merrick C; Ozkan-Haller, H Tuba

    2013-01-26

    This project is an industry-led partnership between Columbia Power Technologies and Oregon State University that will perform benchmark laboratory experiments and numerical modeling of the near-field and far-field impacts of wave scattering from an array of wave energy devices. These benchmark experimental observations will help to fill a gaping hole in our present knowledge of the near-field effects of multiple, floating wave energy converters and are a critical requirement for estimating the potential far-field environmental effects of wave energy arrays. The experiments will be performed at the Hinsdale Wave Research Laboratory (Oregon State University) and will utilize an array of newly developed buoys that are realistic, lab-scale floating power converters. The array of buoys will be subjected to realistic, directional wave forcing (1:33 scale) that will approximate the expected conditions (waves and water depths) to be found off the Central Oregon Coast. Experimental observations will include comprehensive in-situ wave and current measurements as well as a suite of novel optical measurements. These new optical capabilities will include imaging of the 3D wave scattering using a binocular stereo camera system, as well as 3D device motion tracking using a newly acquired LED system. These observing systems will capture the 3D motion history of individual buoys as well as resolve the 3D scattered wave field, thus resolving the constructive and destructive wave interference patterns produced by the array at high resolution. These data combined with the device motion tracking will provide necessary information for array design in order to balance array performance with the mitigation of far-field impacts. As a benchmark data set, these data will be an important resource for testing of models for wave/buoy interactions, buoy performance, and far-field effects on wave and current patterns due to the presence of arrays. Under the proposed project we will initiate

  17. Experimental and Numerical Analysis of S-CO{sub 2} Critical Flow for SFR Recovery System Design

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Seok; Jung, Hwa-Young; Ahn, Yoonhan; Lee, Jekyoung; Lee, Jeong Ik [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2016-05-15

    This paper presents both numerical and experimental studies of the critical flow of S-CO{sub 2}, with special attention given to the turbo-machinery seal design. A computational critical flow model is described first. The experiments were conducted to validate the critical flow model. Various conditions have been tested to study the flow characteristics and provide validation data for the model. The comparison of numerical and experimental results of S-CO{sub 2} critical flow will be presented. In order to eliminate the sodium-water reaction (SWR), a concept of coupling the supercritical CO{sub 2} (S-CO{sub 2}) cycle with the SFR has been proposed. It is known that, for a closed system, controlling the inventory is important for stable operation and achieving high efficiency. Since the S-CO{sub 2} power cycle is a highly pressurized system, a certain amount of leakage flow through the seals of the rotating turbo-machinery is inevitable. To simulate the CO{sub 2} leak flow in a turbo-machinery with higher accuracy in the future, the real gas effect and friction factor will be considered in the CO{sub 2} critical flow model. Moreover, the experimentally obtained temperatures differed somewhat from the numerically obtained temperatures due to insufficient insulation and the large thermal inertia of the CO{sub 2} critical flow facility. Insulation on the connecting pipes and the low-pressure tank will be added and additional tests will be conducted.

  18. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.
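
    As commonly described for HS06, one copy of the suite runs concurrently per core and the machine score is the sum over copies of the geometric mean of the per-benchmark SPEC ratios. A sketch of that aggregation with invented per-benchmark ratios (the numbers are assumptions chosen only to land near the reported 10.4, not measured data):

```python
import math

def geo_mean(xs):
    # Geometric mean via the mean of logs (stable for ratio-like data).
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Invented SPEC ratios for each of 4 concurrent copies (one per core);
# real HS06 runs the all_cpp subset of SPEC CPU2006 (7 benchmarks).
copies = [
    [2.7, 2.5, 2.6, 2.8, 2.4, 2.6, 2.7],
    [2.6, 2.5, 2.7, 2.6, 2.5, 2.6, 2.6],
    [2.6, 2.6, 2.5, 2.7, 2.6, 2.5, 2.7],
    [2.5, 2.7, 2.6, 2.6, 2.7, 2.6, 2.5],
]
hs06 = sum(geo_mean(c) for c in copies)
print(f"HS06 ~ {hs06:.1f}")
```

    The per-copy geometric mean makes each copy's score insensitive to the units of any single benchmark; summing over copies credits the machine for running all cores at once.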

  19. HS06 benchmark for an ARM server

    International Nuclear Information System (INIS)

    Kluth, Stefan

    2014-01-01

    We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  20. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond-budgeting initiatives, target costing, piece-rate systems and value-based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...... as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advantages. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards...... the conditions under which the market mechanism performs within organizations. This paper aims to contribute to research by providing more insight into the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external...

  1. Benchmarking in Czech Higher Education

    OpenAIRE

    Plaček Michal; Ochrana František; Půček Milan

    2015-01-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Base...

  2. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  3. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone.

  4. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  5. Benchmarking of the PHOENIX-P/ANC [Advanced Nodal Code] advanced nuclear design system

    International Nuclear Information System (INIS)

    Nguyen, T.Q.; Liu, Y.S.; Durston, C.; Casadei, A.L.

    1988-01-01

    At Westinghouse, an advanced neutronic methods program was designed to improve the quality of the predictions, enhance flexibility in designing advanced fuel and related products, and improve design lead time. Extensive benchmarking data is presented to demonstrate the accuracy of the Advanced Nodal Code (ANC) and the PHOENIX-P advanced lattice code. Qualification data to demonstrate the accuracy of ANC include comparison of key physics parameters against a fine-mesh diffusion theory code, TORTISE. Benchmarking data to demonstrate the validity of the PHOENIX-P methodologies include comparison of physics predictions against critical experiments, isotopics measurements and measured power distributions from spatial criticals. The accuracy of the PHOENIX-P/ANC Advanced Design System is demonstrated by comparing predictions of hot zero power physics parameters and hot full power core follow against measured data from operating reactors. The excellent performance of this system for a broad range of comparisons establishes the basis for implementation of these tools for core design, licensing and operational follow of PWR [pressurized water reactor] cores at Westinghouse.

  6. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts: this shape is considered as a fixed parameter in the benchmarking...

  7. VVER-1000 MOX core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities include benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  8. Benchmark comparisons of evaluated nuclear data files

    International Nuclear Information System (INIS)

    Resler, D.A.; Howerton, R.J.; White, R.M.

    1994-05-01

    With the availability and maturity of several evaluated nuclear data files, it is timely to compare the results of integral tests with calculations using these different files. We discuss here our progress in making integral benchmark tests of the following nuclear data files: ENDL-94, ENDF/B-V and -VI, JENDL-3, JEF-2, and BROND-2. The methods used to process these evaluated libraries in a consistent way into application files for use in Monte Carlo calculations are presented. Using these libraries, we are calculating, and comparing to experiment, k-eff for 68 fast critical assemblies of 233U, 235U and 239Pu with reflectors of various materials and thicknesses.
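
    A common way to summarize such integral tests is the C/E (calculated-over-experimental) k-eff ratio per assembly, reported as a mean and spread per library. A sketch with invented values (a real comparison would cover all 68 assemblies and every listed library):

```python
import statistics

# Invented k-eff C/E values for a handful of fast critical assemblies,
# grouped by evaluated nuclear data library. Values near 1.0000 indicate
# agreement between calculation and experiment.
ce = {
    "ENDF/B-VI": [1.0021, 0.9985, 1.0008, 0.9992, 1.0015],
    "JENDL-3":   [1.0040, 1.0012, 1.0031, 0.9998, 1.0025],
}
for lib, ratios in ce.items():
    mean = statistics.mean(ratios)
    sd = statistics.stdev(ratios)
    print(f"{lib}: mean C/E = {mean:.4f}, sd = {sd:.4f}")
```

    The mean C/E per library exposes a systematic bias; the standard deviation shows how consistently that bias holds across assembly types.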

  9. The impact and applicability of critical experiment evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Brewer, R. [Los Alamos National Lab., NM (United States)

    1997-06-01

    This paper very briefly describes a project to evaluate previously performed critical experiments. The evaluation is intended for use by criticality safety engineers to verify calculations, and may also be used to identify data which need further investigation. The evaluation process is briefly outlined; the accepted benchmark critical experiments will be used as a standard for verification and validation. The end result of the project will be a comprehensive reference document.

  10. Experiments for IFR fuel criticality in ZPPR-21

    International Nuclear Information System (INIS)

    Olsen, D.N.; Collins, P.J.; Carpenter, S.G.

    1991-01-01

    A series of benchmark measurements was made in ZPPR-21 to validate criticality calculations for fuel operations in Argonne's Integral Fast Reactor. Six different mixtures of Pu/U/Zr fuel with a graphite reflector were built, and criticality was determined by period measurements. The assemblies were isolated from room-return problems by a lithium hydride shield. Analysis was done using a fully-detailed model with the VIM Monte Carlo code and ENDF/B-V.2 data. Sensitivity analysis was used to validate the measurements against other benchmark data. A simple RZ model was defined and then used with the KENO code. Corrections to the RZ model were provided by the VIM calculations with low statistical uncertainty. 7 refs., 5 figs., 5 tabs

  11. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group on Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithms and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method and for evaluating the nuclear data used in the codes. (author)

  12. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui [National Renewable Energy Laboratory (NREL), Golden, CO (United States). Transportation and Hydrogen Systems Center

    2017-10-19

    In FY16, the thermal performance of the 2014 Honda Accord Hybrid power electronics thermal management system was benchmarked. Both experiments and numerical simulation were utilized to thoroughly study the thermal resistances and temperature distribution in the power module. Experimental results obtained from the water-ethylene glycol tests provided the junction-to-liquid thermal resistance. The finite element analysis (FEA) and computational fluid dynamics (CFD) models were found to yield a good match with experimental results. Both experimental and modeling results demonstrate that the passive stack is the dominant thermal resistance for both the motor and power electronics systems. The 2014 Accord power electronics system yields steady-state thermal resistance values of around 42-50 mm²·K/W, depending on the flow rate. At a typical flow rate of 10 liters per minute, the thermal resistance of the Accord system was found to be about 44 percent lower than that of the 2012 Nissan LEAF system that was benchmarked in FY15. The main reason for the difference is that the Accord power module used a metalized-ceramic substrate and eliminated the thermal interface material layers. FEA models were developed to study the transient performance of the 2012 Nissan LEAF, 2014 Accord, and two other systems that feature conventional power module designs. The simulation results indicate that the 2012 LEAF power module has the lowest thermal impedance at time scales of less than one second. This is probably due to moving low-thermal-conductivity materials further away from the heat source and enhancing the heat-spreading effect of the copper-molybdenum plate close to the insulated gate bipolar transistors. When approaching steady state, the Honda system shows lower thermal impedance. Measurement results of the thermal resistance of the 2015 BMW i3 power electronic system indicate that the i3 insulated gate bipolar transistor module has significantly lower junction
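
    An area-specific junction-to-liquid resistance in mm²·K/W can be recovered from a temperature-rise measurement as R'' = (T_junction - T_liquid) x A / P. A sketch with invented values chosen to fall inside the 42-50 mm²·K/W range quoted above (the temperatures, power and footprint are hypothetical, not NREL's measurements):

```python
# Invented measurement values illustrating the area-specific
# junction-to-liquid thermal resistance R'' = (Tj - Tliquid) * A / P,
# in the mm^2*K/W units used in the report.
t_junction = 125.0   # junction temperature, deg C (hypothetical)
t_liquid = 65.0      # water-ethylene glycol coolant temperature, deg C
power = 200.0        # dissipated power per device, W (hypothetical)
area = 150.0         # device footprint, mm^2 (hypothetical)
r_specific = (t_junction - t_liquid) * area / power
print(f"R'' = {r_specific:.0f} mm^2*K/W")  # 45 mm^2*K/W
```

    Multiplying by footprint area makes resistances comparable across devices of different sizes, which is why the report quotes mm²·K/W rather than K/W.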

  13. Benchmarking of hospital information systems - a comparative analysis of benchmarking clusters in German-speaking countries (Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster)

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. In recent years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system’s and information management’s costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data processing systems, organizational structures of information management and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  14. Impact of MCNP unresolved resonance probability-table treatment on uranium and plutonium benchmarks

    International Nuclear Information System (INIS)

    Mosteller, R.D.; Little, R.C.

    1998-01-01

    Versions of MCNP up through and including 4B have not accurately modeled neutron self-shielding effects in the unresolved resonance energy region. Recently, a probability-table treatment has been incorporated into a developmental version of MCNP. This paper presents MCNP results for a variety of uranium and plutonium critical benchmarks, calculated with and without the probability-table treatment

  15. Benchmarking a first-principles thermal neutron scattering law for water ice with a diffusion experiment

    Directory of Open Access Journals (Sweden)

    Holmes Jesse

    2017-01-01

    Full Text Available The neutron scattering properties of water ice are of interest to the nuclear criticality safety community for the transport and storage of nuclear materials in cold environments. The common hexagonal phase ice Ih has locally ordered, but globally disordered, H2O molecular orientations. A 96-molecule supercell is modeled using the VASP ab initio density functional theory code and PHONON lattice dynamics code to calculate the phonon vibrational spectra of H and O in ice Ih. These spectra are supplied to the LEAPR module of the NJOY2012 nuclear data processing code to generate thermal neutron scattering laws for H and O in ice Ih in the incoherent approximation. The predicted vibrational spectra are optimized to be representative of the globally averaged ice Ih structure by comparing theoretically calculated and experimentally measured total cross sections and inelastic neutron scattering spectra. The resulting scattering kernel is then supplied to the MC21 Monte Carlo transport code to calculate time eigenvalues for the fundamental mode decay in ice cylinders at various temperatures. Results are compared to experimental flux decay measurements for a pulsed-neutron die-away diffusion benchmark.
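
    In a pulsed-neutron die-away measurement of the kind used as the benchmark here, the post-pulse detector response decays approximately as n(t) = A·exp(-alpha·t), and the fundamental-mode time eigenvalue alpha is extracted by a linear fit to the log of the counts. A sketch on synthetic data (the decay constant and noise level are invented, not the MC21 or experimental results):

```python
import math
import random

# Synthetic die-away data: n(t) = A * exp(-alpha * t) with 1% noise.
random.seed(0)
alpha_true = 950.0                           # decay constant, 1/s (invented)
times = [i * 1.0e-5 for i in range(1, 51)]   # 10-microsecond time bins
counts = [1.0e6 * math.exp(-alpha_true * t) * (1 + random.gauss(0, 0.01))
          for t in times]

# Least-squares line through (t, ln n): the slope estimates -alpha.
n = len(times)
mt = sum(times) / n
ml = sum(math.log(c) for c in counts) / n
slope = (sum((t - mt) * (math.log(c) - ml) for t, c in zip(times, counts))
         / sum((t - mt) ** 2 for t in times))
alpha_fit = -slope
print(f"fitted alpha = {alpha_fit:.0f} 1/s")
```

    Repeating the fit at several temperatures gives the alpha(T) curve that the computed scattering kernel must reproduce, which is what makes the diffusion experiment a benchmark for the thermal scattering law.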

  16. Medical school benchmarking - from tools to programmes.

    Science.gov (United States)

    Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  17. Criticality safety validation of MCNP5 using continuous energy libraries

    International Nuclear Information System (INIS)

    Salome, Jean A.D.; Pereira, Claubia; Assuncao, Jonathan B.A.; Veloso, Maria Auxiliadora F.; Costa, Antonella L.; Silva, Clarysson A.M. da

    2013-01-01

    The study of subcritical systems is very important in the design, installation and operation of various devices, mainly nuclear reactors and power plants. The information generated by these systems guides the decisions to be taken in the executive project, the economic viability and the safety measures to be employed in a nuclear facility. By simulating experiments from the International Handbook of Evaluated Criticality Safety Benchmark Experiments, the MCNP5 code was validated for nuclear criticality analysis. Its continuous-energy libraries were used. The average values and standard deviations (SD) were evaluated. The results obtained with the code are very similar to the values obtained in the benchmark experiments. (author)

  18. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
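
    The effect the author debates is easy to reproduce: with per-query times in seconds, one pathologically fast query drags the geometric mean down far more than the arithmetic mean, rewarding optimization of a single query. The timings below are invented for illustration:

```python
import math

# Invented per-query times (seconds) for a 4-query stream: three slow
# queries and one query optimized to be nearly free.
times = [100.0, 100.0, 100.0, 0.01]
arith = sum(times) / len(times)
geo = math.exp(sum(math.log(t) for t in times) / len(times))
print(f"arithmetic mean = {arith:.1f} s, geometric mean = {geo:.1f} s")
# The geometric mean (10.0 s) looks 7.5x better than the arithmetic
# mean (75.0 s) even though total work is dominated by the slow queries.
```

    Under the arithmetic mean, halving the 0.01 s query is worthless; under the geometric mean it improves the score by a further 16%, which is the perverse incentive the paper attributes to the TPC-D metric.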

  19. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  20. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...... not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry...