WorldWideScience

Sample records for benchmark critical experiments

  1. Benchmarking criticality safety calculations with subcritical experiments

    International Nuclear Information System (INIS)

    Mihalczo, J.T.

    1984-06-01

    Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations, but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases, subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained that can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where such comparisons were not possible, examples from critical experiments have been used, but the measurement methods could also be applied to subcritical experiments.

  2. International handbook of evaluated criticality safety benchmark experiments

    International Nuclear Information System (INIS)

    2010-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency (OECD-NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span over 55,000 pages and contain 516 evaluations with benchmark specifications for 4,405 critical, near critical, or subcritical configurations, 24 criticality alarm placement / shielding configurations with multiple dose points for each, and 200 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these evaluations; however, benchmark specifications are not derived for such experiments (in some cases models are provided in an appendix). Approximately 770 experimental configurations are categorized as unacceptable for use as criticality safety benchmark experiments.
Additional evaluations are in progress and will be

  3. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input

  4. ICSBEP-2007, International Criticality Safety Benchmark Experiment Handbook

    International Nuclear Information System (INIS)

    Blair Briggs, J.

    2007-01-01

    1 - Description: The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organisation for Economic Co-operation and Development - Nuclear Energy Agency (OECD-NEA). This handbook contains criticality safety benchmark specifications that have been derived from experiments that were performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material. The example calculations presented do not constitute a validation of the codes or cross section data. The work of the ICSBEP is documented as an International Handbook of Evaluated Criticality Safety Benchmark Experiments. Currently, the handbook spans over 42,000 pages and contains 464 evaluations representing 4,092 critical, near-critical, or subcritical configurations, 21 criticality alarm placement/shielding configurations with multiple dose points for each, and 46 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. The handbook is intended for use by criticality safety analysts to perform necessary validations of their calculational techniques and is expected to be a valuable tool for decades to come. The ICSBEP Handbook is available on DVD. You may request a DVD by completing the DVD Request Form on the Internet. Access to the Handbook on the Internet requires a password. You may request a password by completing the Password Request Form.
The Web address is: http://icsbep.inel.gov/handbook.shtml 2 - Method of solution: Experiments that are found

  5. Introduction to 'International Handbook of Criticality Safety Benchmark Experiments'

    International Nuclear Information System (INIS)

    Komuro, Yuichi

    1998-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organization for Economic Cooperation and Development-Nuclear Energy Agency (OECD-NEA). The 'International Handbook of Criticality Safety Benchmark Experiments' was prepared, and is updated annually, by the working group of the project. This handbook contains criticality safety benchmark specifications that have been derived from experiments that were performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate the calculation techniques they use. The author briefly introduces this informative handbook and encourages Japanese engineers who are in charge of nuclear criticality safety to use it. (author)

  6. International Handbook of Evaluated Criticality Safety Benchmark Experiments - ICSBEP (DVD), Version 2013

    International Nuclear Information System (INIS)

    2013-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical experiment facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span nearly 66,000 pages and contain 558 evaluations with benchmark specifications for 4,798 critical, near critical or subcritical configurations, 24 criticality alarm placement/shielding configurations with multiple dose points for each, and 200 configurations that have been categorised as fundamental physics measurements that are relevant to criticality safety applications. New to the Handbook are benchmark specifications for Critical, Bare, HEU(93.2) Metal Sphere experiments, referred to as ORSphere, that were performed by a team of experimenters at Oak Ridge National Laboratory in the early 1970s. A photograph of this assembly is shown on the front cover.

  7. Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation

    International Nuclear Information System (INIS)

    Bess, John D.; Briggs, J. Blair; Nigg, David W.

    2009-01-01

    One of the challenges that today's new workforce of nuclear criticality safety engineers face is the opportunity to provide assessment of nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.

  8. Analysis of the impact of correlated benchmark experiments on the validation of codes for criticality safety analysis

    International Nuclear Information System (INIS)

    Bock, M.; Stuke, M.; Behler, M.

    2013-01-01

    The validation of a code for criticality safety analysis requires the recalculation of benchmark experiments. The selected benchmark experiments are chosen such that they have properties similar to the application case that has to be assessed. A common source of benchmark experiments is the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) compiled by the 'International Criticality Safety Benchmark Evaluation Project' (ICSBEP). In order to take full advantage of the information provided by the individual benchmark descriptions for the application case, the recommended procedure is to perform an uncertainty analysis. The latter is based on the uncertainties of experimental results included in most of the benchmark descriptions. Such analyses can be performed by means of the Monte Carlo sampling technique. The consideration of uncertainties is also being introduced in the supplementary sheet of DIN 25478 'Application of computer codes in the assessment of criticality safety'. However, for a correct treatment of uncertainties, taking into account only the individual uncertainties of the benchmark experiments is insufficient. In addition, correlations between benchmark experiments have to be handled correctly. For example, these correlations can arise due to different cases of a benchmark experiment sharing the same components, such as fuel pins or fissile solutions. Thus, manufacturing tolerances of these components (e.g. the diameter of the fuel pellets) have to be considered in a consistent manner in all cases of the benchmark experiment. At the 2012 meeting of the Expert Group on 'Uncertainty Analysis for Criticality Safety Assessment' (UACSA) of the OECD/NEA, a benchmark proposal was outlined that aimed at determining the impact of benchmark correlations on the estimation of the computational bias of the neutron multiplication factor (k_eff). The analysis presented here is based on this proposal. (orig.)
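    The correlation mechanism described above can be illustrated with a small Monte Carlo sampling sketch. All numbers below (sensitivities, tolerances, nominal k_eff values) are invented for illustration and are not taken from any ICSBEP evaluation; the point is only that a tolerance shared by two benchmark cases induces a strong correlation between their sampled k_eff results.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 10_000

# Hypothetical linear perturbation model: cases A and B share the same
# fuel-pellet batch, so one pellet-diameter tolerance perturbs both k_eff
# results; an independent solution-density tolerance perturbs only case B.
shared = rng.normal(0.0, 10e-6, n_samples)   # pellet diameter deviation [m]
indep = rng.normal(0.0, 0.5, n_samples)      # solution density deviation [g/l]

k_a = 1.0000 + 40.0 * shared                 # case A: pellet sensitivity only
k_b = 0.9990 + 35.0 * shared + 2e-4 * indep  # case B: shares the pellet batch

# The shared tolerance dominates both variances, so the cases are strongly
# correlated and cannot be treated as independent in a bias estimate.
corr = np.corrcoef(k_a, k_b)[0, 1]
print(f"induced correlation between cases: {corr:.2f}")
```

Ignoring such a correlation would overstate the effective number of independent benchmarks and understate the uncertainty of the estimated bias.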

  9. 2010 Criticality Accident Alarm System Benchmark Experiments At The CEA Valduc SILENE Facility

    International Nuclear Information System (INIS)

    Miller, Thomas Martin; Dunn, Michael E.; Wagner, John C.; McMahan, Kimberly L.; Authier, Nicolas; Jacquet, Xavier; Rousseau, Guillaume; Wolff, Herve; Piot, Jerome; Savanier, Laurence; Baclet, Nathalie; Lee, Yi-kang; Masse, Veronique; Trama, Jean-Christophe; Gagnier, Emmanuel; Naury, Sylvie; Lenain, Richard; Hunter, Richard; Kim, Soon; Dulik, George Michael; Reynolds, Kevin H.

    2011-01-01

    Several experiments were performed at the CEA Valduc SILENE reactor facility, which are intended to be published as evaluated benchmark experiments in the ICSBEP Handbook. These evaluated benchmarks will be useful for the verification and validation of radiation transport codes and evaluated nuclear data, particularly those that are used in the analysis of criticality accident alarm systems (CAASs). During these experiments SILENE was operated in pulsed mode in order to be representative of a criticality accident, which is rare among shielding benchmarks. Measurements of the neutron flux were made with neutron activation foils, and measurements of photon doses were made with thermoluminescent dosimeters (TLDs). Also unique to these experiments was the presence of several detectors used in actual CAASs, which allowed for the observation of their behavior during an actual critical pulse. This paper presents the preliminary measurement data currently available from these experiments. Also presented are comparisons of preliminary computational results from SCALE and TRIPOLI-4 with the preliminary measurement data.

  10. MCNP benchmark analyses of critical experiments for the Space Nuclear Thermal Propulsion program

    International Nuclear Information System (INIS)

    Selcow, E.C.; Cerbone, R.J.; Ludewig, H.; Mughabghab, S.F.; Schmidt, E.; Todosow, M.; Parma, E.J.; Ball, R.M.; Hoovler, G.S.

    1993-01-01

    Benchmark analyses have been performed of Particle Bed Reactor (PBR) critical experiments (CX) using the MCNP radiation transport code. The experiments have been conducted at the Sandia National Laboratory reactor facility in support of the Space Nuclear Thermal Propulsion (SNTP) program. The test reactor is a nineteen-element, water-moderated and water-reflected thermal system. A series of integral experiments have been carried out to test the capabilities of the radiation transport codes to predict the performance of PBR systems. MCNP was selected as the preferred radiation analysis tool for the benchmark experiments. Comparisons between experimental and calculational results indicate close agreement. This paper describes the analyses of benchmark experiments designed to quantify the accuracy of the MCNP radiation transport code for predicting the performance characteristics of PBR reactors.

  11. MCNP benchmark analyses of critical experiments for the Space Nuclear Thermal Propulsion program

    Science.gov (United States)

    Selcow, Elizabeth C.; Cerbone, Ralph J.; Ludewig, Hans; Mughabghab, Said F.; Schmidt, Eldon; Todosow, Michael; Parma, Edward J.; Ball, Russell M.; Hoovler, Gary S.

    1993-01-01

    Benchmark analyses have been performed of Particle Bed Reactor (PBR) critical experiments (CX) using the MCNP radiation transport code. The experiments have been conducted at the Sandia National Laboratory reactor facility in support of the Space Nuclear Thermal Propulsion (SNTP) program. The test reactor is a nineteen-element, water-moderated and water-reflected thermal system. A series of integral experiments have been carried out to test the capabilities of the radiation transport codes to predict the performance of PBR systems. MCNP was selected as the preferred radiation analysis tool for the benchmark experiments. Comparisons between experimental and calculational results indicate close agreement. This paper describes the analyses of benchmark experiments designed to quantify the accuracy of the MCNP radiation transport code for predicting the performance characteristics of PBR reactors.

  12. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted.

  13. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    International Nuclear Information System (INIS)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-01-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR-06 are highlighted.

  14. Criticality safety benchmark experiment on 10% enriched uranyl nitrate solution using a 28-cm-thickness slab core

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro; Miyoshi, Yoshinori; Kikuchi, Tsukasa; Watanabe, Shouichi

    2002-01-01

    The second series of critical experiments with 10% enriched uranyl nitrate solution using a 28-cm-thick slab core has been performed with the Static Experiment Critical Facility of the Japan Atomic Energy Research Institute. Systematic critical data were obtained by changing the uranium concentration of the fuel solution from 464 to 300 gU/l under various reflector conditions. In this paper, the thirteen critical configurations for water-reflected cores and unreflected cores are identified and evaluated. The effects of uncertainties in the experimental data on k_eff are quantified by sensitivity studies. Benchmark model specifications that are necessary to construct a calculational model are given. The uncertainties of the k_eff values included in the benchmark model specifications are approximately 0.1% Δk_eff. The thirteen critical configurations are judged to be acceptable benchmark data. Using the benchmark model specifications, sample calculation results are provided with several sets of standard codes and cross section data. (author)
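    A benchmark uncertainty on the order of 0.1% Δk_eff is typically obtained by propagating each independent experimental uncertainty through a sensitivity coefficient and combining the components in quadrature. The sketch below illustrates the arithmetic only; the component names and values are made up, not this evaluation's actual uncertainty budget.

```python
import math

# Hypothetical Delta-k_eff contributions from independent experimental
# uncertainties (each already multiplied by its sensitivity coefficient).
components = {
    "uranium concentration":    4.0e-4,
    "solution temperature":     2.0e-4,
    "critical solution height": 6.0e-4,
    "reflector geometry":       3.0e-4,
}

# Independent components combine in quadrature (root-sum-square).
total = math.sqrt(sum(u**2 for u in components.values()))
print(f"total benchmark uncertainty: {total:.1e} Delta-k_eff")
```

With these illustrative inputs the total is about 8e-4, i.e. roughly the 0.1% Δk_eff magnitude quoted in the abstract.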

  15. International Criticality Safety Benchmark Evaluation Project (ICSBEP) - ICSBEP 2015 Handbook

    International Nuclear Information System (INIS)

    Bess, John D.

    2015-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy (DOE). The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross-section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span approximately 69,000 pages and contain 567 evaluations with benchmark specifications for 4,874 critical, near-critical or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points for each, and 207 configurations that have been categorised as fundamental physics measurements that are relevant to criticality safety applications. New to the handbook are benchmark specifications for neutron activation foil and thermoluminescent dosimeter measurements performed at the SILENE critical assembly in Valduc, France, as part of a joint venture in 2010 between the US DOE and the French Alternative Energies and Atomic Energy Commission (CEA). A photograph of this experiment is shown on the front cover. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these

  16. Validation of VHTRC calculation benchmark of critical experiment using the MCB code

    Directory of Open Access Journals (Sweden)

    Stanisz Przemysław

    2016-01-01

    The calculation benchmark problem of the Very High Temperature Reactor Critical assembly (VHTRC), a pin-in-block-type core critical assembly, has been investigated with the Monte Carlo Burnup (MCB) code in order to validate the latest version of the nuclear data library based on the ENDF format. The benchmark was executed on the basis of the VHTRC benchmark available from the International Handbook of Evaluated Reactor Physics Benchmark Experiments. This benchmark is useful for verifying the discrepancies in k_eff values between various libraries and experimental values, which makes it possible to improve the accuracy of neutron transport calculations and may help in designing high-performance commercial VHTRs. Almost all safety parameters depend on the accuracy of neutron transport calculation results, which in turn depend on the accuracy of nuclear data libraries. Thus, evaluation of the applicability of the libraries to VHTR modelling is an important subject. We compared the numerical results with experimental measurements using two versions of the available nuclear data (ENDF/B-VII.1 and JEFF-3.2) prepared for the required temperatures. Calculations were performed with the MCB code, which allows a very precise representation of the complex VHTR geometry, including the double heterogeneity of a fuel element. In this paper, together with the impact of nuclear data, we also discuss the impact of different lattice modelling inside the fuel pins. The calculated k_eff values show good agreement with each other and with the experimental data within the 1σ range of the experimental uncertainty. Because some propagated discrepancies were observed, we propose appropriate corrections to the experimental constants, which can improve the reactivity coefficient dependency. The obtained results confirm the accuracy of the new nuclear data libraries.
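    Library-to-experiment k_eff discrepancies of the kind discussed above are conventionally quoted in pcm (1 pcm = 1e-5 in reactivity). A minimal sketch of that conversion; the k_eff values are invented for illustration and are not the VHTRC results.

```python
def diff_pcm(k_calc: float, k_exp: float) -> float:
    """Reactivity difference between calculation and experiment, in pcm."""
    return (k_calc - k_exp) / (k_calc * k_exp) * 1e5

# Hypothetical calculated eigenvalues for two nuclear data libraries
# against a measured critical state (k_exp = 1 by definition).
k_exp = 1.00000
results = {"ENDF/B-VII.1": 1.00120, "JEFF-3.2": 1.00085}

for lib, k in results.items():
    print(f"{lib}: {diff_pcm(k, k_exp):+.0f} pcm")
```

Whether such a discrepancy is significant is then judged against the combined statistical and experimental 1σ uncertainty, as in the abstract.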

  17. Benchmark criticality experiments for fast fission configuration with high enriched nuclear fuel

    International Nuclear Information System (INIS)

    Sikorin, S.N.; Mandzik, S.G.; Polazau, S.A.; Hryharovich, T.K.; Damarad, Y.V.; Palahina, Y.A.

    2014-01-01

    Benchmark criticality experiments of a fast heterogeneous configuration with high enriched uranium (HEU) nuclear fuel were performed using the 'Giacint' critical assembly of the Joint Institute for Power and Nuclear Research - Sosny (JIPNR-Sosny) of the National Academy of Sciences of Belarus. The critical assembly core comprised casing-free fuel assemblies with a 34.8 mm across-flats (wrench) size. Each fuel assembly contains 19 fuel rods of two types: the first type is metallic uranium fuel rods of 90% U-235 enrichment; the second is uranium dioxide fuel rods of 36% U-235 enrichment. The total fuel rod length is 620 mm, and the active fuel length is 500 mm. The outer fuel rod diameter is 7 mm, the wall is 0.2 mm thick, and the fuel material diameter is 6.4 mm. The clad material is stainless steel. The side radial reflector consists of an inner beryllium layer and an outer stainless steel layer. The top and bottom axial reflectors are of stainless steel. An analysis of the results obtained from these benchmark experiments, based on detailed calculation models and simulations of the different experiments, is presented. The sensitivity of the obtained results to the material specifications and the modeling details was examined. The analyses used the MCNP and MCU computer programs. This paper presents the experimental and analytical results. (authors)

  18. The International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    International Nuclear Information System (INIS)

    Briggs, J.B.

    2003-01-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organisation for Economic Cooperation and Development (OECD) - Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Israel, Spain, and Brazil are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments.' The 2003 Edition of the Handbook contains benchmark model specifications for 3070 critical or subcritical configurations that are intended for validating computer codes that calculate effective neutron multiplication and for testing basic nuclear data. (author)

  19. RECENT ADDITIONS OF CRITICALITY SAFETY RELATED INTEGRAL BENCHMARK DATA TO THE ICSBEP AND IRPHEP HANDBOOKS

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2009-09-01

    High-quality integral benchmark experiments have always been a priority for criticality safety. However, interest in integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of future criticality safety needs to support next generation reactor and advanced fuel cycle concepts. The importance of drawing upon existing benchmark data is becoming more apparent because of dwindling availability of critical facilities worldwide and the high cost of performing new experiments. Integral benchmark data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the International Handbook of Reactor Physics Benchmark Experiments are widely used. Benchmark data have been added to these two handbooks since the last Nuclear Criticality Safety Division Topical Meeting in Knoxville, Tennessee (September 2005). This paper highlights these additions.

  20. Recent additions of criticality safety related integral benchmark data to the ICSBEP and IRPHEP handbooks

    International Nuclear Information System (INIS)

    Briggs, J. B.; Scott, L.; Rugama, Y.; Sartori, E.

    2009-01-01

    High-quality integral benchmark experiments have always been a priority for criticality safety. However, interest in integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of future criticality safety needs to support next generation reactor and advanced fuel cycle concepts. The importance of drawing upon existing benchmark data is becoming more apparent because of dwindling availability of critical facilities worldwide and the high cost of performing new experiments. Integral benchmark data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the International Handbook of Reactor Physics Benchmark Experiments are widely used. Benchmark data have been added to these two handbooks since the last Nuclear Criticality Safety Division Topical Meeting in Knoxville, Tennessee (September 2005). This paper highlights these additions. (authors)

  1. Recent Additions of Criticality Safety Related Integral Benchmark Data to the ICSBEP and IRPhEP Handbooks

    International Nuclear Information System (INIS)

    Briggs, J. Blair; Scott, Lori; Rugama, Yolanda; Sartori, Enrico

    2009-01-01

    High-quality integral benchmark experiments have always been a priority for criticality safety. However, interest in integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of future criticality safety needs to support next generation reactor and advanced fuel cycle concepts. The importance of drawing upon existing benchmark data is becoming more apparent because of dwindling availability of critical facilities worldwide and the high cost of performing new experiments. Integral benchmark data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the International Handbook of Reactor Physics Benchmark Experiments are widely used. Benchmark data have been added to these two handbooks since the last Nuclear Criticality Safety Division Topical Meeting in Knoxville, Tennessee (September 2005). This paper highlights these additions.

  2. The ORSphere Benchmark Evaluation and Its Potential Impact on Nuclear Criticality Safety

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Margaret A. Marshall; J. Blair Briggs

    2013-10-01

    In the early 1970s, critical experiments using an unreflected metal sphere of highly enriched uranium (HEU) were performed with the aim of providing a “very accurate description…as an ideal benchmark for calculational methods and cross-section data files.” Two near-critical configurations of the Oak Ridge Sphere (ORSphere) were evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook). The results from those benchmark experiments were then compared with additional unmoderated and unreflected HEU metal benchmark experiment configurations currently found in the ICSBEP Handbook. For basic geometries (spheres, cylinders, and slabs) the eigenvalues calculated using MCNP5 and ENDF/B-VII.0 were within 3σ of their respective benchmark values. There appears to be generally good agreement between calculated and benchmark values for spherical and slab geometry systems. Cylindrical geometry configurations tended to calculate low, including more complex bare HEU metal systems containing cylinders. The ORSphere experiments do not calculate within their 1σ uncertainty, and there is a possibility that the effect of the measured uncertainties for the GODIVA I benchmark may need to be reevaluated. There is significant scatter in the calculations for the highly correlated ORCEF cylinder experiments, which are constructed from close-fitting HEU discs and annuli. Selection of a nuclear data library can have a larger impact on calculated eigenvalue results than the variation found within calculations of a given experimental series, such as the ORCEF cylinders, using a single nuclear data set.
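    The agreement criterion used in comparisons like the one above (calculated eigenvalue within some multiple of the combined uncertainty of its benchmark value) can be sketched in a few lines. The case values below are hypothetical, not the ORSphere or GODIVA numbers.

```python
def within_k_sigma(calc: float, bench: float,
                   sigma_calc: float, sigma_bench: float,
                   k: float = 3.0) -> bool:
    """True if calc and bench agree within k combined standard deviations.

    Independent uncertainties (e.g. Monte Carlo statistical sigma and the
    benchmark's experimental sigma) combine in quadrature.
    """
    combined = (sigma_calc**2 + sigma_bench**2) ** 0.5
    return abs(calc - bench) <= k * combined

# Hypothetical bare HEU sphere case: MCNP statistical sigma vs the
# evaluated benchmark uncertainty.
print(within_k_sigma(calc=0.9996, bench=1.0000,
                     sigma_calc=0.0001, sigma_bench=0.0011))  # True
```

Note that when the benchmark uncertainty dominates, the test is effectively about the quality of the evaluation rather than the Monte Carlo statistics.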

  3. The International Criticality Safety Benchmark Evaluation Project

    International Nuclear Information System (INIS)

    Briggs, B. J.; Dean, V. F.; Pesic, M. P.

    2001-01-01

    experimenters or individuals who are familiar with the experimenters or the experimental facility; (3) compile the data into a standardized format; (4) perform calculations of each experiment with standard criticality safety codes; and (5) formally document the work into a single source of verified benchmark critical data. The work of the ICSBEP is documented as an OECD handbook, in 7 volumes, entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'. This handbook is available on CD-ROM or on the Internet (http://icsbep.inel.gov/icsbep). Over 150 scientists from around the world have combined their efforts to produce this Handbook. The 2000 publication of the handbook will span over 19,000 pages and contain benchmark specifications for approximately 284 evaluations containing 2352 critical configurations. The handbook is currently in use in 45 different countries by criticality safety analysts to perform necessary validation of their calculation techniques, and it is expected to be a valuable tool for decades to come. As a result of the efforts of the ICSBEP: (1) a large portion of the tedious, redundant, and very costly research and processing of criticality safety experimental data has been eliminated; (2) the necessary step in criticality safety analyses of validating computer codes with benchmark data is greatly streamlined; (3) gaps in data are being highlighted; (4) lost data are being retrieved; (5) deficiencies and errors in cross section processing codes and neutronic codes are being identified; and (6) over a half-century of valuable criticality safety data are being preserved. (author)

  4. Criticality Benchmark Analysis of Water-Reflected Uranium Oxyfluoride Slabs

    International Nuclear Information System (INIS)

    Marshall, Margaret A.; Bess, John D.

    2009-01-01

    A series of twelve experiments was conducted in the mid-1950s at the Oak Ridge National Laboratory Critical Experiments Facility to determine the critical conditions of a semi-infinite water-reflected slab of aqueous uranium oxyfluoride (UO2F2). A different slab thickness was used for each experiment. Results from the twelve experiments recorded in the laboratory notebook were published in Reference 1. Seven of the twelve experiments were determined to be acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. This evaluation will be available to handbook users not only for the validation of computer codes and integral cross-section data, but also for the reevaluation of experimental data used in the ANSI/ANS-8.1 standard. This evaluation is important as part of the technical basis of the subcritical slab limits in ANSI/ANS-8.1. The original publication of the experimental results was used for the determination of bias and bias uncertainties for subcritical slab limits, as documented in Hugh Clark's paper 'Subcritical Limits for Uranium-235 Systems'.
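    In its simplest form, the bias determination mentioned above is a statistic over calculated eigenvalues for experiments known to be critical. A hedged sketch (the keff values below are hypothetical, not those of the oxyfluoride series):

    ```python
    import statistics

    def bias_and_uncertainty(k_calcs):
        """Bias = mean calculated keff minus 1.0 (the benchmarks are critical);
        bias uncertainty = sample standard deviation of the calculated values."""
        bias = statistics.mean(k_calcs) - 1.0
        return bias, statistics.stdev(k_calcs)

    # Hypothetical calculated eigenvalues for a set of critical slab benchmarks:
    k_calcs = [0.9981, 0.9992, 1.0004, 0.9987, 0.9996, 1.0001, 0.9990]
    bias, sigma = bias_and_uncertainty(k_calcs)
    ```

    Validation methods in actual standards work add further terms (e.g. benchmark uncertainties and area-of-applicability penalties); this only illustrates the central calculation.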

  5. Criticality safety benchmark evaluation project: Recovering the past

    Energy Technology Data Exchange (ETDEWEB)

    Trumble, E.F.

    1997-06-01

    A very brief summary of the Criticality Safety Benchmark Evaluation Project of the Westinghouse Savannah River Company is provided in this paper. The purpose of the project is to provide a source of evaluated criticality safety experiments in an easily usable format. Another project goal is to search for any experiments that may have been lost or contain discrepancies, and to determine if they can be used. Results of evaluated experiments are being published as US DOE handbooks.

  6. Monte Carlo code criticality benchmark comparisons for waste packaging

    International Nuclear Information System (INIS)

    Alesso, H.P.; Annese, C.E.; Buck, R.M.; Pearson, J.S.; Lloyd, W.R.

    1992-07-01

    COG is a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The objective of this paper is to report COG results for criticality benchmark experiments on both a Cray mainframe and an HP 9000 workstation. COG has recently been ported to workstations to improve its accessibility to a wider community of users. COG has some similarities to a number of other computer codes used in the shielding and criticality community. The recently introduced high-performance reduced instruction set (RISC) UNIX workstations provide computational power that approaches that of mainframes at a fraction of the cost. A version of COG is currently being developed for the Hewlett Packard 9000/730 computer with a UNIX operating system. Subsequent porting operations will move COG to SUN, DEC, and IBM workstations. In addition, a CAD system for preparation of the geometry input for COG is being developed. In July 1977, Babcock & Wilcox Co. (B&W) was awarded a contract to conduct a series of critical experiments that simulated close-packed storage of LWR-type fuel. These experiments provided data for benchmarking and validating calculational methods used in predicting k-effective of nuclear fuel storage in close-packed, neutron-poisoned arrays. Low-enriched UO2 fuel pins in water-moderated lattices in fuel storage represent a challenging criticality calculation for Monte Carlo codes, particularly when the fuel pins extend out of the water. COG and KENO calculational results for these criticality benchmark experiments are presented.

  7. Evaluation of Saxton critical experiments

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Hyung Kook; Noh, Jae Man; Jung, Hyung Guk; Kim, Young Il; Kim, Young Jin [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    As a part of the International Criticality Safety Benchmark Evaluation Project (ICSBEP), the SAXTON critical experiments were reevaluated. The effects on k{sub eff} of the uncertainties in experiment parameters, fuel rod characterization, soluble boron, critical water level, core structure, {sup 241}Am and {sup 241}Pu isotope number densities, random pitch error, duplicated experiment, axial fuel position, model simplification, etc., were evaluated and added to the benchmark-model k{sub eff}. In addition to the detailed model, a simplified model of the Saxton critical experiments was constructed by omitting the top, middle, and bottom grids and ignoring the fuel above water. 6 refs., 1 fig., 3 tabs. (Author)
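    A common ICSBEP convention, and presumably the one behind "evaluated and added to the benchmark-model keff" above, is to treat the individual uncertainty effects as independent and combine them in quadrature into the overall benchmark keff uncertainty. A sketch with made-up component values:

    ```python
    from math import sqrt

    # Hypothetical 1-sigma keff effects of individual experimental uncertainties
    # (fuel rod characterization, soluble boron, critical water level,
    # random pitch error, axial fuel position):
    components = [0.0012, 0.0008, 0.0005, 0.0010, 0.0003]

    # Independent components combine in quadrature into the overall
    # benchmark-model keff uncertainty.
    total = sqrt(sum(c * c for c in components))
    ```

    The quadrature sum is dominated by the largest components, which is why evaluations concentrate on characterizing those.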

  8. Evaluation of Saxton critical experiments

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Hyung Kook; Noh, Jae Man; Jung, Hyung Guk; Kim, Young Il; Kim, Young Jin [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    As a part of the International Criticality Safety Benchmark Evaluation Project (ICSBEP), the SAXTON critical experiments were reevaluated. The effects on k{sub eff} of the uncertainties in experiment parameters, fuel rod characterization, soluble boron, critical water level, core structure, {sup 241}Am and {sup 241}Pu isotope number densities, random pitch error, duplicated experiment, axial fuel position, model simplification, etc., were evaluated and added to the benchmark-model k{sub eff}. In addition to the detailed model, a simplified model of the Saxton critical experiments was constructed by omitting the top, middle, and bottom grids and ignoring the fuel above water. 6 refs., 1 fig., 3 tabs. (Author)

  9. Criticality experiments to provide benchmark data on neutron flux traps

    International Nuclear Information System (INIS)

    Bierman, S.R.

    1988-06-01

    The experimental measurements covered by this report were designed to provide benchmark-type data on water-moderated LWR-type fuel arrays containing neutron flux traps. The experiments were performed at the US Department of Energy Hanford Critical Mass Laboratory, operated by Pacific Northwest Laboratory. The experimental assemblies consisted of 2 × 2 arrays of 4.31 wt% 235U-enriched UO2 fuel rods, uniformly arranged in water on a 1.891 cm square center-to-center spacing. Neutron flux traps were created between the fuel units using metal plates containing varying amounts of boron. Measurements were made to determine the effect that boron loading and distance between the fuel and flux trap had on the amount of fuel required for criticality. Also, measurements were made, using the pulsed neutron source technique, to determine the effect of boron loading on the effective neutron multiplication constant. On two assemblies, reaction rate measurements were made using solid-state track recorders to determine absolute fission rates in 235U and 238U. 14 refs., 12 figs., 7 tabs
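    The pulsed neutron source technique mentioned above infers the multiplication state of a subcritical assembly from the measured prompt-neutron decay constant α, which in point kinetics satisfies α = (ρ − β_eff)/Λ. A sketch inverting that relation; the kinetics parameters are illustrative, not values from the report:

    ```python
    def keff_from_alpha(alpha, beta_eff, gen_time):
        """Invert the point-kinetics relation alpha = (rho - beta_eff) / Lambda,
        with reactivity rho = (keff - 1) / keff, to recover keff from a
        measured prompt decay constant alpha (s^-1)."""
        rho = alpha * gen_time + beta_eff
        return 1.0 / (1.0 - rho)

    # Illustrative values: alpha = -250 s^-1, beta_eff = 0.0070,
    # neutron generation time Lambda = 40 microseconds.
    k = keff_from_alpha(-250.0, 0.0070, 40e-6)
    ```

    In practice β_eff and Λ come from a calculated model of the assembly, so the extracted keff carries their uncertainties as well.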

  10. Growth and Expansion of the International Criticality Safety Benchmark Evaluation Project and the Newly Organized International Reactor Physics Experiment Evaluation Project

    International Nuclear Information System (INIS)

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2007-01-01

    Since ICNC 2003, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) has continued to expand its efforts and broaden its scope. Criticality-alarm/shielding type benchmarks and fundamental physics measurements that are relevant to criticality safety applications are not only included in the scope of the project, but benchmark data are also included in the latest version of the handbook. A considerable number of improvements have been made to the searchable database, DICE, and the criticality-alarm/shielding benchmarks and fundamental physics measurements have been included in the database. There were 12 countries participating in the ICSBEP in 2003. That number has increased to 18 with recent contributions of data and/or resources from Brazil, Czech Republic, Poland, India, Canada, and China. South Africa, Germany, Argentina, and Australia have been invited to participate. Since ICNC 2003, the contents of the ''International Handbook of Evaluated Criticality Safety Benchmark Experiments'' have increased from 350 evaluations (28,000 pages) containing benchmark specifications for 3070 critical or subcritical configurations to 442 evaluations (over 38,000 pages) containing benchmark specifications for 3957 critical or subcritical configurations, 23 criticality-alarm-placement/shielding configurations with multiple dose points for each, and 20 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications in the 2006 Edition of the ICSBEP Handbook. Approximately 30 new evaluations and 250 additional configurations are expected to be added to the 2007 Edition of the Handbook. Since ICNC 2003, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. Beginning in 1999, the IRPhEP was conducted as a pilot activity by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency.

  11. Effects of neutron data libraries and criticality codes on IAEA criticality benchmark problems

    International Nuclear Information System (INIS)

    Sarker, Md.M.; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-10-01

    In order to compare the effects of neutron data libraries and criticality codes on thermal reactor (LWR) calculations, the IAEA criticality benchmark calculations have been performed. The experiments selected in this study include TRX-1 and TRX-2, which have a simple geometric configuration. The reactor lattice calculation codes WIMS-D/4, MCNP-4, JACS (MGCL, KENO), and SRAC were used in the present calculations. The TRX cores were analyzed by WIMS-D/4 using the original WIMS library, and also by MCNP-4, JACS (MGCL, KENO), and SRAC using libraries generated from the JENDL-3 and ENDF/B-IV nuclear data files. An intercomparison of the above-mentioned code systems and cross-section libraries was performed by analyzing the LWR benchmark experiments TRX-1 and TRX-2. The TRX cores were also analyzed for supercritical and subcritical conditions and these results were compared. In the critical condition, the results were in good agreement. But for the supercritical and subcritical conditions, the differences between the results obtained using the different cross-section libraries became larger than for the critical condition. (author)

  12. Preparation of a criticality benchmark based on experiments performed at the RA-6 reactor

    International Nuclear Information System (INIS)

    Bazzana, S.; Blaumann, H.; Marquez Damian, J.I.

    2009-01-01

    The operation and fuel management of a reactor use neutronic modeling to predict its behavior in operational and accidental conditions. This modeling uses computational tools and nuclear data that must be contrasted against benchmark experiments to ensure their accuracy. These benchmarks have to be simple enough to model with the desired computer code and must have quantified and bounded uncertainties. The start-up of the RA-6 reactor, the final stage of the conversion and renewal project, allowed us to obtain experimental results with fresh fuel. In this condition the material composition of the fuel elements is precisely known, which contributes to a more precise modeling of the critical condition. These experimental results are useful for evaluating the precision of the models used to design the core, based on U3Si2 fuel and cadmium wires as burnable poisons, for which no data were previously available. The analysis of this information can be used to validate models for the analysis of similar configurations, which is necessary to follow the operational history of the reactor and perform fuel management. The analysis of the results and the generation of the model were done following the methodology established by the International Criticality Safety Benchmark Evaluation Project, which gathers and analyzes experimental data for critical systems. The results were very satisfactory, resulting in a value for the multiplication factor of the model of 1.0000 ± 0.0044, and a calculated value of 0.9980 ± 0.0001 using MCNP 5 and ENDF/B-VI. The utilization of as-built dimensions and compositions, and the sensitivity analysis, allowed us to review the design calculations and analyze their precision, accuracy, and error compensation.

  13. Criticality benchmark comparisons leading to cross-section upgrades

    International Nuclear Information System (INIS)

    Alesso, H.P.; Annese, C.E.; Heinrichs, D.P.; Lloyd, W.R.; Lent, E.M.

    1993-01-01

    For several years, criticality benchmark calculations have been performed with COG. COG is a point-wise Monte Carlo code developed at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The principal consideration in developing COG was that the resulting calculation would be as accurate as the point-wise cross-sectional data, since no physics computational approximations were used. The objective of this paper is to report on COG results for criticality benchmark experiments, in concert with MCNP comparisons, which are resulting in corrections and upgrades to the point-wise ENDL cross-section data libraries. Benchmarking discrepancies reported here indicated difficulties in the Evaluated Nuclear Data Livermore (ENDL) cross-sections for U-238 at thermal neutron energies. This led to a re-evaluation and selection of the appropriate cross-section values from the several cross-section sets available (ENDL, ENDF/B-V). Further cross-section upgrades are anticipated.

  14. Verification of HELIOS-MASTER system through benchmark of critical experiments

    International Nuclear Information System (INIS)

    Kim, H. Y.; Kim, K. Y.; Cho, B. O.; Lee, C. C.; Zee, S. O.

    1999-01-01

    The HELIOS-MASTER code system is verified through benchmarks of the critical experiments that were performed by the RRC 'Kurchatov Institute' with water-moderated, hexagonally pitched lattices of highly enriched uranium fuel rods (80 w/o). We also used the same input with the MCNP code, as described in the evaluation report, and compared our results with those of the evaluation report. HELIOS, developed by Scandpower A/S, is a two-dimensional transport program for the generation of group cross-sections, and MASTER, developed by KAERI, is a three-dimensional nuclear design and analysis code based on two-group diffusion theory. It solves the neutronics model with the AFEN (Analytic Function Expansion Nodal) method for hexagonal geometry. The results show that the HELIOS-MASTER code system is fast and accurate enough to be used as a nuclear core analysis tool for hexagonal geometry.
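    The two-group diffusion treatment underlying MASTER can be illustrated with the textbook infinite-medium multiplication factor, where the thermal-to-fast flux ratio follows from the group balance. The group constants below are illustrative LWR-like values, not data from this verification:

    ```python
    def k_infinity_two_group(nu_sig_f1, nu_sig_f2, sig_a1, sig_a2, sig_s12):
        """Infinite-medium multiplication factor in two-group diffusion theory.

        Group balance gives the flux ratio phi2/phi1 = sig_s12 / sig_a2, so
        k_inf = (nuSf1 + nuSf2 * Ss12/Sa2) / (Sa1 + Ss12).
        """
        phi_ratio = sig_s12 / sig_a2  # thermal-to-fast flux ratio
        return (nu_sig_f1 + nu_sig_f2 * phi_ratio) / (sig_a1 + sig_s12)

    # Illustrative two-group constants (cm^-1): nu*Sigma_f, Sigma_a, and
    # fast-to-thermal scattering, in that order of the arguments.
    k_inf = k_infinity_two_group(0.008, 0.135, 0.010, 0.080, 0.018)
    ```

    Nodal methods such as AFEN solve the same two-group equations with leakage on a hexagonal mesh; this sketch only shows the zero-leakage limit.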

  15. Criticality benchmarks for COG: A new point-wise Monte Carlo code

    International Nuclear Information System (INIS)

    Alesso, H.P.; Pearson, J.; Choi, J.S.

    1989-01-01

    COG is a new point-wise Monte Carlo code being developed and tested at LLNL for the Cray computer. It solves the Boltzmann equation for the transport of neutrons, photons, and (in future versions) charged particles. Techniques included in the code for modifying the random walk of particles make COG most suitable for solving deep-penetration (shielding) problems. However, its point-wise cross-sections also make it effective for a wide variety of criticality problems. COG has some similarities to a number of other computer codes used in the shielding and criticality community. These include the Lawrence Livermore National Laboratory (LLNL) codes TART and ALICE, the Los Alamos National Laboratory code MCNP, the Oak Ridge National Laboratory codes 05R, 06R, KENO, and MORSE, the SACLAY code TRIPOLI, and the MAGI code SAM. Each code is a little different in its geometry input and its random-walk modification options. Validating COG consists in part of running benchmark calculations against critical experiments as well as against other codes. The objective of this paper is to present calculational results for a variety of critical benchmark experiments using COG, and to present the resulting code bias. Numerous benchmark calculations have been completed for a wide variety of critical experiments which generally involve both simple and complex physical problems. The COG results reported in this paper have been excellent.

  16. Calculational study of benchmark critical experiments on high-enriched uranyl nitrate solution systems

    International Nuclear Information System (INIS)

    Oh, I.; Rothe, R.E.

    1978-01-01

    Criticality calculations on minimally reflected, concrete-reflected, and plastic-reflected single tanks and on arrays of cylinders reflected by concrete and plastic have been performed using the KENO-IV code with 16-group Hansen-Roach neutron cross sections. The fissile material was high-enriched (93.17% 235U) uranyl nitrate [UO2(NO3)2] solution. Calculated results are compared with those from a benchmark critical experiments program to provide the best possible verification of the calculational technique. The calculated keff values underestimate the critical condition by an average of 1.28% for the minimally reflected single tanks, 1.09% for the concrete-reflected single tanks, 0.60% for the plastic-reflected single tanks, 0.75% for the concrete-reflected arrays of cylinders, and 0.51% for the plastic-reflected arrays of cylinders. More than half of the present comparisons were within 1% of the experimental values, and the worst discrepancy between calculation and experiment was 2.3% in keff for the KENO calculations.

  17. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, A. [Los Alamos National Laboratory (LANL); Macfarlane, R E [Los Alamos National Laboratory (LANL); Mosteller, R D [Los Alamos National Laboratory (LANL); Kiedrowski, B C [Los Alamos National Laboratory (LANL); Frankle, S C [Los Alamos National Laboratory (LANL); Chadwick, M. B. [Los Alamos National Laboratory (LANL); Mcknight, R D [Argonne National Laboratory (ANL); Lell, R M [Argonne National Laboratory (ANL); Palmiotti, G [Idaho National Laboratory (INL); Hiruta, h [Idaho National Laboratory (INL); Herman, Micheal W [Brookhaven National Laboratory (BNL); Arcilla, r [Brookhaven National Laboratory (BNL); Mughabghab, S F [Brookhaven National Laboratory (BNL); Sublet, J C [Culham Science Center, Abington, UK; Trkov, A. [Jozef Stefan Institute, Slovenia; Trumbull, T H [Knolls Atomic Power Laboratory; Dunn, Michael E [ORNL

    2011-01-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected (235)U and (239)Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as (236)U; (238,242)Pu and (241,243)Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical

  18. Critical power prediction by CATHARE2 of the OECD/NRC BFBT benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Lutsanych, Sergii, E-mail: s.lutsanych@ing.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy); Sabotinov, Luben, E-mail: luben.sabotinov@irsn.fr [Institut for Radiological Protection and Nuclear Safety (IRSN), 31 avenue de la Division Leclerc, 92262 Fontenay-aux-Roses (France); D’Auria, Francesco, E-mail: francesco.dauria@dimnp.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy)

    2015-03-15

    Highlights: • We used the CATHARE code to calculate the critical power exercises of the OECD/NRC BFBT benchmark. • We considered both steady-state and transient critical power tests of the benchmark. • We used both the 1D and 3D features of the CATHARE code to simulate the experiments. • Acceptable prediction of the critical power and its location in the bundle is obtained using appropriate modelling. - Abstract: This paper presents an application of the French best-estimate thermal-hydraulic code CATHARE 2 to calculate the critical power and departure from nucleate boiling (DNB) exercises of the international OECD/NRC BWR Fuel Bundle Test (BFBT) benchmark. The assessment activity is performed by comparing the code calculation results with experimental data from the Japanese Nuclear Power Engineering Corporation (NUPEC) available in the framework of the benchmark. Two-phase flow calculations for prediction of the critical power have been carried out for both steady-state and transient cases, using one-dimensional and three-dimensional modelling. Results of the steady-state critical power test calculations have shown the ability of the CATHARE code to reasonably predict the critical power and its location, using appropriate modelling.

  19. Growth and Expansion of the International Criticality Safety Benchmark Evaluation Project and the Newly Organized International Reactor Physics Experiment Evaluation Project

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2007-05-01

    Since ICNC 2003, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) has continued to expand its efforts and broaden its scope. Criticality-alarm/shielding type benchmarks and fundamental physics measurements that are relevant to criticality safety applications are not only included in the scope of the project, but benchmark data are also included in the latest version of the handbook. A considerable number of improvements have been made to the searchable database, DICE, and the criticality-alarm/shielding benchmarks and fundamental physics measurements have been included in the database. There were 12 countries participating in the ICSBEP in 2003. That number has increased to 18 with recent contributions of data and/or resources from Brazil, Czech Republic, Poland, India, Canada, and China. South Africa, Germany, Argentina, and Australia have been invited to participate. Since ICNC 2003, the contents of the “International Handbook of Evaluated Criticality Safety Benchmark Experiments” have increased from 350 evaluations (28,000 pages) containing benchmark specifications for 3070 critical or subcritical configurations to 442 evaluations (over 38,000 pages) containing benchmark specifications for 3957 critical or subcritical configurations, 23 criticality-alarm-placement/shielding configurations with multiple dose points for each, and 20 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications in the 2006 Edition of the ICSBEP Handbook. Approximately 30 new evaluations and 250 additional configurations are expected to be added to the 2007 Edition of the Handbook. Since ICNC 2003, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. Beginning in 1999, the IRPhEP was conducted as a pilot activity by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency.

  20. Benchmarking of HEU metal annuli critical assemblies with internally reflected graphite cylinder

    Directory of Open Access Journals (Sweden)

    Xiaobo Liu

    2017-01-01

    Three experimental configurations of critical assemblies, performed in 1963 at the Oak Ridge Critical Experiment Facility, assembled using HEU metal annuli of three different sizes (15-9 inch, 15-7 inch, and 13-7 inch outer-inner diameters) with an internally reflecting graphite cylinder, are evaluated and benchmarked. The experimental uncertainties, which are 0.00057, 0.00058, and 0.00057 respectively, and the biases to the benchmark models, which are -0.00286, -0.00242, and -0.00168 respectively, were determined, and the experimental benchmark keff results were obtained for both detailed and simplified models. The calculation results for both detailed and simplified models using MCNP6-1.0 and ENDF/B-VII.1 agree well with the benchmark experimental results, with differences of less than 0.2%. The benchmarking results were accepted for inclusion in the ICSBEP Handbook.
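    In ICSBEP practice, the benchmark keff is the experimental value adjusted by the bias of the benchmark model, with uncertainties combined in quadrature. A sketch using the 15-9 inch figures quoted above, and assuming the experimental keff is 1.0000 at delayed critical with no separate bias uncertainty (both assumptions for illustration):

    ```python
    from math import sqrt

    def benchmark_keff(k_exp, sigma_exp, bias, sigma_bias=0.0):
        """Benchmark keff = experimental keff adjusted by the model bias,
        with uncertainties combined in quadrature."""
        return k_exp + bias, sqrt(sigma_exp**2 + sigma_bias**2)

    # 15-9 inch configuration: experimental uncertainty 0.00057, bias -0.00286.
    k_bench, sigma = benchmark_keff(1.0000, 0.00057, -0.00286)
    ```

    The resulting benchmark keff sits below 1.0 because the simplifications in the benchmark model remove a small amount of reactivity relative to the as-built assembly.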

  1. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, A.C.; Herman, M.; Kahler,A.C.; MacFarlane,R.E.; Mosteller,R.D.; Kiedrowski,B.C.; Frankle,S.C.; Chadwick,M.B.; McKnight,R.D.; Lell,R.M.; Palmiotti,G.; Hiruta,H.; Herman,M.; Arcilla,R.; Mughabghab,S.F.; Sublet,J.C.; Trkov,A.; Trumbull,T.H.; Dunn,M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., 'ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data,' Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected {sup 235}U and {sup 239}Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also

  2. Creation of a simplified benchmark model for the neptunium sphere experiment

    International Nuclear Information System (INIS)

    Mosteller, Russell D.; Loaiza, David J.; Sanchez, Rene G.

    2004-01-01

    Although neptunium is produced in significant amounts by nuclear power reactors, its critical mass is not well known. In addition, sizeable uncertainties exist for its cross sections. As an important step toward resolution of these issues, a critical experiment was conducted in 2002 at the Los Alamos Critical Experiments Facility. In the experiment, a 6-kg sphere of 237Np was surrounded by nested hemispherical shells of highly enriched uranium. The shells were required in order to reach a critical condition. Subsequently, a detailed model of the experiment was developed. This model faithfully reproduces the components of the experiment, but it is geometrically complex. Furthermore, the isotopics analysis upon which that model is based omits nearly 1% of the mass of the sphere. A simplified benchmark model has been constructed that retains all of the neutronically important aspects of the detailed model and substantially reduces the computer resources required for the calculation. The reactivity impact of each of the simplifications is quantified, including the effect of the missing mass. A complete set of specifications for the benchmark is included in the full paper. Both the detailed and simplified benchmark models underpredict keff by more than 1% Δk. This discrepancy supports the suspicion that better cross sections are needed for 237Np.
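    The reactivity impact of a simplification is naturally expressed as the change in reactivity between the detailed and simplified models, often quoted in pcm (10⁻⁵). A minimal sketch with hypothetical eigenvalues, not the values from the evaluation:

    ```python
    def reactivity_pcm(k):
        """Reactivity rho = (k - 1) / k, expressed in pcm (1 pcm = 1e-5)."""
        return (k - 1.0) / k * 1e5

    def simplification_worth_pcm(k_detailed, k_simplified):
        """Reactivity impact of a model simplification: the difference in
        reactivity between the simplified and detailed models."""
        return reactivity_pcm(k_simplified) - reactivity_pcm(k_detailed)

    # Hypothetical eigenvalues, for illustration only:
    worth = simplification_worth_pcm(0.98900, 0.98875)
    ```

    Summing such worths over all simplifications gives the total offset between the detailed and benchmark models, which is folded into the benchmark specification.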

  3. Computer simulation of Masurca critical and subcritical experiments. Muse-4 benchmark. Final report

    International Nuclear Information System (INIS)

    2006-01-01

    The efficient and safe management of spent fuel produced during the operation of commercial nuclear power plants is an important issue. In this context, partitioning and transmutation (P and T) of minor actinides and long-lived fission products can play an important role, significantly reducing the burden on geological repositories of nuclear waste and allowing their more effective use. Various systems, including existing reactors, fast reactors and advanced systems, have been considered to optimise the transmutation scheme. Recently, many countries have shown interest in accelerator-driven systems (ADS) due to their potential for transmutation of minor actinides. Much R and D work is still required in order to demonstrate their desired capability as a whole system, and the current analysis methods and nuclear data for minor actinide burners are not as well established as those for conventionally-fuelled systems. Recognizing a need for code and data validation in this area, the Nuclear Science Committee of the OECD/NEA has organised various theoretical benchmarks on ADS burners. Many improvements and clarifications concerning nuclear data and calculation methods have been achieved. However, some significant discrepancies for important parameters are not fully understood and still require clarification. Therefore, this international benchmark based on MASURCA experiments, which were carried out under the auspices of the EC 5th Framework Programme, was launched in December 2001 in co-operation with the CEA (France) and CIEMAT (Spain). The benchmark model was designed to compare simulation predictions based on available codes and nuclear data libraries with experimental data related to TRU transmutation, criticality constants and the time evolution of the neutronic flux following source variation, within liquid metal fast subcritical systems. A total of 16 different institutions participated in this first experiment-based benchmark, providing 34 solutions.
The large number

  4. The International Criticality Safety Benchmark Evaluation Project on the Internet

    International Nuclear Information System (INIS)

    Briggs, J.B.; Brennan, S.A.; Scott, L.

    2000-01-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in October 1992 by the US Department of Energy's (DOE's) defense programs and is documented in the Transactions of numerous American Nuclear Society and International Criticality Safety Conferences. The work of the ICSBEP is documented as an Organization for Economic Cooperation and Development (OECD) handbook, International Handbook of Evaluated Criticality Safety Benchmark Experiments. The ICSBEP Internet site was established in 1996 and its address is http://icsbep.inel.gov/icsbep. A copy of the ICSBEP home page is shown in Fig. 1. The ICSBEP Internet site contains five primary links. Internal sublinks to other relevant sites are also provided within the ICSBEP Internet site. A brief description of each of the five primary ICSBEP Internet site links is given

  5. Evaluation of the concrete shield compositions from the 2010 criticality accident alarm system benchmark experiments at the CEA Valduc SILENE facility

    International Nuclear Information System (INIS)

    Miller, Thomas Martin; Celik, Cihangir; Dunn, Michael E; Wagner, John C; McMahan, Kimberly L; Authier, Nicolas; Jacquet, Xavier; Rousseau, Guillaume; Wolff, Herve; Savanier, Laurence; Baclet, Nathalie; Lee, Yi-kang; Trama, Jean-Christophe; Masse, Veronique; Gagnier, Emmanuel; Naury, Sylvie; Blanc-Tranchant, Patrick; Hunter, Richard; Kim, Soon; Dulik, George Michael; Reynolds, Kevin H.

    2015-01-01

    In October 2010, a series of benchmark experiments was conducted at the French Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA) Valduc SILENE facility. These experiments were a joint effort between the United States Department of Energy Nuclear Criticality Safety Program and the CEA. The purpose of these experiments was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems. This series consisted of three single-pulsed experiments with the SILENE reactor. For the first experiment, the reactor was bare (unshielded), whereas in the second and third experiments, it was shielded by lead and polyethylene, respectively. The polyethylene shield of the third experiment had a cadmium liner on its internal and external surfaces, which was located vertically near the fuel region of SILENE. During each experiment, several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor. Nearly half of the foils and TLDs had additional high-density magnetite concrete, high-density barite concrete, standard concrete, and/or BoroBond shields. CEA Saclay provided all the concrete, and the US Y-12 National Security Complex provided the BoroBond. Measurement data from the experiments were published at the 2011 International Conference on Nuclear Criticality (ICNC 2011) and the 2013 Nuclear Criticality Safety Division (NCSD 2013) topical meeting. Preliminary computational results for the first experiment were presented in the ICNC 2011 paper, which showed poor agreement between the computational results and the measured values of the foils shielded by concrete. Recently, the hydrogen content, boron content, and density of these concrete shields were further investigated within the constraints of the previously available data. New computational results for the first experiment are now available

  6. Criticality benchmark guide for light-water-reactor fuel in transportation and storage packages

    International Nuclear Information System (INIS)

    Lichtenwalter, J.J.; Bowman, S.M.; DeHart, M.D.; Hopper, C.M.

    1997-03-01

    This report is designed as a guide for performing criticality benchmark calculations for light-water-reactor (LWR) fuel applications. The guide provides documentation of 180 criticality experiments with geometries, materials, and neutron interaction characteristics representative of transportation packages containing LWR fuel or uranium oxide pellets or powder. These experiments should benefit the U.S. Nuclear Regulatory Commission (NRC) staff and licensees in the validation of computational methods used in LWR fuel storage and transportation analyses. The experiments are classified by key parameters such as enrichment, water/fuel volume, hydrogen-to-fissile ratio (H/X), and lattice pitch. Groups of experiments with common features such as separator plates, shielding walls, and soluble boron are also identified. In addition, a sample validation using these experiments and a statistical analysis of the results are provided. Recommendations for selecting suitable experiments and for determining calculational bias and uncertainty are presented as part of this benchmark guide
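The bias and uncertainty determination mentioned above can be illustrated with a deliberately simplified sketch. The k-eff values below are hypothetical, and the flat 0.05 administrative margin is an assumption for illustration, not the statistical method prescribed by the guide:

```python
import statistics

# Hypothetical calculated k-eff values for a set of benchmark critical
# experiments (each experiment's true k-eff is 1.0 by construction).
k_calc = [0.9978, 1.0012, 0.9969, 0.9991, 1.0004, 0.9983, 0.9960, 1.0021]

mean_k = statistics.mean(k_calc)
sigma = statistics.stdev(k_calc)   # sample standard deviation

bias = mean_k - 1.0                # negative bias -> code underpredicts
# A simple upper subcritical limit: credit only a negative bias, subtract
# two standard deviations and a flat administrative margin of 0.05.
usl = 1.0 + min(bias, 0.0) - 2.0 * sigma - 0.05

print(f"bias = {bias:+.4f}, sigma = {sigma:.4f}, USL = {usl:.4f}")
```

Real validations weight each result by its experimental and Monte Carlo uncertainties and test the set for normality and trends before setting a limit.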

  7. Benchmark experiments at ASTRA facility on definition of space distribution of 235U fission reaction rate

    International Nuclear Information System (INIS)

    Bobrov, A. A.; Boyarinov, V. F.; Glushkov, A. E.; Glushkov, E. S.; Kompaniets, G. V.; Moroz, N. P.; Nevinitsa, V. A.; Nosov, V. I.; Smirnov, O. N.; Fomichenko, P. A.; Zimin, A. A.

    2012-01-01

    Results of critical experiments performed at five configurations of the ASTRA facility, modeling high-temperature helium-cooled graphite-moderated reactors, are presented. The experiments on the spatial distribution of the 235 U fission reaction rate, performed at four of these five configurations, are described in more detail. Analysis of the available information showed that all of the criticality experiments at these five configurations are acceptable for use as critical benchmark experiments, and that all of the experiments on the spatial distribution of the 235 U fission reaction rate are acceptable for use as physical benchmark experiments. (authors)

  8. Completion of the first approach to critical for the seven percent critical experiment

    International Nuclear Information System (INIS)

    Barber, A. D.; Harms, G. A.

    2009-01-01

    The first approach-to-critical experiment in the Seven Percent Critical Experiment series was recently completed at Sandia. This experiment is part of the Seven Percent Critical Experiment series, which will provide new critical and reactor physics benchmarks for fuel enrichments greater than five weight percent. The inverse multiplication method was used to determine the state of the system during the course of the experiment; it indicated that the critical experiment went slightly supercritical with 1148 fuel elements in the fuel array. The experiment is described and the results are presented. (authors)
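The inverse multiplication (1/M) method can be sketched numerically: as fuel is added, the source multiplication M = C/C0 grows, so 1/M falls toward zero, and extrapolating 1/M to zero predicts the critical loading. The count data below are hypothetical, not the Sandia measurements:

```python
# Hypothetical detector counts recorded during an approach to critical,
# at increasing fuel-element loadings n (source-only count at n = 0).
loadings = [0, 200, 400, 600, 800, 1000]
counts = [1000.0, 1210.0, 1530.0, 2100.0, 3400.0, 8600.0]

inv_m = [counts[0] / c for c in counts]   # 1/M = C0 / C(n)

# Linearly extrapolate the last two points to 1/M = 0 to predict the
# critical loading; in practice each new point refines the prediction.
n1, n2 = loadings[-2], loadings[-1]
y1, y2 = inv_m[-2], inv_m[-1]
slope = (y2 - y1) / (n2 - n1)
n_critical = n2 - y2 / slope

print(f"predicted critical loading ~ {n_critical:.0f} fuel elements")
```

Because the next fuel addition is chosen to stay safely below the extrapolated critical point, the prediction is re-evaluated after every loading step.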

  9. Validation of the Continuous-Energy Monte Carlo Criticality-Safety Analysis System MVP and JENDL-3.2 Using the Internationally Evaluated Criticality Benchmarks

    International Nuclear Information System (INIS)

    Mitake, Susumu

    2003-01-01

    Validation of the continuous-energy Monte Carlo criticality-safety analysis system, comprising the MVP code and neutron cross sections based on JENDL-3.2, was examined using benchmarks evaluated in the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'. Eight experiments (116 configurations) for the plutonium solution and plutonium-uranium mixture systems performed at Valduc, Battelle Pacific Northwest Laboratories, and other facilities were selected and used in the studies. The averaged multiplication factors calculated with MVP and MCNP-4B using the same neutron cross-section libraries based on JENDL-3.2 were in good agreement. Based on methods provided in the Japanese nuclear criticality-safety handbook, the estimated criticality lower-limit multiplication factors to be used as a subcriticality criterion for the criticality-safety evaluation of nuclear facilities were obtained. The analysis proved the applicability of the MVP code to the criticality-safety analysis of nuclear fuel facilities, particularly to the analysis of systems fueled with plutonium and in homogeneous and thermal-energy conditions

  10. Analysis and evaluation of critical experiments for validation of neutron transport calculations

    International Nuclear Information System (INIS)

    Bazzana, S.; Blaumann, H.; Marquez Damian, J.I.

    2009-01-01

    The calculation schemes, computational codes and nuclear data used in neutronic design require validation to obtain reliable results. In the nuclear criticality safety field this reliability also translates into a higher level of safety in procedures involving fissile material. The International Criticality Safety Benchmark Evaluation Project is an OECD/NEA activity led by the United States, in which participants from over 20 countries evaluate and publish criticality safety benchmarks. The product of this project is a set of benchmark experiment evaluations that are published annually in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. With the recent participation of Argentina, this information is now available for use by the neutron calculation and criticality safety groups in Argentina. This work presents the methodology used for the evaluation of experimental data, some results obtained by the application of these methods, and some examples of the data available in the Handbook.

  11. GROWTH OF THE INTERNATIONAL CRITICALITY SAFETY AND REACTOR PHYSICS EXPERIMENT EVALUATION PROJECTS

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; John D. Bess; Jim Gulliford

    2011-09-01

    Since the International Conference on Nuclear Criticality Safety (ICNC) 2007, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) have continued to expand their efforts and broaden their scope. Eighteen countries participated in the ICSBEP in 2007. Now, there are 20, with recent contributions from Sweden and Argentina. The IRPhEP has also expanded from eight contributing countries in 2007 to 16 in 2011. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' have increased from 442 evaluations (38,000 pages), containing benchmark specifications for 3955 critical or subcritical configurations, to 516 evaluations (nearly 55,000 pages), containing benchmark specifications for 4405 critical or subcritical configurations, in the 2010 edition of the ICSBEP Handbook. The contents of the Handbook have also increased from 21 to 24 criticality-alarm-placement/shielding configurations with multiple dose points for each, and from 20 to 200 configurations categorized as fundamental physics measurements relevant to criticality safety applications. Approximately 25 new evaluations and 150 additional configurations are expected to be added to the 2011 edition of the Handbook. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Reactor Physics Benchmark Experiments' have increased from 16 different experimental series performed at 12 different reactor facilities to 53 experimental series performed at 30 different reactor facilities in the 2011 edition of the Handbook. Considerable effort has also been made to improve the functionality of the searchable database, DICE (Database for the International Criticality Benchmark Evaluation Project), and to verify the accuracy of the data contained therein. DICE will be discussed in separate papers at ICNC 2011. The status of the

  12. Benchmark critical experiments on low-enriched uranium oxide systems with H/U = 0.77

    International Nuclear Information System (INIS)

    Tuck, G.; Oh, I.

    1979-08-01

    Ten benchmark experiments were performed at the Critical Mass Laboratory at Rockwell International's Rocky Flats Plant, Golden, Colorado, for the US Nuclear Regulatory Commission. They provide accurate criticality data for low-enriched damp uranium oxide (U 3 O 8 ) systems. The core studied consisted of 152 mm cubical aluminum cans containing an average of 15,129 g of low-enriched (4.46% 235 U) uranium oxide compacted to a density of 4.68 g/cm 3 and with an H/U atomic ratio of 0.77. One hundred twenty five (125) of these cans were arranged in an approx. 770 mm cubical array. Since the oxide alone cannot be made critical in an array of this size, an enriched (approx. 93% 235 U) metal or solution driver was used to achieve criticality. Measurements are reported for systems having the least practical reflection and for systems reflected by approx. 254-mm-thick concrete or plastic. Under the three reflection conditions, the mass of the uranium metal driver ranged from 29.87 kg to 33.54 kg for an oxide core of 1864.6 kg. For an oxide core of 1824.9 kg, the weight of the high concentration (351.2 kg U/m 3 ) solution driver varied from 14.07 kg to 16.14 kg, and the weight of the low concentration (86.4 kg U/m 3 ) solution driver from 12.4 kg to 14.0 kg
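As a rough illustration of what the quoted H/U atomic ratio of 0.77 implies for residual moisture, the sketch below back-calculates the water mass per can from the per-can oxide mass given in the abstract. The molar masses are approximate, and the effect of enrichment on the uranium molar mass is ignored:

```python
# Approximate molar masses in g/mol (natural-uranium value used for U).
M_U, M_O, M_H2O = 238.0, 16.0, 18.015
M_U3O8 = 3 * M_U + 8 * M_O     # ~842 g/mol for U3O8

m_u3o8 = 15129.0               # g of oxide per can (from the abstract)
h_per_u = 0.77                 # target H/U atomic ratio

mol_U = 3 * m_u3o8 / M_U3O8    # moles of U atoms per can
mol_H = h_per_u * mol_U        # moles of H atoms needed for H/U = 0.77
m_water = (mol_H / 2) * M_H2O  # grams of residual moisture per can

print(f"~{m_water:.0f} g of water per 15.1 kg can gives H/U = {h_per_u}")
```

This works out to a few hundred grams of water per can, i.e., the "damp" oxide of the title rather than a deliberately moderated system.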

  13. Use of Sensitivity and Uncertainty Analysis to Select Benchmark Experiments for the Validation of Computer Codes and Data

    International Nuclear Information System (INIS)

    Elam, K.R.; Rearden, B.T.

    2003-01-01

    Sensitivity and uncertainty analysis methodologies under development at Oak Ridge National Laboratory were applied to determine whether existing benchmark experiments adequately cover the area of applicability for the criticality code and data validation of PuO 2 and mixed-oxide (MOX) powder systems. The study examined three PuO 2 powder systems and four MOX powder systems that would be useful for establishing mass limits for a MOX fuel fabrication facility. Using traditional methods to choose experiments for criticality analysis validation, 46 benchmark critical experiments were identified as applicable to the PuO 2 powder systems. However, only 14 experiments were thought to be within the area of applicability for dry MOX powder systems. The applicability of 318 benchmark critical experiments, including the 60 experiments initially identified, was assessed. Each benchmark and powder system was analyzed using the Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) one-dimensional (TSUNAMI-1D) or TSUNAMI three-dimensional (TSUNAMI-3D) sensitivity analysis sequences, which will be included in the next release of the SCALE code system. These sensitivity data and cross-section uncertainty data were then processed with TSUNAMI-IP to determine the correlation of each application to each experiment in the benchmarking set. Correlation coefficients are used to assess the similarity between systems and determine the applicability of one system for the code and data validation of another. The applicability of most of the experiments identified using traditional methods was confirmed by the TSUNAMI analysis. In addition, some PuO 2 and MOX powder systems were determined to be within the area of applicability of several other benchmarks that would not have been considered using traditional methods. Therefore, the number of benchmark experiments useful for the validation of these systems exceeds the number previously expected.
The TSUNAMI analysis
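The correlation coefficient underlying this kind of applicability assessment can be sketched as a covariance-weighted cosine between the sensitivity vectors of two systems: values near 1.0 indicate that the same cross-section uncertainties drive both systems. The three-group sensitivities and covariance matrix below are toy values, not SCALE/TSUNAMI output:

```python
import numpy as np

def c_k(s_app, s_exp, cov):
    """Correlation of two systems' k-eff uncertainties, given their
    sensitivity vectors and a shared cross-section covariance matrix."""
    var_a = s_app @ cov @ s_app
    var_e = s_exp @ cov @ s_exp
    return (s_app @ cov @ s_exp) / np.sqrt(var_a * var_e)

# Toy 3-group sensitivities (dk/k per unit dσ/σ) and a symmetric
# relative covariance matrix for the shared nuclear data.
cov = np.array([[4.0, 1.0, 0.0],
                [1.0, 9.0, 2.0],
                [0.0, 2.0, 16.0]]) * 1e-4
s_application = np.array([0.10, 0.25, 0.05])
s_experiment  = np.array([0.12, 0.22, 0.04])

print(f"c_k = {c_k(s_application, s_experiment, cov):.3f}")
```

A cutoff (commonly around 0.8-0.9 in the literature) is then applied to decide which benchmarks count toward the validation of a given application.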

  14. Status of international benchmark experiment for effective delayed neutron fraction (βeff)

    Energy Technology Data Exchange (ETDEWEB)

    Okajima, S.; Sakurai, T.; Mukaiyama, T. [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-03-01

    To improve the prediction accuracy of βeff, a program of international benchmark experiments (Beta Effect Reactor Experiment for a New International Collaborative Evaluation: BERNICE) was planned. The program is composed of two parts: BERNICE-MASURCA and BERNICE-FCA. The former was carried out in the fast critical facility MASURCA of the CEA, France, between 1993 and 1994. The latter started in the FCA at JAERI in 1995 and is still ongoing. In these benchmark experiments, various experimental techniques have been applied for in-pile measurements of βeff. The accuracy of the measurements was better than 3%. (author)

  15. Criticality Experiments Performed in Saclay and Valduc Centers, France (1958-2002)

    International Nuclear Information System (INIS)

    Barbry, F.; Grivot, P.; Girault, E.; Fouillaud, P.; Cousinou, P.; Poullot, G.; Anno, J.; Bordy, J.M.; Doutriaux, D.

    2003-01-01

    Since 1958, the Commissariat a l'Energie Atomique and then the Institut de Radioprotection et de Surete Nucleaire (previously the Institut de Protection et de Surete Nucleaire) have carried out criticality experiments first in Saclay and then in the Valduc criticality laboratory. This paper is a survey of the programs conducted during the last 45 yr with the different apparatuses. This paper also gives information about plans for the future. Programs are presented following the chronology and the International Criticality Safety Benchmark Evaluation Project classification. Among the numerous series of experiments, now 22 series (corresponding to 407 configurations) have been included in the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'

  16. Benchmark assemblies of the Los Alamos critical assemblies facility

    International Nuclear Information System (INIS)

    Dowdy, E.J.

    1986-01-01

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described. (author)

  17. Benchmark assemblies of the Los Alamos Critical Assemblies Facility

    International Nuclear Information System (INIS)

    Dowdy, E.J.

    1985-01-01

    Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described

  19. Criticality reference benchmark calculations for burnup credit using spent fuel isotopics

    International Nuclear Information System (INIS)

    Bowman, S.M.

    1991-04-01

    To date, criticality analyses performed in support of the certification of spent fuel casks in the United States do not take credit for the reactivity reduction that results from burnup. By taking credit for the fuel burnup, commonly referred to as ''burnup credit,'' the fuel loading capacity of these casks can be increased. One of the difficulties in implementing burnup credit in criticality analyses is that there have been no critical experiments performed with spent fuel which can be used for computer code validation. In lieu of that, a reference problem set of fresh fuel critical experiments which model various conditions typical of light water reactor (LWR) transportation and storage casks has been identified and used in the validation of SCALE-4. This report documents the use of this same problem set to perform spent fuel criticality benchmark calculations by replacing the actual fresh fuel isotopics from the experiments with six different sets of calculated spent fuel isotopics. The SCALE-4 modules SAS2H and CSAS4 were used to perform the analyses. These calculations do not model actual critical experiments. The calculated k-effectives are not supposed to equal unity and will vary depending on the initial enrichment and burnup of the calculated spent fuel isotopics. 12 refs., 11 tabs

  20. The impact and applicability of critical experiment evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Brewer, R. [Los Alamos National Lab., NM (United States)

    1997-06-01

    This paper very briefly describes a project to evaluate previously performed critical experiments. The evaluation is intended for use by criticality safety engineers to verify calculations, and may also be used to identify data which need further investigation. The evaluation process is briefly outlined; the accepted benchmark critical experiments will be used as a standard for verification and validation. The end result of the project will be a comprehensive reference document.

  1. Criticality safety benchmarking of PASC-3 and ECNJEF1.1

    International Nuclear Information System (INIS)

    Li, J.

    1992-09-01

    To validate the PASC-3 code system and the ECNJEF1.1 multigroup cross-section library on various applications, many benchmarks are required. This report presents the results of criticality safety benchmarking for five calculational and four experimental benchmarks. These benchmarks are related to transport packages for fissile materials such as spent fuel. The fissile nuclides in these benchmarks are 235 U and 239 Pu. The modules of PASC-3 used for the calculations are BONAMI, NITAWL and KENO.5A. The final results for the experimental benchmarks agree well with the experimental data. For the calculational benchmarks, the results presented here are in reasonable agreement with the results of other investigations. (author). 8 refs.; 20 figs.; 5 tabs

  2. MCNP calculations for criticality-safety benchmarks with ENDF/B-V and ENDF/B-VI libraries

    International Nuclear Information System (INIS)

    Iverson, J.L.; Mosteller, R.D.

    1995-01-01

    The MCNP Monte Carlo code, in conjunction with its continuous-energy ENDF/B-V and ENDF/B-VI cross-section libraries, has been benchmarked against results from 27 different critical experiments. The predicted values of k eff are in excellent agreement with the benchmarks, except for the ENDF/B-V results for solutions of plutonium nitrate and, to a lesser degree, for the ENDF/B-V and ENDF/B-VI results for a bare sphere of 233 U

  3. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. Excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio obtained with the BUGLE-93 library for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used
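The quoted averages are simple sample statistics over the dosimeter set. A sketch with illustrative C/M values (hypothetical, not the report's measured ratios):

```python
import statistics

# Illustrative calculated-to-measured (C/M) equivalent-fission-flux
# ratios for a set of dosimeters; values are hypothetical.
cm = [0.91, 0.95, 0.90, 0.93, 0.96, 0.92, 0.94, 0.89]

avg = statistics.mean(cm)    # arithmetic average C/M
std = statistics.stdev(cm)   # scatter across the dosimeter set

print(f"average C/M = {avg:.3f} +/- {std:.3f}")
```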

  4. Overview of Experiments for Physics of Fast Reactors from the International Handbooks of Evaluated Criticality Safety Benchmark Experiments and Evaluated Reactor Physics Benchmark Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bess, J. D.; Briggs, J. B.; Gulliford, J.; Ivanova, T.; Rozhikhin, E. V.; Semenov, M. Yu.; Tsibulya, A. M.; Koscheev, V. N.

    2017-07-01

    The study of the physics of fast reactors has traditionally relied on the experiments presented in the handbook of the Cross Section Evaluation Working Group (ENDF-202), issued by Brookhaven National Laboratory in 1974. That handbook presents simplified homogeneous models of the experiments together with the relevant experimental data, as amended. The Nuclear Energy Agency of the Organisation for Economic Cooperation and Development coordinates two international projects on the collection, evaluation and documentation of experimental data: the International Criticality Safety Benchmark Evaluation Project (1994) and the International Reactor Physics Experiment Evaluation Project (since 2005). These projects produce the international handbooks of evaluated critical (ICSBEP Handbook) and reactor physics (IRPhEP Handbook) experiments, which are updated every year. The handbooks present detailed models of the experiments with minimal adjustments; such models are of particular interest for modern calculation codes. The handbooks contain a large number of experiments suitable for the study of fast reactor physics. Many of these experiments were performed at specialized critical facilities, such as BFS (Russia), ZPR and ZPPR (USA), and ZEBRA (UK), and at the experimental reactors JOYO (Japan) and FFTF (USA). Other experiments, such as compact metal assemblies, are also of interest for fast reactor physics; they were carried out on general-purpose critical facilities at Russian institutes (VNIITF and VNIIEF) and in the US (LANL, LLNL, and others).
Also worth mentioning

  5. An improved benchmark model for the Big Ten critical assembly - 021

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    2010-01-01

    A new benchmark specification is developed for the BIG TEN uranium critical assembly. The assembly has a fast spectrum, and its core contains approximately 10 wt.% enriched uranium. Detailed specifications for the benchmark are provided, and results from the MCNP5 Monte Carlo code using a variety of nuclear-data libraries are given for this benchmark and two others. (authors)

  6. Benchmarking the new JENDL-4.0 library on criticality experiments of a research reactor with oxide LEU (20 w/o) fuel, light water moderator and beryllium reflectors

    International Nuclear Information System (INIS)

    Liem, Peng Hong; Sembiring, Tagor Malem

    2012-01-01

    Highlights: ► Benchmark calculations of the new JENDL-4.0 library. ► Thermal research reactor with oxide LEU fuel, H 2 O moderator and Be reflector. ► JENDL-4.0 library shows better C/E values for criticality evaluations. - Abstract: Benchmark calculations of the new JENDL-4.0 library on the criticality experiments of a thermal research reactor with oxide low-enriched uranium (LEU, 20 w/o) fuel, light water moderator and beryllium reflector (RSG GAS) have been conducted using a continuous-energy Monte Carlo code, MVP-II. The JENDL-4.0 library shows better C/E values compared to the former JENDL-3.3 library and other widely used recent libraries (ENDF/B-VII.0 and JEFF-3.1).

  7. Performance assessment of new neutron cross section libraries using MCNP code and some critical benchmarks

    International Nuclear Information System (INIS)

    Bakkari, B El; Bardouni, T El.; Erradi, L.; Chakir, E.; Meroun, O.; Azahra, M.; Boukhal, H.; Khoukhi, T El.; Htet, A.

    2007-01-01

    Full text: New releases of nuclear data files have been made available during recent years. The reference MCNP5 code (1) for Monte Carlo calculations is usually distributed with only one standard nuclear data library for neutron interactions, based on ENDF/B-VI. The main goal of this work is to process new neutron cross-section libraries in ACE continuous-energy format for the MCNP code, based on the most recent data files made available to the scientific community: ENDF/B-VII.b2, ENDF/B-VI (release 8), JEFF-3.0, JEFF-3.1, JENDL-3.3 and JEF-2.2. In our data treatment, we used the modular NJOY system (release 99.9) (2) in conjunction with its most recent updates. The performance of the processed pointwise cross-section libraries was assessed by means of criticality predictions and analysis of other integral parameters for a set of reactor benchmarks. Almost all of the analyzed benchmarks were taken from the OECD International Handbook of Evaluated Criticality Safety Benchmark Experiments (3); some revised benchmarks were taken from references (4,5). These benchmarks use Pu-239 or U-235 as the main fissionable material in different forms and enrichments, and cover various geometries. Monte Carlo calculations were performed in 3D with maximum detail of the benchmark descriptions, and the S(α,β) cross-section treatment was adopted in all thermal cases. The resulting one-standard-deviation confidence interval for the eigenvalue is typically ±13 to ±20 pcm

  8. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    Science.gov (United States)

    Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki

    2017-09-01

    There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments seemed vaguely to validate the nuclear data below 14 MeV. However, no precise studies exist now. The author's group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to generally estimate the performance of benchmark experiments. As a result of thought experiments with a point detector, the sensitivity for a discrepancy appearing in the benchmark analysis is "equally" due not only to the contribution directly conveyed to the detector, but also to the indirect contribution of the neutrons (named (A)) producing the neutrons that convey the contribution, the indirect contribution of the neutrons (B) producing the neutrons (A), and so on. From this concept, it would become clear from a sensitivity analysis in advance how well, and for which energies, nuclear data could be benchmarked with a benchmark experiment.

  9. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    Directory of Open Access Journals (Sweden)

    Murata Isao

    2017-01-01

    Full Text Available There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments seemed vaguely to validate the nuclear data below 14 MeV. However, no precise studies exist now. The author’s group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to generally estimate the performance of benchmark experiments. As a result of thought experiments with a point detector, the sensitivity for a discrepancy appearing in the benchmark analysis is “equally” due not only to the contribution directly conveyed to the detector, but also to the indirect contribution of the neutrons (named (A)) producing the neutrons that convey the contribution, the indirect contribution of the neutrons (B) producing the neutrons (A), and so on. From this concept, it would become clear from a sensitivity analysis in advance how well, and for which energies, nuclear data could be benchmarked with a benchmark experiment.

  10. Benchmarking ENDF/B-VII.0

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2006-01-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and the Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and Teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0.
The average over 257

  11. IRPhEP-handbook, International Handbook of Evaluated Reactor Physics Benchmark Experiments

    International Nuclear Information System (INIS)

    Sartori, Enrico; Blair Briggs, J.

    2008-01-01

    experimental series that were performed at 17 different reactor facilities. The Handbook is organized in a manner that allows easy inclusion of additional evaluations, as they become available. Additional evaluations are in progress and will be added to the handbook periodically. Content: FUND - Fundamental; GCR - Gas Cooled (Thermal) Reactor; HWR - Heavy Water Moderated Reactor; LMFR - Liquid Metal Fast Reactor; LWR - Light Water Moderated Reactor; PWR - Pressurized Water Reactor; VVER - VVER Reactor; Evaluations published as drafts 2 - Related Information: International Criticality Safety Benchmark Evaluation Project (ICSBEP); IRPHE/B and W-SS-LATTICE, Spectral Shift Reactor Lattice Experiments; IRPHE-JAPAN, Reactor Physics Experiments carried out in Japan; IRPHE/JOYO MK-II, JOYO MK-II core management and characteristics database; IRPhE/RRR-SEG, Reactor Physics Experiments from Fast-Thermal Coupled Facility; IRPHE-SNEAK, KFK SNEAK Fast Reactor Experiments, Primary Documentation; IRPhE/STEK, Reactor Physics Experiments from Fast-Thermal Coupled Facility; IRPHE-ZEBRA, AEEW Fast Reactor Experiments, Primary Documentation; IRPHE-DRAGON-DPR, OECD High Temperature Reactor Dragon Project, Primary Documents; IRPHE-ARCH-01, Archive of HTR Primary Documents; IRPHE/AVR, AVR High Temperature Reactor Experience, Archival Documentation; IRPHE-KNK-II-ARCHIVE, KNK-II fast reactor documents, power history and measured parameters; IRPhE/BERENICE, effective delayed neutron fraction measurements; IRPhE-TAPIRO-ARCHIVE, fast neutron source reactor primary documents, reactor physics experiments. The International Handbook of Evaluated Reactor Physics Benchmark Experiments was prepared by a working party comprised of experienced reactor physics personnel from Belgium, Brazil, Canada, P.R. of China, Germany, Hungary, Japan, Republic of Korea, Russian Federation, Switzerland, United Kingdom, and the United States of America.
The IRPhEP Handbook is available to authorised requesters from the

  12. Uranium systems to enhance benchmarks for use in the verification of criticality safety computer models. Final report, February 16, 1990--December 31, 1994

    International Nuclear Information System (INIS)

    Busch, R.D.

    1995-01-01

    Dr. Robert Busch of the Department of Chemical and Nuclear Engineering was the principal investigator on this project, with technical direction provided by the staff of the Nuclear Criticality Safety Group at Los Alamos. During the period of the contract, he had a number of graduate and undergraduate students working on subtasks. The objective of this work was to develop information on uranium systems to enhance benchmarks for use in the verification of criticality safety computer models. During the first year of this project, most of the work was focused on setting up the SUN SPARC-1 workstation and acquiring the literature describing the critical experiments. By August 1990, the workstation was operational with the current version of TWODANT loaded on the system. The MCNP version 4 tape was made available from Los Alamos late in 1990. Various documents were acquired which provide the initial descriptions of the critical experiments under consideration as benchmarks. The next four years were spent working on various benchmark projects. A number of publications and presentations were made on this material. These are briefly discussed in this report

  13. Critical experiments at Pacific Northwest Laboratory

    International Nuclear Information System (INIS)

    Clayton, E.D.; Bierman, S.R.

    1984-01-01

    After a short description of the facility, a brief listing of the principal types of fuel forms and assembly geometries is provided. A number of experiments have recently been performed on plain fissionable units, or isolated assemblies of single units, that include measurements on solutions composed of Pu-U mixtures and critical experiment data on lattices of low enriched uranium in water. Experiments have been performed on planar arrays of containers with Pu solutions because of the lack of data in this field concerning the safe storage of nuclear fuel; others have been conducted on arrays of low enriched U lattice assemblies. Neutronic measurements to date have shown they can be used to provide additional benchmark data for improvement and validation of criticality codes. Studies have previously been made to ascertain the need for critical experiments in support of fuel recycle operations. The result of an effort to update the list of needed critical experiments is summarized in this section. Experiments are listed in support of uranium based fuels and fast breeder reactor fuels. An effort is made to identify those areas within the fuel cycle wherein the critical experiment data would be applied and to identify the experiments (and data) required to fulfill the needs in each of these areas. The type and form of fuel on which the data would be obtained also are identified. In presenting this information, no attempt is made to describe the experiments in detail, or to define the actual number of critical experiments that might be needed to provide the required data

  14. RB reactor benchmark cores

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

    A selected set of the RB reactor benchmark cores is presented in this paper. The first results of validation of the well-known Monte Carlo MCNP™ code and the adjoining neutron cross-section libraries are given. They confirm the idea behind the proposal of the new U-D2O criticality benchmark system and support the intention to include this system in the next edition of the recent OECD/NEA project, the International Handbook of Evaluated Criticality Safety Benchmark Experiments, in the near future. (author)

  15. Catalog and history of the experiments of criticality Saclay (1958-1964) Valduc / Building 10 (1964-2003)

    International Nuclear Information System (INIS)

    Poullot, G.; Dumont, V.; Anno, J.; Cousinou, P.; Grivot, P.; Girault, E.; Fouillaud, P.; Barbry, F.

    2003-01-01

    The International Criticality Safety Benchmark Evaluation Project (I.C.S.B.E.P.) aims to supply the international community with criticality benchmark experiments of certified quality, used to guarantee the qualification of criticality calculation codes. The following have been defined: a structure for classifying the experiments, a standard presentation format, and a working structure with evaluation, internal and external checks, and presentation in plenary session. After a favourable opinion from the working group, the synthesis document, called an evaluation, is integrated into the general I.C.S.B.E.P. report. (N.C.)

  16. Benchmark experiments to test plutonium and stainless steel cross sections. Topical report

    International Nuclear Information System (INIS)

    Jenquin, U.P.; Bierman, S.R.

    1978-06-01

    The Nuclear Regulatory Commission (NRC) commissioned Battelle, Pacific Northwest Laboratory (PNL) to ascertain the accuracy of the neutron cross sections for the isotopes of plutonium and the constituents of stainless steel, and to determine whether improvements can be made in their application to criticality safety analysis. NRC's particular area of interest is the transportation of light-water reactor spent fuel assemblies. The project was divided into two tasks. The first task was to define a set of integral experimental measurements (benchmarks). The second task is to use these benchmarks in neutronics calculations such that the accuracy of ENDF/B-IV plutonium and stainless steel cross sections can be assessed. The results of the first task are given in this report. A set of integral experiments most pertinent to testing the cross sections has been identified, and the code input data for calculating each experiment have been developed

  17. Criticality Benchmark Results Using Various MCNP Data Libraries

    International Nuclear Information System (INIS)

    Frankle, Stephanie C.

    1999-01-01

    A suite of 86 criticality benchmarks has been recently implemented in MCNP™ as part of the nuclear data validation effort. These benchmarks have been run using two sets of MCNP continuous-energy neutron data: ENDF/B-VI based data through Release 2 (ENDF60) and the ENDF/B-V based data. New evaluations were completed for ENDF/B-VI for a number of the important nuclides such as the isotopes of H, Be, C, N, O, Fe, Ni, 235,238U, 237Np, and 239,240Pu. When examining the results of these calculations for the five major categories of 233U, intermediate-enriched 235U (IEU), highly enriched 235U (HEU), 239Pu, and mixed metal assemblies, we find the following: (1) The new evaluations for 9Be, 12C, and 14N show no net effect on keff; (2) There is a consistent decrease in keff for all of the solution assemblies for ENDF/B-VI due to 1H and 16O, moving keff further from the benchmark value for uranium solutions and closer to the benchmark value for plutonium solutions; (3) keff decreased for the ENDF/B-VI Fe isotopic data, moving the calculated keff further from the benchmark value; (4) keff decreased for the ENDF/B-VI Ni isotopic data, moving the calculated keff closer to the benchmark value; (5) The W data remained unchanged and tended to calculate slightly higher than the benchmark values; (6) For metal uranium systems, the ENDF/B-VI data for 235U tend to decrease keff while the 238U data tend to increase keff. The net result depends on the energy spectrum and material specifications for the particular assembly; (7) For more intermediate-energy systems, the changes in the 235,238U evaluations tend to increase keff. For the mixed graphite and normal uranium-reflected assembly, a large increase in keff due to changes in the 238U evaluation moved the calculated keff much closer to the benchmark value.
    (8) There is little change in keff for the uranium solutions due to the new 235,238U evaluations; and (9) There is little change in keff
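Library-to-library comparisons of the kind summarized in this record are usually reported as k-eff shifts in pcm, with the statistical uncertainties of the two Monte Carlo runs combined in quadrature. A hedged sketch, with entirely hypothetical k-eff values, of how such a shift and its combined one-sigma uncertainty could be tabulated:

```python
import math

def shift_pcm(k_a: float, k_b: float) -> float:
    """k-eff shift between two calculations, in pcm (1 pcm = 1e-5)."""
    return (k_b - k_a) * 1e5

def combined_sigma_pcm(sigma_a: float, sigma_b: float) -> float:
    """One-sigma uncertainty of the shift in pcm, assuming the two
    Monte Carlo runs are statistically independent."""
    return math.sqrt(sigma_a**2 + sigma_b**2) * 1e5

# Hypothetical ENDF/B-V vs ENDF/B-VI results for one assembly:
k5, s5 = 0.99810, 0.00020
k6, s6 = 0.99955, 0.00020
print(f"shift = {shift_pcm(k5, k6):.0f} ± {combined_sigma_pcm(s5, s6):.0f} pcm")
```

A shift is only considered significant when it exceeds a few times this combined uncertainty.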

  18. Analysis of benchmark critical experiments with ENDF/B-VI data sets

    International Nuclear Information System (INIS)

    Hardy, J. Jr.; Kahler, A.C.

    1991-01-01

    Several clean critical experiments were analyzed with ENDF/B-VI data to assess the adequacy of the data for 235U, 238U and oxygen. These experiments were (1) a set of homogeneous 235U-H2O assemblies spanning a wide range of hydrogen/uranium ratios, and (2) TRX-1, a simple, H2O-moderated Bettis lattice of slightly-enriched uranium metal rods. The analyses used the Monte Carlo program RCP01, with explicit three-dimensional geometry and detailed representation of cross sections. For the homogeneous criticals, calculated kcrit values for large, thermal assemblies show good agreement with experiment. This supports the evaluated thermal criticality parameters for 235U. However, for assemblies with smaller H/U ratios, kcrit values increase significantly with increasing leakage and flux-spectrum hardness. These trends suggest that leakage is underpredicted and that the resonance eta of the ENDF/B-VI 235U is too large. For TRX-1, reasonably good agreement is found with measured lattice parameters (reaction-rate ratios). Of primary interest is rho28, the ratio of above-thermal to thermal 238U capture. Calculated rho28 is 2.3 (± 1.7)% above measurement, suggesting that 238U resonance capture remains slightly overpredicted with ENDF/B-VI. However, agreement is better than observed with earlier versions of ENDF/B

  19. Pre-evaluation of fusion shielding benchmark experiment

    International Nuclear Information System (INIS)

    Hayashi, K.; Handa, H.; Konno, C.

    1994-01-01

    A shielding benchmark experiment is very useful for testing design codes and nuclear data for fusion devices. There are many types of benchmark experiments that should be done for fusion shielding problems, but time and budget are limited. It is therefore important to select and determine effective experimental configurations by precalculation before the experiment. The authors performed three types of pre-evaluation to determine the experimental assembly configurations of the shielding benchmark experiments planned at FNS, JAERI. (1) Void Effect Experiment - The purpose of this experiment is to measure the local increase of dose and nuclear heating behind small void(s) in shield material. The dimensions of the voids and their arrangement were decided as follows. Dose and nuclear heating were calculated both with and without void(s). The minimum size of the void was determined so that the ratio of these two results would be larger than the error of the measurement system. (2) Auxiliary Shield Experiment - The purpose of this experiment is to measure the shielding properties of B4C, Pb and W, and the dose around a superconducting magnet (SCM). The thicknesses of B4C, Pb and W and their arrangement, including multilayer configurations, were determined. (3) SCM Nuclear Heating Experiment - The purpose of this experiment is to measure the nuclear heating and dose distribution in SCM material. Because it is difficult to use liquid helium as part of the SCM mock-up material, material compositions of the SCM mock-up were surveyed to obtain a nuclear heating property similar to that of the real SCM composition

  20. Benchmark experiments of effective delayed neutron fraction βeff at FCA

    International Nuclear Information System (INIS)

    Sakurai, Takeshi; Okajima, Shigeaki

    1999-01-01

    Benchmark experiments on the effective delayed neutron fraction βeff were performed at the Fast Critical Assembly (FCA) of the Japan Atomic Energy Research Institute. The experiments were made in three cores providing a systematic change of the nuclide contributions to βeff: the XIX-1 core fueled with 93% enriched uranium, the XIX-2 core fueled with plutonium and uranium (23% enrichment) and the XIX-3 core fueled with plutonium (92% fissile Pu). Six organizations from five countries participated in these experiments and measured βeff using their own methods and instruments. A target accuracy of better than ±3% in βeff was achieved by averaging the βeff values measured using a wide variety of experimental methods. (author)
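Averaging independent βeff measurements, weighting each by its inverse variance, is the standard way to reach a quoted target accuracy like the ±3% above. A minimal sketch under that assumption; the βeff values and uncertainties below are invented for illustration, not taken from the FCA experiments:

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its one-sigma uncertainty."""
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma = math.sqrt(1.0 / sum(weights))
    return mean, sigma

# Hypothetical beta-eff results from four independent measurement methods:
values = [0.00720, 0.00731, 0.00726, 0.00724]
sigmas = [0.00025, 0.00030, 0.00020, 0.00028]
mean, sigma = weighted_mean(values, sigmas)
print(f"beta-eff = {mean:.5f} ± {sigma:.5f} ({100 * sigma / mean:.1f}%)")
```

The combined relative uncertainty shrinks roughly as the square root of the number of comparable-quality methods, which is why averaging over many techniques can beat any single measurement.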

  1. Validation of the ABBN/CONSYST constants system. Part 1: Validation through the critical experiments on compact metallic cores

    International Nuclear Information System (INIS)

    Ivanova, T.T.; Manturov, G.N.; Nikolaev, M.N.; Rozhikhin, E.V.; Semenov, M.Yu.; Tsiboulia, A.M.

    1999-01-01

    The worldwide compilation of criticality safety benchmark experiments, evaluated through the activity of the International Criticality Safety Benchmark Evaluation Project (ICSBEP), opens new possibilities for validation of the ABBN-93.1 cross-section library for criticality safety analysis. Results of calculations of small assemblies with metal-fuelled cores are presented in this paper. It is concluded that ABBN-93.1 predicts the criticality of such systems with the required accuracy

  2. Specifications, Pre-Experimental Predictions, and Test Plate Characterization Information for the Prometheus Critical Experiments

    International Nuclear Information System (INIS)

    ML Zerkle; ME Meyers; SM Tarves; JJ Powers

    2006-01-01

    This report provides specifications, pre-experimental predictions, and test plate characterization information for a series of molybdenum (Mo), niobium (Nb), rhenium (Re), tantalum (Ta), and baseline critical experiments that were developed by the Naval Reactors Prime Contractor Team (NRPCT) for the Prometheus space reactor development project. In March 2004, the Naval Reactors program was assigned the responsibility to develop, design, deliver, and operationally support civilian space nuclear reactors for NASA's Project Prometheus. The NRPCT was formed to perform this work and consisted of engineers and scientists from the Naval Reactors (NR) Program prime contractors: Bettis Atomic Power Laboratory, Knolls Atomic Power Laboratory (KAPL), and Bechtel Plant Machinery Inc (BPMI). The NRPCT developed a series of clean benchmark critical experiments to address fundamental uncertainties in the neutron cross section data for Mo, Nb, Re, and Ta in fast, intermediate, and mixed neutron energy spectra. These experiments were to be performed by Los Alamos National Laboratory (LANL) using the Planet vertical lift critical assembly machine and were designed with a simple, geometrically clean, cylindrical configuration consisting of alternating layers of test, moderator/reflector, and fuel materials. Based on reprioritization of missions and funding within NASA, Naval Reactors and NASA discontinued their collaboration on Project Prometheus in September 2005. One critical experiment and eighteen subcritical handstacking experiments were completed prior to the termination of work in September 2005. Information on the Prometheus critical experiments and the test plates produced for these experiments are expected to be of value to future space reactor development programs and to integral experiments designed to address the fundamental neutron cross section uncertainties for these refractory metals. 
This information is being provided as an orderly closeout of NRPCT work on Project

  3. Review of studies on criticality safety evaluation and criticality experiment methods

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Misawa, Tsuyoshi; Yamane, Yuichi

    2013-01-01

    Since the early 1960s, many studies on criticality safety evaluation have been conducted in Japan. Computer code systems were developed initially by employing finite difference methods, and more recently by using Monte Carlo methods. Criticality experiments have also been carried out in many laboratories in Japan as well as overseas. By effectively using these study results, the Japanese Criticality Safety Handbook was published in 1988, almost at the midpoint of the last 50 years. An increased interest has been shown in criticality safety studies, and a Working Party on Nuclear Criticality Safety (WPNCS) was set up by the Nuclear Science Committee of the Organisation for Economic Co-operation and Development in 1997. WPNCS has several task forces, each in charge of one of the following: the International Criticality Safety Benchmark Evaluation Program (ICSBEP), Subcritical Measurements, Experimental Needs, Burn-up Credit Studies and Minimum Critical Values. Criticality safety studies in Japan have been carried out in cooperation with WPNCS. This paper describes criticality safety study activities in Japan along with the contents of the Japanese Criticality Safety Handbook and the tasks of WPNCS. (author)

  4. MCNP simulation of the TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Jeraj, R.; Glumac, B.; Maucec, M.

    1996-01-01

    The complete 3D MCNP model of the TRIGA Mark II reactor is presented. It enables precise calculations of some quantities of interest in a steady-state mode of operation. Calculational results are compared to the experimental results gathered during the reactor reconstruction in 1992. Since the operating conditions were well defined at that time, the experimental results can be used as a benchmark. It may be noted that this benchmark is one of very few high-enrichment benchmarks available. In our simulations the experimental conditions were thoroughly reproduced: the fuel elements and control rods were precisely modeled, as well as the entire core configuration and the vicinity of the core. ENDF/B-VI and ENDF/B-V libraries were used. Partial results of the benchmark calculations are presented. Excellent agreement of core criticality, excess reactivity and control rod worths can be observed. (author)

  5. A Critical Thinking Benchmark for a Department of Agricultural Education and Studies

    Science.gov (United States)

    Perry, Dustin K.; Retallick, Michael S.; Paulsen, Thomas H.

    2014-01-01

    Due to an ever changing world where technology seemingly provides endless answers, today's higher education students must master a new skill set reflecting an emphasis on critical thinking, problem solving, and communications. The purpose of this study was to establish a departmental benchmark for critical thinking abilities of students majoring…

  6. Effects of existing evaluated nuclear data files on neutronics characteristics of the BFS-62-3A critical assembly benchmark model

    International Nuclear Information System (INIS)

    Semenov, Mikhail

    2002-11-01

    This report is a continuation of the study of the experiments performed on the BFS-62-3A critical assembly in Russia. The objective of the work is to determine the effect of cross-section uncertainties on reactor neutronics parameters as applied to the hybrid core of the BN-600 reactor of Beloyarskaya NPP. A two-dimensional benchmark model of BFS-62-3A was created specially for these purposes and the experimental values were reduced to it. Benchmark characteristics for this assembly are 1) criticality; 2) central fission rate ratios (spectral indices); and 3) fission rate distributions in the stainless steel reflector. The effects of nuclear data libraries have been studied by comparing the results calculated using available modern data libraries - ENDF/B-V, ENDF/B-VI, ENDF/B-VI-PT, JENDL-3.2 and ABBN-93. All results were computed by the Monte Carlo method with continuous-energy cross sections. The cross sections of the major isotopes were checked against a wide collection of criticality benchmarks. It was shown that ENDF/B-V data underestimate the criticality of fast reactor systems by up to 2% Δk. As for the rest of the data, the difference between them in criticality for BFS-62-3A is around 0.6% Δk. However, taking into account the results obtained for other fast reactor benchmarks (including steel-reflected ones), it may be concluded that the difference in criticality calculation results can reach 1% Δk. This value is in good agreement with the cross-section uncertainty evaluated for the BN-600 hybrid core (±0.6% Δk). This work is related to the JNC-IPPE Collaboration on Experimental Investigation of Excess Weapons Grade Pu Disposition in the BN-600 Reactor Using the BFS-2 Facility. (author)

  7. Concrete benchmark experiment: ex-vessel LWR surveillance dosimetry

    International Nuclear Information System (INIS)

    Ait Abderrahim, H.; D'Hondt, P.; Oeyen, J.; Risch, P.; Bioux, P.

    1993-09-01

    The analysis of DOEL-1 in-vessel and ex-vessel neutron dosimetry, using the DOT 3.5 Sn code coupled with the VITAMIN-C cross-section library, showed the same C/E values for different detectors at the surveillance capsule and ex-vessel cavity positions. These results seem to be in contradiction with those obtained in several benchmark experiments (PCA, PSF, VENUS...) when using the same computational tools. Indeed, a strongly decreasing radial trend of the C/E was observed, partly explained by the overestimation of iron inelastic scattering. The flat trend seen in DOEL-1 could be explained by compensating errors in the calculation, such as the backscattering due to the concrete walls outside the cavity. The 'Concrete Benchmark' experiment has been designed to judge the ability of these calculational methods to treat the backscattering. This paper describes the 'Concrete Benchmark' experiment, the measured and computed neutron dosimetry results, and their comparison. This preliminary analysis seems to indicate an overestimation of the backscattering effect in the calculations. (authors). 5 figs., 1 tab., 7 refs

  8. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduce the excess conservatism and bring the predictions closer to the benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for 10, 20, 30, 40, 50, and 60 GWd/MTU and 3 cooling times (100 h, 5 years, and 15 years). These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess that the 5% decrement approach is conservative for determining depletion uncertainty
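The depletion reactivity decrement behind these benchmark cases is simply the difference in reactivity between fresh and depleted fuel, and the Kopp-memo approach mentioned above assigns 5% of that decrement as the depletion uncertainty. A hypothetical sketch (the k-eff values are invented, not EPRI benchmark data):

```python
def reactivity(k: float) -> float:
    """Reactivity rho = (k - 1) / k, in units of delta-k/k."""
    return (k - 1.0) / k

def depletion_decrement(k_fresh: float, k_depleted: float) -> float:
    """Reactivity decrement from fresh to depleted fuel (a positive number
    when burnup reduces reactivity)."""
    return reactivity(k_fresh) - reactivity(k_depleted)

# Hypothetical rack k-eff values for fresh fuel and fuel at some burnup:
k_fresh, k_depleted = 0.98500, 0.92000
decrement = depletion_decrement(k_fresh, k_depleted)
kopp_uncertainty = 0.05 * decrement  # 5% of the decrement, per the Kopp memo
print(f"decrement = {decrement * 1e5:.0f} pcm, "
      f"5% uncertainty = {kopp_uncertainty * 1e5:.0f} pcm")
```

Because the assigned uncertainty scales with the decrement itself, it grows with burnup, which is part of why the article examines whether the flat 5% figure remains conservative.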

  9. Benchmark calculations for critical experiments at FKBN-M facility with uranium-plutonium-polyethylene systems using JENDL-3.2 and MVP Monte-Carlo code

    International Nuclear Information System (INIS)

    Obara, Toru; Morozov, A.G.; Kevrolev, V.V.; Kuznetsov, V.V.; Treschalin, S.A.; Lukin, A.V.; Terekhin, V.A.; Sokolov, Yu.A.; Kravchenko, V.G.

    2000-01-01

    Benchmark calculations were performed for critical experiments at the FKBN-M facility at RFNC-VNIITF, Russia, using the JENDL-3.2 nuclear data library and the continuous-energy Monte Carlo code MVP. The fissile materials were high-enriched uranium and plutonium. Polyethylene was used as moderator. The neutron spectrum was varied by changing the geometry. The MVP calculation results showed some errors. These were discussed in terms of the reaction rates and η values obtained by MVP. The discussion showed the possibility that the cross sections of U-235 have different trends of error in the fast and thermal energy regions, and also the possibility of some error in the cross section of Pu-239 in the high energy region. (author)

  10. Critical experiments at Sandia National Laboratories

    International Nuclear Information System (INIS)

    Harms, Gary A.; Ford, John T.; Barber, Allison Delo

    2010-01-01

    Sandia National Laboratories (SNL) has conducted radiation effects testing for the Department of Energy (DOE) and other contractors supporting the DOE since the 1960s. Over this period, the research reactor facilities at Sandia have had a primary mission to provide appropriate nuclear radiation environments for radiation testing and qualification of electronic components and other devices. The current generation of reactors includes the Annular Core Research Reactor (ACRR), a water-moderated pool-type reactor, fueled by elements constructed from UO2-BeO ceramic fuel pellets, and the Sandia Pulse Reactor III (SPR-III), a bare metal fast burst reactor utilizing a uranium-molybdenum alloy fuel. The SPR-III is currently defueled. The SPR Facility (SPRF) has hosted a series of critical experiments. A purpose-built critical experiment was first operated at the SPRF in the late 1980s. This experiment, called the Space Nuclear Thermal Propulsion Critical Experiment (CX), was designed to explore the reactor physics of a nuclear thermal rocket motor. This experiment was fueled with highly-enriched uranium carbide fuel in annular water-moderated fuel elements. The experiment program was completed and the fuel for the experiment was moved off-site. A second critical experiment, the Burnup Credit Critical Experiment (BUCCX), was operated at Sandia in 2002. The critical assembly for this experiment was based on the assembly used in the CX, modified to accommodate low-enriched pin-type fuel in a water moderator. This experiment was designed as a platform in which the reactivity effects of specific fission product poisons could be measured. Experiments were carried out on rhodium, an important fission product poison. The fuel and assembly hardware for the BUCCX remain at Sandia and are available for future experimentation. The critical experiment currently in operation at the SPRF is the Seven Percent Critical Experiment (7uPCX). This experiment is designed to provide benchmark

  11. KENO-IV code benchmark calculation, (6)

    International Nuclear Information System (INIS)

    Nomura, Yasushi; Naito, Yoshitaka; Yamakawa, Yasuhiro.

    1980-11-01

    A series of benchmark tests has been undertaken at JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multigroup constants library MGCL. The present report describes the results of a benchmark test using criticality experiments with plutonium fuel in various shapes. In all, 33 experiments were calculated, covering Pu(NO3)4 aqueous solution, Pu metal, and PuO2-polystyrene compacts in various shapes (sphere, cylinder, rectangular parallelepiped). The effective multiplication factors calculated for the 33 cases range widely, between 0.955 and 1.045, due to the wide range of system variables. (author)

  12. Construction of a Benchmark for the User Experience Questionnaire (UEQ)

    Directory of Open Access Journals (Sweden)

    Martin Schrepp

    2017-08-01

    Questionnaires are a cheap and highly efficient tool for obtaining a quantitative measure of a product’s user experience (UX). However, it is not always easy to decide whether a questionnaire result really shows that a product satisfies this quality aspect, so a benchmark is useful: it allows comparing the results for one product to a large set of other products. In this paper we describe a benchmark for the User Experience Questionnaire (UEQ), a widely used evaluation tool for interactive products. We also describe how the benchmark can be applied in the quality assurance process for concrete projects.
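As a rough illustration of how such a benchmark is applied, a product's mean scale score can be mapped to a category by comparing it against interval boundaries. The category names and boundaries below are hypothetical placeholders, not the published UEQ benchmark values:

```python
# Illustrative sketch: assigning a questionnaire scale mean to a benchmark
# category. The boundaries are invented for demonstration; the real UEQ
# benchmark publishes its own intervals per scale.

HYPOTHETICAL_BENCHMARK = {
    # category: lower bound of the scale mean (inclusive), checked in order
    "excellent":     1.75,
    "good":          1.50,
    "above average": 1.00,
    "below average": 0.50,
}

def categorize(scale_mean):
    for category, lower in HYPOTHETICAL_BENCHMARK.items():
        if scale_mean >= lower:
            return category
    return "bad"

print(categorize(1.6))   # falls in the hypothetical "good" interval
```

The dictionary is ordered from best to worst category, so the first matching lower bound wins.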

  13. Reactor group constants and benchmark test

    Energy Technology Data Exchange (ETDEWEB)

    Takano, Hideki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

    The evaluated nuclear data files such as JENDL, ENDF/B-VI and JEF-2 are validated by analyzing critical mock-up experiments for various reactor types and assessing their applicability to nuclear characteristics such as criticality, reaction rates, and reactivities. This is called benchmark testing. In nuclear calculations, the diffusion and transport codes use group constant libraries which are generated by processing the nuclear data files. In this paper, the calculation methods for the reactor group constants and the benchmark tests are described. Finally, a new group constants scheme is proposed. (author)

  14. Experiences with installing and benchmarking SCALE 4.0 on workstations

    International Nuclear Information System (INIS)

    Montierth, L.M.; Briggs, J.B.

    1992-01-01

    The advent of economical, high-speed workstations has placed on the criticality engineer's desktop the means to perform computational analysis that was previously possible only on mainframe computers. With this capability comes the need to modify and maintain criticality codes for use on a variety of different workstations. Due to the use of nonstandard coding [in lieu of American National Standards Institute (ANSI) standards], compiler differences, and other machine idiosyncrasies, there is a definite need to systematically test and benchmark all codes ported to workstations. Once benchmarked, a user environment must be maintained to ensure that the user code does not become corrupted. The goal in creating a workstation version of the criticality safety analysis sequence (CSAS) codes in SCALE 4.0 was to start with the Cray versions and change as little source code as possible, yet produce as generic a code as possible. To date, the code has been ported to the IBM RISC 6000, Data General AViiON 400, and Silicon Graphics 4D-35 (all using the same source code), and to the Hewlett Packard Series 700 workstations. The code is maintained under a configuration control procedure. In this paper, the authors address considerations that pertain to the installation and benchmarking of CSAS

  15. Burn-up TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Persic, A.; Ravnik, M.; Zagar, T.

    1998-01-01

    Different reactor codes are used for calculations of reactor parameters. The accuracy of these programs is tested by comparing calculated values with experimental results, which requires well-defined and accurately measured benchmarks. The experimental results of reactivity measurements, fuel element reactivity worth distributions, and burn-up measurements are presented in this paper. The experiments were performed with a partly burnt reactor core. The experimental conditions were well defined, so that the results can be used as a burn-up benchmark test case for TRIGA Mark II reactor calculations. (author)

  16. Critical experiments at Sandia National Laboratories

    International Nuclear Information System (INIS)

    Harms, G.A.; Ford, J.T.; Barber, A.D.

    2011-01-01

    Sandia National Laboratories (SNL) has conducted radiation effects testing for the Department of Energy (DOE) and other contractors supporting the DOE since the 1960s. Over this period, the research reactor facilities at Sandia have had a primary mission to provide appropriate nuclear radiation environments for radiation testing and qualification of electronic components and other devices. The current generation of reactors includes the Annular Core Research Reactor (ACRR), a water-moderated pool-type reactor, fueled by elements constructed from UO2-BeO ceramic fuel pellets, and the Sandia Pulse Reactor III (SPR-III), a bare metal fast burst reactor utilizing a uranium-molybdenum alloy fuel. The SPR-III is currently defueled. The SPR Facility (SPRF) has hosted a series of critical experiments. A purpose-built critical experiment was first operated at the SPRF in the late 1980s. This experiment, called the Space Nuclear Thermal Propulsion Critical Experiment (CX), was designed to explore the reactor physics of a nuclear thermal rocket motor. This experiment was fueled with highly-enriched uranium carbide fuel in annular water-moderated fuel elements. The experiment program was completed and the fuel for the experiment was moved off-site. A second critical experiment, the Burnup Credit Critical Experiment (BUCCX), was operated at Sandia in 2002. The critical assembly for this experiment was based on the assembly used in the CX, modified to accommodate low-enriched pin-type fuel in a water moderator. This experiment was designed as a platform in which the reactivity effects of specific fission product poisons could be measured. Experiments were carried out on rhodium, an important fission product poison. The fuel and assembly hardware for the BUCCX remain at Sandia and are available for future experimentation. The critical experiment currently in operation at the SPRF is the Seven Percent Critical Experiment (7uPCX). This experiment is designed to provide benchmark

  17. Critical experiments at Sandia National Laboratories

    Energy Technology Data Exchange (ETDEWEB)

    Harms, G.A.; Ford, J.T.; Barber, A.D., E-mail: gaharms@sandia.gov [Sandia National Laboratories, Albuquerque, NM (United States)

    2011-07-01

    benchmark reactor physics data to support validation of the reactor physics codes used to design commercial reactor fuel elements in an enrichment range above the current 5% enrichment cap. A first set of critical experiments in the 7uPCX has been completed, and more experiments are planned in the 7uPCX series. The critical experiments at Sandia National Laboratories are currently funded by the US Department of Energy Nuclear Criticality Safety Program (NCSP). The NCSP has committed to maintain the critical experiment capability at Sandia and to support the development of a critical experiments training course at the facility. The training course is intended to provide hands-on experimental experience for the training of new, and re-training of practicing, Nuclear Criticality Safety Engineers. The current plans are for the development of the course to continue through the first part of fiscal year 2011, culminating in the delivery of a prototype of the course in the latter part of the fiscal year. The course will be available in fiscal year 2012. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  18. Recommended nuclear criticality safety experiments in support of the safe transportation of fissile material

    International Nuclear Information System (INIS)

    Tollefson, D.A.; Elliott, E.P.; Dyer, H.R.; Thompson, S.A.

    1993-01-01

    Validation of computer codes and nuclear data (cross-section) libraries using benchmark-quality critical (or certain subcritical) experiments is an essential part of a nuclear criticality safety evaluation. The validation results establish the credibility of the calculational tools for use in evaluating a particular application. Validation of the calculational tools is addressed in several American National Standards Institute/American Nuclear Society (ANSI/ANS) standards, with ANSI/ANS-8.1 being the most relevant. Documentation of the validation is a required part of all safety analyses involving significant quantities of fissile materials. In the case of transportation of fissile materials, the safety analysis report for packaging (SARP) must contain a thorough discussion of benchmark experiments, detailing how the experiments relate to the significant packaging and contents materials (fissile, moderating, neutron absorbing) within the package. The experiments recommended in this paper are needed to address certain areas related to the transportation of unirradiated fissile materials in drum-type containers (packagings) for which current data are inadequate or lacking.

  19. Catalogue and history of the criticality experiments at Saclay (1958-1964) and Valduc / Building 10 (1964-2003); Catalogue et historique des experiences de criticite Saclay (1958 - 1964) Valduc / Batiment 10 (1964-2003)

    Energy Technology Data Exchange (ETDEWEB)

    Poullot, G.; Dumont, V.; Anno, J.; Cousinou, P. [Institut de Radioprotection et de Surete Nucleaire (IRSN), 92 - Fontenay aux Roses (France); Grivot, P.; Girault, E.; Fouillaud, P.; Barbry, F. [CEA Valduc, 21 - Is-sur-Tille (France)

    2003-07-01

    The International Criticality Safety Benchmark Evaluation Project (I.C.S.B.E.P.) working group aims to supply the international community with criticality benchmark experiments of certified quality, used to guarantee the qualification of criticality calculation codes. The following have been defined: a structure for classifying experiments, a standard presentation format, and a working structure comprising evaluation, internal and external checks, and presentation in plenary session. After a favourable opinion from the working group, the synthesis document, called an evaluation, is integrated into the general I.C.S.B.E.P. report. (N.C.)

  20. Concrete benchmark experiment: ex-vessel LWR surveillance dosimetry; Experience "Benchmark beton" pour la dosimetrie hors cuve dans les reacteurs a eau legere

    Energy Technology Data Exchange (ETDEWEB)

    Ait Abderrahim, H.; D`Hondt, P.; Oeyen, J.; Risch, P.; Bioux, P.

    1993-09-01

    The analysis of DOEL-1 in-vessel and ex-vessel neutron dosimetry, using the DOT 3.5 Sn code coupled with the VITAMIN-C cross-section library, showed the same C/E values for different detectors at the surveillance capsule and ex-vessel cavity positions. These results seem to contradict those obtained in several benchmark experiments (PCA, PSF, VENUS...) using the same computational tools, in which a strongly decreasing radial trend of the C/E was observed, partly explained by the overestimation of iron inelastic scattering. The flat trend seen in DOEL-1 could be explained by compensating errors in the calculation, such as the backscattering due to the concrete walls outside the cavity. The "Concrete Benchmark" experiment has been designed to judge the ability of these calculation methods to treat the backscattering. This paper describes the "Concrete Benchmark" experiment, the measured and computed neutron dosimetry results, and their comparison. This preliminary analysis seems to indicate an overestimation of the backscattering effect in the calculations. (authors). 5 figs., 1 tab., 7 refs.

  1. Assessment of the available ²³³U cross-section evaluations in the calculation of critical benchmark experiments

    Energy Technology Data Exchange (ETDEWEB)

    Leal, L.C.; Wright, R.Q.

    1996-10-01

    In this report we investigate the adequacy of the available ²³³U cross-section data for the calculation of experimental critical systems. The ²³³U evaluations provided in two evaluated nuclear data libraries, the U.S. Data Bank [ENDF/B (Evaluated Nuclear Data Files)] and the Japanese Data Bank [JENDL (Japanese Evaluated Nuclear Data Library)], are examined. Calculations were performed for six thermal and ten fast experimental critical systems using the Sn transport code XSDRNPM. To verify the performance of the ²³³U cross-section data for nuclear criticality safety applications in which the neutron energy spectrum is predominantly in the epithermal energy range, calculations of four numerical benchmark systems with energy spectra in the intermediate energy range were done. These calculations serve only as an indication of the difference in calculated results that may be expected when the two ²³³U cross-section evaluations are used for problems with neutron spectra in the intermediate energy range. Additionally, comparisons of experimental and calculated central fission rate ratios were also made. The study suggests that an ad hoc ²³³U evaluation based on the JENDL library provides better overall results for both fast and thermal experimental critical systems.

  2. Assessment of the Available ²³³U Cross-Section Evaluations in the Calculation of Critical Benchmark Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Leal, L.C.

    1993-01-01

    In this report we investigate the adequacy of the available ²³³U cross-section data for the calculation of experimental critical systems. The ²³³U evaluations provided in two evaluated nuclear data libraries, the U.S. Data Bank [ENDF/B (Evaluated Nuclear Data Files)] and the Japanese Data Bank [JENDL (Japanese Evaluated Nuclear Data Library)], are examined. Calculations were performed for six thermal and ten fast experimental critical systems using the Sn transport code XSDRNPM. To verify the performance of the ²³³U cross-section data for nuclear criticality safety applications in which the neutron energy spectrum is predominantly in the epithermal energy range, calculations of four numerical benchmark systems with energy spectra in the intermediate energy range were done. These calculations serve only as an indication of the difference in calculated results that may be expected when the two ²³³U cross-section evaluations are used for problems with neutron spectra in the intermediate energy range. Additionally, comparisons of experimental and calculated central fission rate ratios were also made. The study suggests that an ad hoc ²³³U evaluation based on the JENDL library provides better overall results for both fast and thermal experimental critical systems.

  3. Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy; Benchmark-Experiment zur Verifikation von Strahlungstransportrechnungen fuer die Dosimetrie in der Strahlentherapie

    Energy Technology Data Exchange (ETDEWEB)

    Renner, Franziska [Physikalisch-Technische Bundesanstalt (PTB), Braunschweig (Germany)

    2016-11-01

    Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide.
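One common way to state whether a measured and a simulated value "coincide" given their uncertainties is to compare their difference against the combined standard uncertainty. The sketch below shows this generic consistency check; it is not necessarily the exact procedure used in the PTB work, and the dose values are hypothetical, though the relative uncertainties follow the abstract (about 0.7% experimental, about 1.0% simulated):

```python
import math

# Generic consistency check between a measured and a simulated value using
# the combined standard uncertainty. Relative uncertainties of 0.7% and 1.0%
# mirror the abstract; the dose readings themselves are hypothetical.

def consistent(measured, simulated, u_rel_meas=0.007, u_rel_sim=0.010, k=2.0):
    """True if the values agree within k times the combined uncertainty."""
    u_combined = math.hypot(u_rel_meas * measured, u_rel_sim * simulated)
    return abs(measured - simulated) <= k * u_combined

# Hypothetical dose readings (arbitrary units):
print(consistent(1.000, 1.008))   # 0.8% difference, within 2 sigma combined
print(consistent(1.000, 1.050))   # 5% difference, well outside
```

The coverage factor k = 2 corresponds to roughly 95% coverage for normally distributed uncertainties.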

  4. Classification of criticality calculations with correlation coefficient method and its application to OECD/NEA burnup credit benchmarks phase III-A and II-A

    International Nuclear Information System (INIS)

    Okuno, Hiroshi

    2003-01-01

    A method for classifying benchmark results of criticality calculations according to similarity is proposed in this paper. After formulation of the method utilizing correlation coefficients, it was applied to the burnup credit criticality benchmarks Phase III-A and II-A, which were conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (OECD/NEA). The Phase III-A benchmark was a series of criticality calculations for irradiated Boiling Water Reactor (BWR) fuel assemblies, whereas the Phase II-A benchmark was a suite of criticality calculations for irradiated Pressurized Water Reactor (PWR) fuel pins. These benchmark problems and their results are summarized. The correlation coefficients were calculated, and sets of benchmark calculation results were classified according to the criterion that the correlation coefficient was no less than 0.15 for the Phase III-A and 0.10 for the Phase II-A benchmark. When two benchmark calculation results belonged to the same group, one result was found to be predictable from the other; an example is shown for each of the benchmarks. While the evaluated nuclear data seemed to be the main factor behind the classification, further investigation is required to find other factors. (author)
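The classification idea described above can be sketched as follows: compute pairwise correlation coefficients between benchmark result vectors and group the pairs whose coefficient reaches the threshold. The participant names, k-eff values, and data are illustrative, not taken from the benchmark:

```python
import math

# Minimal sketch: Pearson correlation between participants' benchmark result
# vectors, grouping pairs that meet a similarity threshold. Hypothetical data.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical k-eff results from three calculations over four benchmark cases.
results = {
    "A": [0.995, 1.001, 0.998, 1.004],
    "B": [0.996, 1.002, 0.999, 1.005],   # tracks A closely
    "C": [1.003, 0.994, 1.001, 0.997],   # moves opposite to A
}

threshold = 0.15   # the Phase III-A criterion quoted in the abstract
names = list(results)
similar = [(p, q) for i, p in enumerate(names) for q in names[i + 1:]
           if pearson(results[p], results[q]) >= threshold]
print(similar)
```

Only the pair whose result vectors move together survives the threshold, which is the sense in which one member's results become predictable from the other's.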

  5. Russian nuclear criticality experiments. Status and prospects

    International Nuclear Information System (INIS)

    Gagarinski, A.Yu.

    2003-01-01

    calculation models will not exceed the calculation uncertainties characteristic for present-day codes. The second type is represented by well-described experiments on critical assemblies with more or less simple geometry where uncertainties in determining the criticality mainly due to inexact knowledge of the material characteristics and geometry are relatively small, which makes these experiments suitable for validation of the most powerful (for the given period) criticality calculation methods over a broad range of parameters. These so-called benchmark data represent the 'golden fund' for emerging tasks. (author)

  6. Critical Assessment of Metagenome Interpretation - a benchmark of metagenomics software.

    Science.gov (United States)

    Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J; Chia, Burton K H; Denis, Bertrand; Froula, Jeff L; Wang, Zhong; Egan, Robert; Don Kang, Dongwan; Cook, Jeffrey J; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael D; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z; Cuevas, Daniel A; Edwards, Robert A; Saha, Surya; Piro, Vitor C; Renard, Bernhard Y; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C; Woyke, Tanja; Vorholt, Julia A; Schulze-Lefert, Paul; Rubin, Edward M; Darling, Aaron E; Rattei, Thomas; McHardy, Alice C

    2017-11-01

    Methods for assembly, taxonomic profiling and binning are key to interpreting metagenome data, but a lack of consensus about benchmarking complicates performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on highly complex and realistic data sets, generated from ∼700 newly sequenced microorganisms and ∼600 novel viruses and plasmids and representing common experimental setups. Assembly and genome binning programs performed well for species represented by individual genomes but were substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below family level. Parameter settings markedly affected performance, underscoring their importance for program reproducibility. The CAMI results highlight current challenges but also provide a roadmap for software selection to answer specific research questions.

  7. Benchmarking criticality analysis of TRIGA fuel storage racks.

    Science.gov (United States)

    Robinson, Matthew Loren; DeBey, Timothy M; Higginbotham, Jack F

    2017-01-01

    A criticality analysis was benchmarked to sub-criticality measurements of the hexagonal fuel storage racks at the United States Geological Survey TRIGA MARK I reactor in Denver. These racks, which hold up to 19 fuel elements each, are arranged at 0.61 m (2 ft) spacings around the outer edge of the reactor. A 3-dimensional model of the racks was created using MCNP5, and the model was verified experimentally by comparison to measured subcritical multiplication data collected in an approach-to-critical loading of two of the racks. The validated model was then used to show that in the extreme condition where the entire circumference of the pool was lined with racks loaded with used fuel, the storage array is subcritical with a k value of about 0.71, well below the regulatory limit of 0.8. A model was also constructed of the rectangular 2×10 fuel storage array used in many other TRIGA reactors to validate the technique against the original TRIGA licensing sub-critical analysis performed in 1966. The fuel used in this study was standard 20% enriched (LEU) aluminum or stainless steel clad TRIGA fuel. Copyright © 2016. Published by Elsevier Ltd.
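Subcritical multiplication data from an approach-to-critical loading, as mentioned above, are conventionally analyzed with a 1/M (inverse multiplication) plot: the inverse multiplication is fit against fuel loading and extrapolated to 1/M = 0 to predict the critical loading. A minimal sketch with hypothetical count rates:

```python
# Illustrative 1/M extrapolation for an approach-to-critical loading.
# All counts and loadings are hypothetical.

def inverse_multiplication(count_rate, baseline_rate):
    """1/M relative to the source-only (no fuel) baseline count rate."""
    return baseline_rate / count_rate

baseline = 100.0                      # counts/s, source only (hypothetical)
loadings = [4, 8, 12, 16]             # fuel elements loaded
rates = [125.0, 167.0, 250.0, 500.0]  # detector count rates (hypothetical)

inv_m = [inverse_multiplication(r, baseline) for r in rates]

# Least-squares line 1/M = a + b * loading, extrapolated to 1/M = 0.
n = len(loadings)
sx, sy = sum(loadings), sum(inv_m)
sxx = sum(x * x for x in loadings)
sxy = sum(x * y for x, y in zip(loadings, inv_m))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n
critical_loading = -a / b
print(f"predicted critical loading: {critical_loading:.1f} elements")
```

In practice the 1/M curve is not exactly linear, so the extrapolation is repeated after each loading step and the predicted critical loading is approached conservatively.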

  8. Re-evaluation of the criticality experiments of the "Otto Hahn" nuclear ship reactor

    Energy Technology Data Exchange (ETDEWEB)

    Lengar, I.; Snoj, L.; Rogan, P.; Ravnik, M. [Jozef Stefan Institute, Ljubljana (Slovenia)

    2008-11-15

    Several series of experiments with the FDR reactor (an advanced pressurized light-water reactor) were performed in 1972 in the Geesthacht critical facility ANEX. The experiments were performed to test the core prior to its use for the propulsion of the first German nuclear merchant ship, the "Otto Hahn". In the present paper a calculational re-evaluation of the experiments is described, using up-to-date computer codes (the Monte Carlo code MCNP5) and nuclear data (ENDF/B-VI release 6). It focuses on the determination of uncertainties in the benchmark model of the experimental set-up, originating mainly from the limited set of information still available about the experiments. The effects of the identified uncertainties on the multiplication factor were studied. The sensitivity studies include parametric variation of material composition and geometry. With the combined total uncertainty found to be 0.0050 in k_eff, the experiments qualify as criticality safety benchmark experiments. (orig.)

  9. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    International Nuclear Information System (INIS)

    Bess, John D.; Marshall, Margaret A.; Gorham, Mackenzie L.; Christensen, Joseph; Turnbull, James C.; Clark, Kim

    2011-01-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) (1) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) (2) were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  10. U.S. integral and benchmark experiments

    International Nuclear Information System (INIS)

    Maienschein, F.C.

    1978-01-01

    Verification of methods for analysis of radiation-transport (shielding) problems in Liquid-Metal Fast Breeder Reactors has required a series of experiments that can be classified as benchmark, parametric, or design-confirmation experiments. These experiments, performed at the Oak Ridge Tower Shielding Facility, have included measurements of neutron transport in bulk shields of sodium, steel, and inconel and in configurations that simulate lower axial shields, pipe chases, and top-head shields. They have also included measurements of the effects of fuel stored within the reactor vessel and of gamma-ray energy deposition (heating). The paper consists of brief comments on these experiments, and also on a recent experiment in which neutron streaming problems in a Gas-Cooled Fast Breeder Reactor were studied. The need for additional experiments for a few areas of LMFBR shielding is also cited

  11. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1

    International Nuclear Information System (INIS)

    Van Der Marck, S. C.

    2012-01-01

    Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category, and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, W. (authors)

  12. Links among available integral benchmarks and differential data evaluations, computational biases and uncertainties, and nuclear criticality safety biases on potential MOX production throughput

    International Nuclear Information System (INIS)

    Goluoglu, S.; Hopper, C.M.

    2004-01-01

    Through the use of Oak Ridge National Laboratory's recently developed and applied sensitivity and uncertainty computational analysis techniques, this paper presents the relevance and importance of available and needed integral benchmarks and differential data evaluations impacting potential MOX production throughput determinations relative to low-moderated MOX fuel blending operations. The relevance and importance in the availability of or need for critical experiment benchmarks and data evaluations are presented in terms of computational biases as influenced by computational and experimental sensitivities and uncertainties relative to selected MOX production powder blending processes. Recent developments for estimating the safe margins of subcriticality for assuring nuclear criticality safety for process approval are presented. In addition, the impact of the safe margins (due to computational biases and uncertainties) on potential MOX production throughput will also be presented. (author)

  13. Influence of the ab initio n–d cross sections in the critical heavy-water benchmarks

    International Nuclear Information System (INIS)

    Morillon, B.; Lazauskas, R.; Carbonell, J.

    2013-01-01

    Highlights: ► We solve the three-nucleon problem using different NN potentials (MT, AV18 and INOY) to calculate the neutron–deuteron cross sections. ► These cross sections are compared to the existing experimental data and to international libraries. ► We describe the different sets of heavy-water benchmarks for which the Monte Carlo simulations have been performed including our new neutron–deuteron cross sections. ► The results obtained with the ab initio INOY potential have been compared with calculations based on the international library cross sections and are found to be of the same quality. - Abstract: The n–d elastic and breakup cross sections are computed by solving the three-body Faddeev equations for realistic and semi-realistic nucleon–nucleon potentials. These cross sections are inserted in the Monte Carlo simulation of the nuclear processes considered in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook). The results obtained using these ab initio n–d cross sections are compared with those provided by the most renowned international libraries.

  14. OECD/NEA burnup credit calculational criticality benchmark Phase I-B results

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, M.D.; Parks, C.V. [Oak Ridge National Lab., TN (United States); Brady, M.C. [Sandia National Labs., Las Vegas, NV (United States)

    1996-06-01

    In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is assumed for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in their ability to estimate the spent fuel concentrations of most actinides. For the fission products studied, all methods agree to within 11% of the average; most deviations are less than 10%, and many are less than 5%. The exceptions are Sm-149, Sm-151, and Gd-155.
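The Phase I-B intercomparison metric described above is essentially a percent deviation of each calculated nuclide concentration from a reference value (the average of all submissions). A minimal sketch, with illustrative numbers that are not actual Phase I-B data:

```python
# Hypothetical spent-fuel isotopic comparison; all concentrations are invented
# for illustration (units arbitrary, e.g. g per g initial U).
reference = {"U-235": 8.47e-3, "Pu-239": 4.26e-3, "Sm-149": 1.09e-7}   # mean of submissions
submission = {"U-235": 8.61e-3, "Pu-239": 4.19e-3, "Sm-149": 1.31e-7}  # one participant

def percent_deviation(calc, ref):
    """Percent deviation of a calculated concentration from the reference value."""
    return 100.0 * (calc - ref) / ref

for nuclide in reference:
    dev = percent_deviation(submission[nuclide], reference[nuclide])
    flag = "  <-- exceeds 10%" if abs(dev) > 10.0 else ""
    print(f"{nuclide}: {dev:+.1f}%{flag}")
```

In this invented example the fission product Sm-149 shows the largest deviation, mirroring the pattern reported for Sm-149, Sm-151, and Gd-155 in the benchmark.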

  15. OECD/NEA burnup credit calculational criticality benchmark Phase I-B results

    International Nuclear Information System (INIS)

    DeHart, M.D.; Parks, C.V.; Brady, M.C.

    1996-06-01

    In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is assumed for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in their ability to estimate the spent fuel concentrations of most actinides. For the fission products studied, all methods agree to within 11% of the average; most deviations are less than 10%, and many are less than 5%. The exceptions are Sm-149, Sm-151, and Gd-155.

  16. Calculational assessment of critical experiments with mixed-oxide fuel pin arrays moderated by organic solution

    International Nuclear Information System (INIS)

    Smolen, G.R.; Funabashi, H.

    1987-01-01

    Critical experiments have been conducted with organically moderated mixed-oxide (MOX) fuel pin assemblies at the Pacific Northwest Lab. Critical Mass Lab. These experiments are part of a joint exchange program between the US Dept. of Energy and the Power Reactor and Nuclear Fuel Development Corp. of Japan in the area of criticality data development. The purpose of these experiments is to benchmark computer codes and cross-section libraries and to assess the reactivity difference between systems moderated by water and those moderated by an organic solution. Past studies have indicated that some organic mixtures may be better moderators than water. This topic is of particular importance to the criticality safety of fuel processing plants where fissile material is dissolved in organic solutions during the solvent extraction process. In the past, it has been assumed that the codes and libraries benchmarked with water-moderated experiments were adequate when performing design and licensing studies of organically moderated systems. Calculations presented in this paper indicate that the SCALE code system and the 27-energy-group cross-section library accurately compute keff for organically moderated MOX fuel pin assemblies. Furthermore, the reactivity of an organic solution with a 32 vol % TBP/68 vol % NPH mixture in a heterogeneous configuration is the same, for practical purposes, as water.

  17. Calculational assessment of critical experiments with mixed oxide fuel pin arrays moderated by organic solution

    International Nuclear Information System (INIS)

    Smolen, G.R.

    1987-01-01

    Critical experiments have been conducted with organic-moderated mixed oxide (MOX) fuel pin assemblies at the Pacific Northwest Laboratory (PNL) Critical Mass Laboratory (CML). These experiments are part of a joint exchange program between the United States Department of Energy (USDOE) and the Power Reactor and Nuclear Fuel Development Corporation (PNC) of Japan in the area of criticality data development. The purpose of these experiments is to benchmark computer codes and cross-section libraries and to assess the reactivity difference between systems moderated by water and those moderated by an organic solution. Past studies have indicated that some organic mixtures may be better moderators than water. This topic is of particular importance to the criticality safety of fuel processing plants where fissile material is dissolved in organic solutions during the solvent extraction process. In the past, it has been assumed that the codes and libraries benchmarked with water-moderated experiments were adequate when performing design and licensing studies of organic-moderated systems. Calculations presented in this paper indicate that the SCALE code system and the 27-energy-group cross-section library accurately compute keff for organic-moderated MOX fuel-pin assemblies. Furthermore, the reactivity of an organic solution with a 32-vol-% TBP/68-vol-% NPH mixture in a heterogeneous configuration is the same, for practical purposes, as water. 5 refs

  18. Development of an ICSBEP Benchmark Evaluation, Nearly 20 Years of Experience

    International Nuclear Information System (INIS)

    Briggs, J. Blair; Bess, John D.

    2011-01-01

    The basic structure of all ICSBEP benchmark evaluations is essentially the same and includes (1) a detailed description of the experiment; (2) an evaluation of the experiment, including an exhaustive effort to quantify the effects of uncertainties on measured quantities; (3) a concise presentation of benchmark-model specifications; (4) sample calculation results; and (5) a summary of experimental references. Computer code input listings and other relevant information are generally preserved in appendixes. Details of an ICSBEP evaluation are presented.

  19. Criticality benchmark results for the ENDF60 library with MCNP trademark

    International Nuclear Information System (INIS)

    Keen, N.D.; Frankle, S.C.; MacFarlane, R.E.

    1995-01-01

    The continuous-energy neutron data library ENDF60, for use with the Monte Carlo N-Particle radiation transport code MCNP4A, was released in the fall of 1994. The ENDF60 library comprises 124 nuclide data files based on the ENDF/B-VI (B-VI) evaluations through Release 2. Fifty-two percent of these B-VI evaluations are translations from ENDF/B-V (B-V). The remaining forty-eight percent are new evaluations, some of which have changed significantly. Among these changes are greatly increased use of isotopic evaluations, more extensive resonance-parameter evaluations, and energy-angle correlated distributions for secondary particles. In particular, the upper energy limit of the resolved resonance region for 235U, 238U, and 239Pu has been extended from 0.082, 4.0, and 0.301 keV to 2.25, 10.0, and 2.5 keV, respectively. As regulatory oversight has advanced and performing critical experiments has become more difficult, there has been an increased reliance on computational methods. For the criticality safety community, the performance of the combined transport code and data library is of interest. The purpose of this abstract is to provide benchmarking results to aid users in determining the best data library for their application.

  20. Proceedings of the workshop on integral experiment covariance data for critical safety validation

    Energy Technology Data Exchange (ETDEWEB)

    Stuke, Maik (ed.)

    2016-04-15

    For some time, attempts to quantify the statistical dependencies of critical experiments and to account for them properly in validation procedures have been discussed in the literature by various groups. Besides the development of suitable methods, the quality and modeling issues of the freely available experimental data are a particular focus of current discussions, carried out for example in the Expert Group on Uncertainty Analysis for Criticality Safety Assessment (UACSA) of the OECD-NEA Nuclear Science Committee. The same committee also compiles and publishes the freely available experimental data in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. Most of these experiments were performed as series and might share parts of their experimental setups, leading to correlated results. The quality of the determination of these correlations and of the underlying covariance data depends strongly on the quality of the documentation of the experiments.

  1. Proceedings of the workshop on integral experiment covariance data for critical safety validation

    International Nuclear Information System (INIS)

    Stuke, Maik

    2016-04-01

    For some time, attempts to quantify the statistical dependencies of critical experiments and to account for them properly in validation procedures have been discussed in the literature by various groups. Besides the development of suitable methods, the quality and modeling issues of the freely available experimental data are a particular focus of current discussions, carried out for example in the Expert Group on Uncertainty Analysis for Criticality Safety Assessment (UACSA) of the OECD-NEA Nuclear Science Committee. The same committee also compiles and publishes the freely available experimental data in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. Most of these experiments were performed as series and might share parts of their experimental setups, leading to correlated results. The quality of the determination of these correlations and of the underlying covariance data depends strongly on the quality of the documentation of the experiments.

  2. Design of a pre-collimator system for neutronics benchmark experiment

    International Nuclear Information System (INIS)

    Cai Xinggang; Liu Jiantao; Nie Yangbo; Bao Jie; Ruan Xichao; Lu Yanxia

    2013-01-01

    Benchmark experiments are an important means of testing the reliability and accuracy of evaluated nuclear data, and the effect/background ratio is an important parameter for weighing the quality of experimental data. In order to obtain higher effect/background ratios, a pre-collimator system was designed for the benchmark experiment. This system consists mainly of a pre-collimator and a shadow cone. The MCNP-4C code was used to simulate the background spectra under various conditions; the results show that the pre-collimator system yields a marked improvement in the effect/background ratios. (authors)

  3. TRX and UO2 criticality benchmarks with SAM-CE

    International Nuclear Information System (INIS)

    Beer, M.; Troubetzkoy, E.S.; Lichtenstein, H.; Rose, P.F.

    1980-01-01

    A set of thermal reactor benchmark calculations with SAM-CE, conducted at both MAGI and BNL, is described. Their purpose was both validation of the SAM-CE reactor eigenvalue capability developed by MAGI and a substantial contribution to the data testing of both the ENDF/B-IV and ENDF/B-V libraries. This experience also resulted in increased calculational efficiency of the code, and an example is given. The benchmark analysis included the TRX-1 infinite cell using both ENDF/B-IV and ENDF/B-V cross-section sets and calculations using ENDF/B-IV of the TRX-1 full core and the TRX-2 cell. BAPL-UO2-1 calculations were conducted for the cell using both ENDF/B-IV and ENDF/B-V and for the full core with ENDF/B-V.

  4. Sensitivity analysis of critical experiments with evaluated nuclear data libraries

    International Nuclear Information System (INIS)

    Fujiwara, D.; Kosaka, S.

    2008-01-01

    Criticality benchmark testing was performed with evaluated nuclear data libraries for thermal, low-enriched uranium fuel rod applications. C/E values for keff were calculated with the continuous-energy Monte Carlo code MVP2 and its libraries generated from ENDF/B-VI.8, ENDF/B-VII.0, JENDL-3.3 and JEFF-3.1. Subsequently, the observed keff discrepancies between libraries were decomposed to identify their sources in the nuclear data libraries using a sensitivity analysis technique. The obtained sensitivity profiles are also utilized to estimate the adequacy of cold critical experiments to the boiling water reactor under hot operating conditions. (authors)
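The decomposition described above typically rests on the first-order perturbation relation Δk/k ≈ Σ_i S_i (Δσ_i/σ_i), where S_i is the sensitivity coefficient of keff to reaction i. A minimal sketch of this bookkeeping, with illustrative coefficients rather than values from the paper:

```python
# First-order decomposition of a keff difference between two libraries using
# sensitivity coefficients S = (dk/k)/(dσ/σ). All numbers are invented for illustration.
sensitivities = {           # energy-integrated S per nuclide/reaction
    "U-235 fission": 0.35,
    "U-238 capture": -0.12,
    "H-1 elastic": 0.08,
}
rel_xs_change = {           # relative cross-section change between libraries, Δσ/σ
    "U-235 fission": 0.002,
    "U-238 capture": -0.005,
    "H-1 elastic": 0.001,
}

def delta_k_over_k(sens, dxs):
    """Estimate Δk/k as the sum of S_i * (Δσ_i/σ_i) over reactions."""
    return sum(sens[r] * dxs[r] for r in sens)

dk = delta_k_over_k(sensitivities, rel_xs_change)
print(f"Estimated Δk/k = {dk:.5f} ({dk * 1e5:.0f} pcm)")
```

Summing the individual terms (rather than only the total) is what attributes the observed library-to-library discrepancy to specific nuclides and reactions.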

  5. Critical Assessment of Metagenome Interpretation – a benchmark of computational metagenomics software

    Science.gov (United States)

    Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D.; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z.; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J.; Chia, Burton K. H.; Denis, Bertrand; Froula, Jeff L.; Wang, Zhong; Egan, Robert; Kang, Dongwan Don; Cook, Jeffrey J.; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W.; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z.; Cuevas, Daniel A.; Edwards, Robert A.; Saha, Surya; Piro, Vitor C.; Renard, Bernhard Y.; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C.; Woyke, Tanja; Vorholt, Julia A.; Schulze-Lefert, Paul; Rubin, Edward M.; Darling, Aaron E.; Rattei, Thomas; McHardy, Alice C.

    2018-01-01

    In metagenome analysis, computational methods for assembly, taxonomic profiling and binning are key components facilitating downstream biological data interpretation. However, a lack of consensus about benchmarking datasets and evaluation metrics complicates proper performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on datasets of unprecedented complexity and realism. Benchmark metagenomes were generated from ~700 newly sequenced microorganisms and ~600 novel viruses and plasmids, including genomes with varying degrees of relatedness to each other and to publicly available ones and representing common experimental setups. Across all datasets, assembly and genome binning programs performed well for species represented by individual genomes, while performance was substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below the family level. Parameter settings substantially impacted performance, underscoring the importance of program reproducibility. While highlighting current challenges in computational metagenomics, the CAMI results provide a roadmap for software selection to answer specific research questions. PMID:28967888

  6. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins , Robert

    2010-01-01

    Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...

  7. RB reactor as the U-D2O benchmark criticality system

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

    From the rich and valuable database of 580 different reactor cores formed up to now in the RB nuclear reactor, a selected and well recorded set has been carefully chosen and is preliminarily proposed as a new uranium-heavy water benchmark criticality system for validation of reactor design computer codes and data libraries. The first results of validation of the MCNP code and the adjoining neutron cross-section libraries are presented in this paper. (author)

  8. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    International Nuclear Information System (INIS)

    Bess, John D.; Montierth, Leland; Köberl, Oliver

    2014-01-01

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the 235U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ), except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues but within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
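The "within 3σ" comparisons quoted above reduce to expressing the difference between a calculated and a benchmark keff in units of their combined uncertainty. A minimal sketch, with illustrative numbers rather than the HTR-PROTEUS results:

```python
import math

def sigma_deviation(k_calc, u_calc, k_bench, u_bench):
    """Return (k_calc - k_bench) in units of the combined 1σ uncertainty."""
    combined = math.sqrt(u_calc**2 + u_bench**2)
    return (k_calc - k_bench) / combined

# Invented values: a Monte Carlo result with small statistical uncertainty
# compared against a benchmark keff with a larger evaluated uncertainty.
z = sigma_deviation(k_calc=1.0095, u_calc=0.0002, k_bench=1.0024, u_bench=0.0035)
print(f"deviation = {z:+.2f} sigma -> {'within' if abs(z) <= 3 else 'outside'} 3 sigma")
```

Because the Monte Carlo statistical uncertainty is usually much smaller than the benchmark uncertainty, the combined σ is dominated by the evaluated experimental uncertainty.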

  9. INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori

    2007-04-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported at the International Conference on Nuclear Data for Science and Technology (ND-2004) in Santa Fe, New Mexico. Since that time the number and type of integral benchmarks have increased significantly. Included in the ICSBEP Handbook are criticality-alarm/shielding and fundamental physics benchmarks in addition to the traditional critical/subcritical benchmark data. Since ND-2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. The IRPhEP is patterned after the ICSBEP but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous-type measurements in addition to the critical configuration. The status of these two projects is discussed and selected benchmarks are highlighted in this paper.

  10. Benchmark experiment on vanadium assembly with D-T neutrons. In-situ measurement

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio; Kasugai, Yoshimi; Konno, Chikara; Wada, Masayuki; Oyama, Yukio; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Murata, Isao; Kokooo; Takahashi, Akito

    1998-03-01

    Fusion neutronics benchmark experimental data on vanadium were obtained for neutrons over almost the entire energy range, as well as for secondary gamma-rays. Benchmark calculations for the experiment were performed to investigate the validity of recent nuclear data files, i.e., JENDL Fusion File, FENDL/E-1.0 and EFF-3. (author)

  11. Overview of the 2014 Edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook)

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; J. Blair Briggs; Jim Gulliford; Ian Hill

    2014-10-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) is a widely recognized world class program. The work of the IRPhEP is documented in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Integral data from the IRPhEP Handbook are used by reactor safety and design, nuclear data, criticality safety, and analytical methods development specialists worldwide to perform necessary validations of their calculational techniques. The IRPhEP Handbook is among the most frequently quoted references in the nuclear industry and is expected to be a valuable resource for decades to come.

  12. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    International Nuclear Information System (INIS)

    Bess, John D.; Fujimoto, Nozomu

    2014-01-01

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments

  13. Review of recent benchmark experiments on integral test for high energy nuclear data evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Nakashima, Hiroshi; Tanaka, Susumu; Konno, Chikara; Fukahori, Tokio; Hayashi, Katsumi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-11-01

    A survey of recent benchmark experiments relevant to integral testing for high-energy nuclear data evaluation was carried out as part of the work of the Task Force on JENDL High Energy File Integral Evaluation (JHEFIE). In this paper the results are compiled and the status of recent benchmark experiments is described. (author)

  14. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  15. Critical experiments with 4.31 wt % 235U-enriched UO2 rods in highly borated water lattices

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1982-08-01

    A series of critical experiments was performed with 4.31 wt% 235U-enriched UO2 fuel rods immersed in water containing various concentrations of boron ranging up to 2.55 g/l. The boron was added in the form of boric acid (H3BO3). Critical experimental data were obtained for two different lattice pitches wherein the water-to-uranium oxide volume ratios were 1.59 and 1.09. The experiments provide benchmarks on heavily borated systems for use in validating calculational techniques employed in analyzing fuel shipping casks and spent fuel storage systems that may utilize boron for criticality control.

  16. Benchmark Evaluation of Plutonium Nitrate Solution Arrays

    International Nuclear Information System (INIS)

    Marshall, M.A.; Bess, J.D.

    2011-01-01

    In October and November of 1981, thirteen approach-to-critical experiments were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and the critical array spacing with a tight-fitting Plexiglas reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity (H+) of 5.1. The plutonium was of low 240Pu content (2.9 wt.%). These experiments were performed to fill a gap in experimental data regarding criticality limits for storing and handling arrays of Pu solution in reprocessing facilities. Of the thirteen approach-to-critical experiments, eleven resulted in extrapolations to critical configurations. Four of the approaches were extrapolated to the critical number of bottles; these were not evaluated further due to the large uncertainty associated with the modeling of a fraction of a bottle. The remaining seven approaches were extrapolated to the critical array spacing of 3 x 4 and 4 x 4 arrays; these seven critical configurations were evaluated for inclusion as acceptable benchmark experiments in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Detailed and simple models of these configurations were created, and the associated bias of the simplifications was determined to range between 0.00116 and 0.00162 ± 0.00006 Δkeff. Monte Carlo analysis of all models was completed using MCNP5 with ENDF/B-VII.0 neutron cross-section libraries. A thorough uncertainty analysis of all critical, geometric, and material parameters was performed using parameter
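Approach-to-critical experiments such as these are conventionally analyzed by inverse-multiplication (1/M) extrapolation: the normalized inverse count rate falls toward zero as the configuration approaches critical, and a linear fit locates the critical parameter. A minimal sketch with invented count rates, not the PNL data:

```python
# Inverse-multiplication (1/M) extrapolation for an approach-to-critical experiment.
# Spacings and count rates are invented for illustration.
spacings_cm = [9.0, 8.0, 7.0, 6.5]          # array spacing at each step
count_rates = [120.0, 180.0, 360.0, 600.0]  # detector count rate at each step

# 1/M normalized to the first (most subcritical) configuration.
inv_m = [count_rates[0] / c for c in count_rates]

# Least-squares line through the points; criticality is where 1/M reaches zero.
n = len(spacings_cm)
sx, sy = sum(spacings_cm), sum(inv_m)
sxx = sum(x * x for x in spacings_cm)
sxy = sum(x * y for x, y in zip(spacings_cm, inv_m))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
critical_spacing = -intercept / slope
print(f"Extrapolated critical spacing: {critical_spacing:.2f} cm")
```

In practice only the last few (most nearly linear) points are used for the fit, and the next step is chosen conservatively short of the extrapolated critical value.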

  17. The difference between traditional experiments and CFD validation benchmark experiments

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Barton L., E-mail: barton.smith@usu.edu

    2017-02-15

    Computational Fluid Dynamics provides attractive features for design, and perhaps licensing, of nuclear power plants. The most important of these features is low cost compared to experiments. However, uncertainty of CFD calculations must accompany these calculations in order for the results to be useful for important decision making. In order to properly assess the uncertainty of a CFD calculation, it must be “validated” against experimental data. Unfortunately, traditional “discovery” experiments are normally ill-suited to provide all of the information necessary for the validation exercise. Traditionally, experiments are performed to discover new physics, determine model parameters, or to test designs. This article will describe a new type of experiment; one that is designed and carried out with the specific purpose of providing Computational Fluid Dynamics (CFD) validation benchmark data. We will demonstrate that the goals of traditional experiments and validation experiments are often in conflict, making use of traditional experimental results problematic and leading directly to larger predictive uncertainty of the CFD model.
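The validation exercise the abstract refers to can be sketched in the style of the ASME V&V 20 framework: compare the comparison error E between simulation and experiment against a validation uncertainty that combines numerical, input, and experimental contributions. All numbers below are illustrative, not from this paper.

```python
import math

# Hypothetical validation comparison for one system-response quantity
# (e.g., an outlet temperature in kelvin); all values are illustrative.
S = 341.2   # simulation result, K
D = 339.9   # experimental measurement, K

u_num   = 0.8  # numerical (discretization) uncertainty, K
u_input = 1.1  # uncertainty propagated from uncertain model inputs, K
u_D     = 0.9  # experimental measurement uncertainty, K

E = S - D                                          # comparison error
u_val = math.sqrt(u_num**2 + u_input**2 + u_D**2)  # validation uncertainty

# |E| <= u_val suggests any model-form error is hidden within the
# noise level of the validation exercise; |E| >> u_val indicates it.
print(f"E = {E:.2f} K, u_val = {u_val:.2f} K")
```

Note that u_input and u_D are precisely the quantities a traditional "discovery" experiment rarely reports, which is the gap a purpose-built validation benchmark experiment is meant to close.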

  18. The difference between traditional experiments and CFD validation benchmark experiments

    International Nuclear Information System (INIS)

    Smith, Barton L.

    2017-01-01

    Computational Fluid Dynamics provides attractive features for design, and perhaps licensing, of nuclear power plants. The most important of these features is low cost compared to experiments. However, uncertainty of CFD calculations must accompany these calculations in order for the results to be useful for important decision making. In order to properly assess the uncertainty of a CFD calculation, it must be “validated” against experimental data. Unfortunately, traditional “discovery” experiments are normally ill-suited to provide all of the information necessary for the validation exercise. Traditionally, experiments are performed to discover new physics, determine model parameters, or to test designs. This article will describe a new type of experiment; one that is designed and carried out with the specific purpose of providing Computational Fluid Dynamics (CFD) validation benchmark data. We will demonstrate that the goals of traditional experiments and validation experiments are often in conflict, making use of traditional experimental results problematic and leading directly to larger predictive uncertainty of the CFD model.

  19. Experiments for IFR fuel criticality in ZPPR-21

    International Nuclear Information System (INIS)

    Olsen, D.N.; Collins, P.J.; Carpenter, S.G.

    1991-01-01

    A series of benchmark measurements was made in ZPPR-21 to validate criticality calculations for fuel processing operations for Argonne's Integral Fast Reactor program. Six different mixtures of Pu/U/Zr fuel with a graphite reflector were built and criticality was determined by period measurements. The assemblies were isolated from room return neutrons by a lithium hydride shield. Analysis was done using a fully-detailed model with the VIM Monte Carlo code and ENDF/B-V.2 data. Sensitivity analysis was used to validate the measurements against other benchmark data. A simple RZ model was defined and used with the KENO code. Corrections to the RZ model were provided by the VIM calculations with low statistical uncertainty. (Author)

  20. Analysis on First Criticality Benchmark Calculation of HTR-10 Core

    International Nuclear Information System (INIS)

    Zuhair; Ferhat-Aziz; As-Natio-Lasman

    2000-01-01

    HTR-10 is a graphite-moderated, helium-gas-cooled pebble bed reactor with an average helium outlet temperature of 700 °C and a thermal power of 10 MW. The first criticality benchmark problem of HTR-10 in this paper includes calculating the loading number of nuclear fuel, in the form of UO2 balls with a U-235 enrichment of 17%, for first criticality under a helium atmosphere at a core temperature of 20 °C, and calculating the effective multiplication factor (keff) of the full core (5 m3) under a helium atmosphere at various core temperatures. The group constants of the fuel mixture, moderator, and reflector materials were generated with WIMS/D4 using a spherical model and 4 neutron energy groups. The critical core height of 150.1 cm obtained from CITATION in 2-D R-Z reactor geometry lies within the range of values calculated by INET (China), JAERI (Japan), BATAN (Indonesia), and OKBM (Russia). The keff results for the full core at various temperatures show that the HTR-10 has a negative temperature coefficient of reactivity. (author)
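The negative temperature coefficient conclusion follows from keff calculated at two or more temperatures. A minimal sketch of how such a coefficient is extracted, with hypothetical keff values (not the paper's results):

```python
# Hypothetical keff values for an HTR-10-like core at two temperatures,
# illustrating an isothermal temperature-coefficient estimate.
T1, k1 = 293.0, 1.0125   # K, keff (illustrative)
T2, k2 = 393.0, 1.0050

# Reactivity rho = (k - 1)/k; the coefficient is the reactivity
# change per unit temperature change.
rho1 = (k1 - 1.0) / k1
rho2 = (k2 - 1.0) / k2
alpha = (rho2 - rho1) / (T2 - T1)   # per kelvin

print(f"alpha = {alpha:.2e} /K")
assert alpha < 0   # keff falling with temperature => negative feedback
```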

  1. Two-dimensional benchmark calculations for PNL-30 through PNL-35

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1997-01-01

    Interest in critical experiments with lattices of mixed-oxide (MOX) fuel pins has been revived by the possibility that light water reactors will be used for disposition of weapons-grade plutonium. A series of six experiments with MOX lattices, designated PNL-30 through PNL-35, was performed at Pacific Northwest Laboratories in 1975 and 1976, and a set of benchmark specifications for these experiments was subsequently adopted by the Cross Section Evaluation Working Group (CSEWG). Although there appear to be some problems with these experiments, they remain the only CSEWG benchmarks for MOX lattices. The number of fuel pins in these experiments is relatively low, corresponding to fewer than four typical pressurized-water-reactor fuel assemblies. Accordingly, they are more appropriate as benchmarks for lattice-physics codes than for reactor-core simulator codes. Unfortunately, the CSEWG specifications retain the full three-dimensional (3D) detail of the experiments, while lattice-physics codes almost universally are limited to two dimensions (2D). This paper proposes an extension of the benchmark specifications to include a 2D model, and it justifies that extension by comparing results from the MCNP Monte Carlo code for the 2D and 3D specifications.

  2. Summary of ORSphere critical and reactor physics measurements

    Directory of Open Access Journals (Sweden)

    Marshall Margaret A.

    2017-01-01

    Full Text Available: In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. This critical configuration has been evaluated. Preliminary results were presented at ND2013. Since then, the evaluation was finalized and judged to be an acceptable benchmark experiment for the International Criticality Safety Benchmark Experiment Project (ICSBEP). Additionally, reactor physics measurements were performed to determine surface button worths, central void worth, delayed neutron fraction, prompt neutron decay constant, fission density and neutron importance. These measurements have been evaluated and found to be acceptable experiments and are discussed in full detail in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. The purpose of this paper is to summarize all of the critical and reactor physics measurement evaluations.

  3. Summary of ORSphere critical and reactor physics measurements

    Science.gov (United States)

    Marshall, Margaret A.; Bess, John D.

    2017-09-01

    In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. This critical configuration has been evaluated. Preliminary results were presented at ND2013. Since then, the evaluation was finalized and judged to be an acceptable benchmark experiment for the International Criticality Safety Benchmark Experiment Project (ICSBEP). Additionally, reactor physics measurements were performed to determine surface button worths, central void worth, delayed neutron fraction, prompt neutron decay constant, fission density and neutron importance. These measurements have been evaluated and found to be acceptable experiments and are discussed in full detail in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. The purpose of this paper is to summarize all of the critical and reactor physics measurement evaluations.

  4. Production of neutron cross section library based on JENDL-4.0 to continuous-energy Monte Carlo code MVP and its application to criticality analysis of benchmark problems in the ICSBEP handbook

    International Nuclear Information System (INIS)

    Okumura, Keisuke; Nagaya, Yasunobu

    2011-09-01

    In May 2010, JENDL-4.0 was released from Japan Atomic Energy Agency as the updated Japanese Nuclear Data Library. It was processed by the nuclear data processing system LICEM, and an arbitrary-temperature neutron cross section library, MVPlib-nJ40, was produced for the neutron and photon transport calculation code MVP, which is based on the continuous-energy Monte Carlo method. The library contains neutron cross sections for 406 nuclides on the free gas model, thermal scattering cross sections, and cross sections of pseudo fission products for burn-up calculations with MVP. Criticality benchmark calculations were carried out with MVP and MVPlib-nJ40 for about 1,000 cases of critical experiments stored in the handbook of the International Criticality Safety Benchmark Evaluation Project (ICSBEP), which covers a wide variety of fuel materials, fuel forms, and neutron spectra. We report all comparison results (C/E values) of effective neutron multiplication factors between calculations and experiments to provide validation data for the prediction accuracy of JENDL-4.0 for criticalities. (author)
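C/E reporting of the kind described here usually reduces to simple statistics over the case set: the mean C/E (bias from unity, often quoted in pcm) and its spread. A minimal sketch with a handful of hypothetical C/E values (the real study covers about 1,000 cases):

```python
import statistics

# Hypothetical C/E (calculated/experimental) keff values for a few
# ICSBEP-style benchmark cases; these numbers are illustrative only.
c_over_e = [0.9982, 1.0005, 0.9991, 1.0018, 0.9996, 1.0009]

mean_ce = statistics.mean(c_over_e)
std_ce = statistics.stdev(c_over_e)          # sample standard deviation
bias_pcm = (mean_ce - 1.0) * 1e5             # deviation from unity in pcm

print(f"mean C/E = {mean_ce:.5f}, std = {std_ce:.5f}, bias = {bias_pcm:.1f} pcm")
```

In practice each C/E would also carry the experimental benchmark uncertainty and the Monte Carlo statistical uncertainty, and the bias would be broken down by benchmark category (fuel form, spectrum) rather than pooled.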

  5. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2012-01-01

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and Teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such

  6. X447 EBR-II Experiment Benchmark for Verification of Audit Code of SFR Metal Fuel

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Yong Won; Bae, Moo-Hoon; Shin, Andong; Suh, Namduk [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2016-10-15

    At KINS (Korea Institute of Nuclear Safety), a project has been started to develop regulatory technology for the SFR system, including the fuel area, in preparation for audit calculations supporting the PGSFR licensing review. To evaluate fuel integrity and safety during irradiation, a fuel performance code must be used for the audit calculation. In this study, a benchmark analysis is performed to verify the new code system, using X447 EBR-II experiment data; in addition, a sensitivity analysis with respect to the coolant mass flux is performed. For LWR fuel performance modeling, various advanced models have been proposed and validated against abundant in-reactor test results; by contrast, because of the limited experience with SFR operation, the current understanding of SFR fuel behavior is limited. The fuel composition of the X447 assembly is U-10Zr, which PGSFR also uses in its initial phase, so the X447 EBR-II experiment was selected for the benchmark analysis. To prepare for PGSFR licensing, regulatory audit technologies for SFR must be secured. In terms of verification, the results of the benchmark and sensitivity analyses are considered reasonable.

  7. X447 EBR-II Experiment Benchmark for Verification of Audit Code of SFR Metal Fuel

    International Nuclear Information System (INIS)

    Choi, Yong Won; Bae, Moo-Hoon; Shin, Andong; Suh, Namduk

    2016-01-01

    At KINS (Korea Institute of Nuclear Safety), a project has been started to develop regulatory technology for the SFR system, including the fuel area, in preparation for audit calculations supporting the PGSFR licensing review. To evaluate fuel integrity and safety during irradiation, a fuel performance code must be used for the audit calculation. In this study, a benchmark analysis is performed to verify the new code system, using X447 EBR-II experiment data; in addition, a sensitivity analysis with respect to the coolant mass flux is performed. For LWR fuel performance modeling, various advanced models have been proposed and validated against abundant in-reactor test results; by contrast, because of the limited experience with SFR operation, the current understanding of SFR fuel behavior is limited. The fuel composition of the X447 assembly is U-10Zr, which PGSFR also uses in its initial phase, so the X447 EBR-II experiment was selected for the benchmark analysis. To prepare for PGSFR licensing, regulatory audit technologies for SFR must be secured. In terms of verification, the results of the benchmark and sensitivity analyses are considered reasonable.

  8. Solution High-Energy Burst Assembly (SHEBA) results from subprompt critical experiments with uranyl fluoride fuel

    International Nuclear Information System (INIS)

    Cappiello, C.C.; Butterfield, K.B.; Sanchez, R.G.

    1997-10-01

    The Solution High-Energy Burst Assembly (SHEBA) was originally constructed during 1980 and was designed to be a clean free-field geometry, right-circular, cylindrically symmetric critical assembly employing U(5%)O2F2 solution as fuel. A second version of SHEBA, employing the same fuel but equipped with a fuel pump and shielding pit, was commissioned in 1993. This report includes data and operating experience for the 1993 SHEBA only. Solution-fueled benchmark work focused on the development of experimental measurements for the characterization of SHEBA; a summary of the results is given. A description of the system and the experimental results is given in some detail in the report. Experiments were designed to: (1) study the behavior of nuclear excursions in a low-enrichment solution, (2) evaluate accidental criticality alarm detectors for fuel-processing facilities, (3) provide radiation spectra and dose measurements to benchmark radiation transport calculations on a low-enrichment solution system similar to centrifuge enrichment plants, and (4) provide radiation fields to calibrate personnel dosimetry. 15 refs., 37 figs., 10 tabs.

  9. Critical experiments analyses by using 70 energy group library based on ENDF/B-VI

    Energy Technology Data Exchange (ETDEWEB)

    Tahara, Yoshihisa; Matsumoto, Hideki [Mitsubishi Heavy Industries Ltd., Yokohama (Japan). Nuclear Energy Systems Engineering Center; Huria, H.C.; Ouisloumen, M.

    1998-03-01

    The newly developed 70-group library has been validated by comparing kinf values from the continuous-energy Monte Carlo code MCNP and the two-dimensional spectrum calculation code PHOENIX-CP, which employs the Discrete Angular Flux Method based on collision probabilities. The library has also been validated against a large number of critical experiments and numerical benchmarks for assemblies with MOX and Gd fuels. (author)

  10. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  11. Experiments for IFR fuel criticality in ZPPR-21

    International Nuclear Information System (INIS)

    Olsen, D.N.; Collins, P.J.; Carpenter, S.G.

    1991-01-01

    A series of benchmark measurements was made in ZPPR-21 to validate criticality calculations for fuel processing operations in Argonne's Integral Fast Reactor program. Six different mixtures of Pu/U/Zr fuel with a graphite reflector were built and criticality was determined by period measurements. The assemblies were isolated from room-return neutrons by a lithium hydride shield. Analysis was done using a fully-detailed model with the VIM Monte Carlo code and ENDF/B-V.2 data. Sensitivity analysis was used to validate the measurements against other benchmark data. A simple RZ model was defined and used with the KENO code. Corrections to the RZ model were provided by the VIM calculations with low statistical uncertainty. 7 refs., 5 figs., 5 tabs

  12. Summary of ORSphere Critical and Reactor Physics Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Marshall, Margaret A.; Bess, John D.

    2016-09-01

    In the early 1970s Dr. John T. Mihalczo (team leader), J. J. Lynn, and J. R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. This critical configuration has been evaluated. Preliminary results were presented at ND2013. Since then, the evaluation was finalized and judged to be an acceptable benchmark experiment for the International Criticality Safety Benchmark Experiment Project (ICSBEP). Additionally, reactor physics measurements were performed to determine surface button worths, central void worth, delayed neutron fraction, prompt neutron decay constant, fission density and neutron importance. These measurements have been evaluated and found to be acceptable experiments and are discussed in full detail in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. The purpose of this paper is to summarize all the critical and reactor physics measurement evaluations and, when possible, to compare them to GODIVA experiment results.
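Two of the quantities measured in these experiments, the delayed neutron fraction and the prompt neutron decay constant, are linked by point kinetics. A minimal sketch of that relation, with illustrative parameter values typical of a bare fast HEU metal assembly rather than the evaluated ORSphere results:

```python
# Point-kinetics relation between reactivity, delayed neutron fraction,
# prompt generation time, and the prompt neutron decay constant.
# Parameter values are illustrative, not the evaluated ORSphere data.
beta_eff = 0.0065   # effective delayed neutron fraction
Lambda = 6.0e-9     # prompt neutron generation time, s (fast HEU metal)
rho = 0.0           # reactivity at delayed critical

# alpha = (rho - beta_eff) / Lambda; at delayed critical this reduces
# to -beta_eff / Lambda, the quantity extracted from Rossi-alpha-type
# measurements.
alpha = (rho - beta_eff) / Lambda
print(f"alpha at delayed critical = {alpha:.3e} 1/s")
```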

  13. OECD/NEA burnup credit criticality benchmarks phase IIIA: Criticality calculations of BWR spent fuel assemblies in storage and transport

    Energy Technology Data Exchange (ETDEWEB)

    Okuno, Hiroshi; Naito, Yoshitaka [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ando, Yoshihira [Toshiba Corp., Kawasaki, Kanagawa (Japan)

    2000-09-01

    The report describes the final results of Phase IIIA Benchmarks conducted by the Burnup Credit Criticality Calculation Working Group under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD/NEA). The benchmarks are intended to confirm the predictive capability of the current computer code and data library combinations for the neutron multiplication factor (keff) of a layer of irradiated BWR fuel assembly array model. In total 22 benchmark problems are proposed for calculations of keff. The effects of following parameters are investigated: cooling time, inclusion/exclusion of FP nuclides and axial burnup profile, and inclusion of axial profile of void fraction or constant void fractions during burnup. Axial profiles of fractional fission rates are further requested for five cases out of the 22 problems. Twenty-one sets of results are presented, contributed by 17 institutes from 9 countries. The relative dispersion of keff values calculated by the participants from the mean value is almost within the band of ±1% Δk/k. The deviations from the averaged calculated fission rate profiles are found to be within ±5% for most cases. (author)

  14. Reactor physics tests and benchmark analyses of STACY

    International Nuclear Information System (INIS)

    Miyoshi, Yoshinori; Umano, Takuya

    1996-01-01

    The Static Experiment Critical Facility, STACY, in the Nuclear Fuel Cycle Safety Engineering Research Facility (NUCEF) is a solution-type critical facility built to accumulate fundamental criticality data on uranyl nitrate solution, plutonium nitrate solution and their mixture. A series of critical experiments has been performed for 10 wt% enriched uranyl nitrate solution using a cylindrical core tank. In these experiments, systematic data on the critical height, differential reactivity of the fuel solution, kinetic parameters and reactor power were measured while changing the uranium concentration of the fuel solution from 313 gU/l to 225 gU/l. Critical data from the first series of experiments for the basic core are reported in this paper for evaluating the accuracy of criticality safety calculation codes. Benchmark calculations of the neutron multiplication factor keff for the critical condition were made using the neutron transport code TWOTRAN in the SRAC system and the continuous-energy Monte Carlo code MCNP 4A with the Japanese evaluated nuclear data library JENDL 3.2. (J.P.N.)

  15. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  16. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  17. Intermediate neutron spectrum problems and the intermediate neutron spectrum experiment

    International Nuclear Information System (INIS)

    Jaegers, P.J.; Sanchez, R.G.

    1996-01-01

    Criticality benchmark data for intermediate-energy-spectrum systems do not exist. These systems are dominated by scattering and fission events induced by neutrons with energies between 1 eV and 1 MeV. Nuclear data uncertainties have been reported for such systems that cannot be resolved without benchmark critical experiments. Intermediate-energy-spectrum systems have been proposed for the geological disposition of surplus fissile materials. Without proper benchmarking of the nuclear data in the intermediate energy spectrum, adequate criticality safety margins cannot be guaranteed. The Zeus critical experiment, now under construction, will provide this necessary benchmark data

  18. Tests of HAMMER (original) and HAMMER-TECHNION systems with critical experiments

    International Nuclear Information System (INIS)

    Santos, A. dos

    1986-01-01

    The performance of the reactor cell codes HAMMER (original) and HAMMER-TECHNION was tested against experimental results of critical benchmarks. For consistency of methodology, only the NIT (Nordheim Integral Technique) option was used in HAMMER-TECHNION, so that all differences found in the analyses with these systems can be attributed to their basic nuclear data libraries. Five critical benchmarks were used in this study. Surprisingly, the performance of the original HAMMER system was better than that of HAMMER-TECHNION. (Author) [pt

  19. Critical experiment program of heterogeneous core composed for LWR fuel rods and low enriched uranyl nitrate solution

    International Nuclear Information System (INIS)

    Miyoshi, Yoshinori; Yamamoto, Toshihiro; Watanabe, Shouichi; Nakamura, Takemi

    2003-01-01

    In order to simulate the criticality characteristics of a dissolver in a reprocessing plant, a critical experiment program with heterogeneous cores is under way at the Static Critical Experimental Facility (STACY) of the Japan Atomic Energy Research Institute (JAERI). The experimental system is composed of a 5 w/o enriched PWR-type fuel rod array immersed in 6 w/o enriched uranyl nitrate solution. The first series of experiments comprises basic benchmark experiments on fundamental critical data, intended to validate criticality calculation codes for the 'general-form system' classified in the Japanese Criticality Safety Handbook (JCSHB). The second series of experiments concerns the neutron absorber effects of fission products related to burn-up credit Level 2. To demonstrate the reactivity effects of fission products, the reactivity effects of natural elements such as Sm, Nd and Eu, and of 103Rh and 133Cs, dissolved in the nitrate solution are to be measured. The objective of the third series of experiments is to validate the effect of gadolinium as a soluble neutron poison. Temperature coefficients and kinetic parameters are also studied, since these parameters are important for evaluating the transient behavior of a criticality accident. (author)
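The reactivity effect of a dissolved absorber, as measured in the second and third experiment series, is commonly quoted as a reactivity worth derived from keff with and without the poison. A minimal sketch with hypothetical keff values:

```python
# Hypothetical keff values with and without a dissolved neutron absorber
# (e.g., Sm, Gd), illustrating how a reactivity worth is quoted.
k_ref = 1.0000        # reference solution core (illustrative)
k_poisoned = 0.9920   # same core with absorber in solution (illustrative)

# Worth in pcm: difference of reactivities, with rho = 1 - 1/k.
worth_pcm = (1.0 / k_ref - 1.0 / k_poisoned) * 1e5

print(f"absorber worth = {worth_pcm:.0f} pcm")   # negative: poison removes reactivity
```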

  20. A rod-airfoil experiment as a benchmark for broadband noise modeling

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, M.C. [Ecole Centrale de Lyon, Laboratoire de Mecanique des Fluides et d' Acoustique, Ecully Cedex (France); Universite Claude Bernard/Lyon I, Villeurbanne Cedex (France); Boudet, J.; Michard, M. [Ecole Centrale de Lyon, Laboratoire de Mecanique des Fluides et d' Acoustique, Ecully Cedex (France); Casalino, D. [Ecole Centrale de Lyon, Laboratoire de Mecanique des Fluides et d' Acoustique, Ecully Cedex (France); Fluorem SAS, Ecully Cedex (France)

    2005-07-01

    A low Mach number rod-airfoil experiment is shown to be a good benchmark for numerical and theoretical broadband noise modeling. The benchmarking approach is applied to a sound computation from a 2D unsteady Reynolds-averaged Navier-Stokes (U-RANS) flow field, where 3D effects are partially compensated for by a spanwise statistical model, and to a 3D large eddy simulation. The experiment was conducted in the large anechoic wind tunnel of the Ecole Centrale de Lyon. Measurements included particle image velocimetry (PIV) around the airfoil, single hot wire, wall pressure coherence, and far-field pressure. These measurements highlight the strong 3D effects responsible for spectral broadening around the rod vortex shedding frequency in the subcritical regime, and the dominance of the noise generated around the airfoil leading edge. The benchmarking approach is illustrated by two examples: the validation of a stochastic noise generation model applied to a 2D U-RANS computation, and the assessment of a 3D LES computation using a new subgrid scale (SGS) model coupled to an advanced-time Ffowcs Williams and Hawkings sound computation. (orig.)

  1. Initialization bias suppression in iterative Monte Carlo calculations: benchmarks on criticality calculation

    International Nuclear Information System (INIS)

    Richet, Y.; Jacquet, O.; Bay, X.

    2005-01-01

    The accuracy of an iterative Monte Carlo calculation requires the convergence of the simulation output process. The present paper deals with a post-processing algorithm, applied to criticality calculations, that suppresses the transient due to initialization. It should be noted that this initial transient suppression aims only at obtaining a stationary output series; the convergence of the calculation must then be guaranteed independently. The transient suppression algorithm consists of repeatedly truncating the first observations of the output process, for as long as a stationarity test based on Brownian bridge theory is negative. This transient suppression method was previously tuned on a simplified model of criticality calculations, whereas this paper focuses on its efficiency on real criticality calculations. The efficiency test is based on four benchmarks with strong source convergence problems: 1) a checkerboard storage of fuel assemblies, 2) a pin cell array with irradiated fuel, 3) three one-dimensional thick slabs, and 4) an array of interacting fuel spheres. It appears that the transient suppression method needs to be validated more widely on real criticality calculations before any blind use as a post-processing step in criticality codes
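The repeated-truncation scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the cumulative-sum bridge statistic, the asymptotic 5% critical value of 1.36, and the 10% truncation step are all assumptions made for the sketch.

```python
import numpy as np

def bridge_statistic(x):
    """Sup-norm of the scaled cumulative-sum bridge of the series x."""
    n = len(x)
    s = np.cumsum(x - x.mean())          # bridge: starts and ends near zero
    sigma = x.std(ddof=1)
    return np.max(np.abs(s)) / (sigma * np.sqrt(n))

def truncate_transient(x, crit=1.36, step=0.1):
    """Repeatedly drop the first `step` fraction of observations while the
    bridge statistic exceeds the (assumed) 5% critical value."""
    x = np.asarray(x, dtype=float)
    while len(x) > 20 and bridge_statistic(x) > crit:
        x = x[int(step * len(x)):]
    return x

# Toy k-eff-per-cycle series with an initial source-convergence transient.
rng = np.random.default_rng(0)
keff = np.concatenate([np.linspace(0.90, 1.00, 200),
                       1.00 + 0.002 * rng.standard_normal(800)])
stationary = truncate_transient(keff)
```

The truncated series can then be used for the usual mean and confidence-interval estimates, with convergence still to be verified independently, as the abstract stresses.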

  2. Benefits of the delta K of depletion benchmarks for burnup credit validation

    International Nuclear Information System (INIS)

    Lancaster, D.; Machiels, A.

    2012-01-01

    Pressurized Water Reactor (PWR) burnup credit validation is demonstrated using the benchmarks for quantifying fuel reactivity decrements published as 'Benchmarks for Quantifying Fuel Reactivity Depletion Uncertainty,' EPRI Report 1022909 (August 2011). This demonstration uses the depletion module TRITON available in the SCALE 6.1 code system, followed by criticality calculations using KENO-Va. The difference between the predicted depletion reactivity and the benchmark's depletion reactivity is a bias for the criticality calculations, and the uncertainty in the benchmarks is the depletion reactivity uncertainty. This depletion bias and uncertainty are used with the bias and uncertainty from fresh UO 2 critical experiments to determine the criticality safety limits on the neutron multiplication factor, k eff . The analysis shows that SCALE 6.1 with the ENDF/B-VII 238-group cross-section library supports the use of a depletion bias of only 0.0015 in delta k if cooling is ignored and 0.0025 if cooling is credited. The uncertainty in the depletion bias is 0.0064. Reliance on the ENDF/B-V cross-section library produces much larger disagreement with the benchmarks. The analysis covers numerous combinations of depletion and criticality options. In all cases, the historical uncertainty of 5% of the delta k of depletion (the 'Kopp memo') was shown to be conservative for fuel with more than 30 GWD/MTU burnup. Since this historically assumed burnup uncertainty is not a function of burnup, the Kopp memo's recommended bias and uncertainty may be exceeded at low burnups, but its absolute magnitude is small. (authors)
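How such bias and uncertainty terms feed into a k eff safety limit can be illustrated with a short calculation. The depletion numbers are taken from the abstract; the fresh-fuel terms, the root-sum-square combination of uncertainties, and the 0.95 limit are illustrative assumptions, not values from the report.

```python
# Illustrative combination of validation terms for a k-eff safety limit.
depletion_bias = 0.0025      # delta-k, cooling credited (from the abstract)
depletion_unc  = 0.0064      # depletion reactivity uncertainty (from the abstract)
fresh_bias     = 0.0020      # hypothetical bias from fresh-UO2 critical experiments
fresh_unc      = 0.0050      # hypothetical fresh-fuel validation uncertainty

total_bias = depletion_bias + fresh_bias
# Independent uncertainties are commonly combined in quadrature (an assumption here).
total_unc = (depletion_unc**2 + fresh_unc**2) ** 0.5

k_limit = 0.95               # typical regulatory limit for spent-fuel applications
max_calculated_keff = k_limit - total_bias - total_unc
print(round(max_calculated_keff, 4))  # -> 0.9374
```

With these numbers, a calculated k eff would have to stay below about 0.9374 to satisfy the limit once both biases and the combined uncertainty are applied.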

  3. Benchmark experiment on molybdenum with graphite by using DT neutrons at JAEA/FNS

    Energy Technology Data Exchange (ETDEWEB)

    Ohta, Masayuki, E-mail: ohta.masayuki@qst.go.jp [National Institutes for Quantum and Radiological Science and Technology, 2-166 Oaza-Obuchi-Aza-Omotedate, Rokkasho-mura, Kamikita-gun, Aomori (Japan); Kwon, Saerom; Sato, Satoshi [National Institutes for Quantum and Radiological Science and Technology, 2-166 Oaza-Obuchi-Aza-Omotedate, Rokkasho-mura, Kamikita-gun, Aomori (Japan); Konno, Chikara [Japan Atomic Energy Agency, 2-4 Shirakata-Shirane, Tokai-mura, Naka-gun, Ibaraki (Japan); Ochiai, Kentaro [National Institutes for Quantum and Radiological Science and Technology, 2-166 Oaza-Obuchi-Aza-Omotedate, Rokkasho-mura, Kamikita-gun, Aomori (Japan)

    2017-01-15

    Highlights: • A new benchmark experiment on molybdenum was conducted with DT neutrons at JAEA/FNS. • Dosimetry reaction and fission rates were measured in the molybdenum assembly. • Calculated results with the MCNP5 code were compared with the measured ones. • A problem with the capture cross-section data of molybdenum was pointed out. - Abstract: In our previous benchmark experiment on Mo at JAEA/FNS, we found problems with the (n,2n) and (n,γ) reaction cross sections of Mo in JENDL-4.0 above a few hundred eV. We perform a new benchmark experiment on Mo with a Mo assembly covered with graphite and Li{sub 2}O blocks in order to validate the nuclear data of Mo in a lower energy region than in the previous experiment. Several dosimetry reaction and fission rates are measured and compared with ones calculated with the MCNP5-1.40 code and the recent nuclear data libraries ENDF/B-VII.1, JEFF-3.2, and JENDL-4.0. It is suggested that the (n,γ) reaction cross section of {sup 95}Mo should be larger in the tail region below the large resonance at 45 eV in these nuclear data libraries.

  4. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...

  5. Nuclear criticality predictability

    International Nuclear Information System (INIS)

    Briggs, J.B.

    1999-01-01

    As a result of these efforts, a large portion of the tedious and redundant research and processing of critical experiment data has been eliminated. The necessary step in criticality safety analyses of validating computer codes with benchmark critical data is greatly streamlined, and valuable criticality safety experimental data are preserved. Criticality safety personnel in 31 different countries are now using the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'. Much has been accomplished by the work of the ICSBEP. However, evaluation and documentation represent only one element of a successful Nuclear Criticality Safety Predictability Program, and this element only exists as a separate entity because this work was not completed in conjunction with the experimentation process. I believe, however, that the work of the ICSBEP has also served to unify the other elements of nuclear criticality predictability. All elements are interrelated, but for a time it seemed that communication between these elements was not adequate. The ICSBEP has highlighted gaps in data, has retrieved lost data, has helped to identify errors in cross-section processing codes, and has helped bring the international criticality safety community together in a common cause as true friends and colleagues. It has been a privilege to associate with those who work so diligently to make the project a success. (J.P.N.)

  6. Analysis of Kyoto University reactor physics critical experiments using NCNSRC calculation methodology

    International Nuclear Information System (INIS)

    Amin, E.; Hathout, A.M.; Shouman, S.

    1997-01-01

    The Kyoto University reactor physics experiments on the university critical assembly are used to validate the NCNSRC calculation methodology. This methodology has two lines, diffusion and Monte Carlo. The diffusion line includes the code WIMSD4 for cell calculations and the two-dimensional diffusion code DIXY2 for core calculations. The transport line uses the MULTIKENO code, VAX version. Analysis is performed for the criticality and the temperature coefficients of reactivity (TCR) of the light-water moderated and reflected cores among the different cores utilized in the experiments. The results for both the eigenvalue and the TCR approximately reproduced the experimental and theoretical Kyoto results. However, some conclusions are drawn about the adequacy of the standard WIMSD4 library. This paper is an extension of the NCNSRC efforts to assess and validate computer tools and methods for both the Et-R R-1 and Et-MMpr-2 research reactors. 7 figs., 1 tab

  7. Identification of critical parameters for PEMFC stack performance characterization and control strategies for reliable and comparable stack benchmarking

    DEFF Research Database (Denmark)

    Mitzel, Jens; Gülzow, Erich; Kabza, Alexander

    2016-01-01

    This paper is focused on the identification of critical parameters and on the development of reliable methodologies to achieve comparable benchmark results. Possibilities for control sensor positioning and for parameter variation in sensitivity tests are discussed and recommended options for the ...

  8. Benchmark calculations for VENUS-2 MOX -fueled reactor dosimetry

    International Nuclear Information System (INIS)

    Kim, Jong Kung; Kim, Hong Chul; Shin, Chang Ho; Han, Chi Young; Na, Byung Chan

    2004-01-01

    As part of a Nuclear Energy Agency (NEA) project, the benchmark for dosimetry calculations of the VENUS-2 MOX-fueled reactor was pursued. The goal of this benchmark is to test current state-of-the-art computational methods for calculating neutron flux to reactor components against the measured data of the VENUS-2 MOX-fueled critical experiments. The measured data to be used for this benchmark are the equivalent fission fluxes, which are the reaction rates divided by the U 235 fission-spectrum-averaged cross section of the corresponding dosimeter. The present benchmark is therefore defined as calculating the reaction rates and corresponding equivalent fission fluxes measured on the core mid-plane at specific positions outside the core of the VENUS-2 MOX-fueled reactor. This is a follow-up to the previously completed UO 2 -fueled VENUS-1 two-dimensional and VENUS-3 three-dimensional exercises. The use of MOX fuel in LWRs presents different neutron characteristics, and this is the main interest of the current benchmark compared to the previous ones
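The equivalent-fission-flux definition above is a one-line calculation; the sketch below illustrates it with made-up numbers (both the reaction rate and the spectrum-averaged cross section are purely illustrative, not VENUS-2 data).

```python
# Equivalent fission flux = measured reaction rate / fission-spectrum-averaged
# dosimeter cross section, per the benchmark definition. Numbers are illustrative.
BARN = 1.0e-24                 # cm^2

reaction_rate = 3.2e-18        # reactions per target atom per second (illustrative)
sigma_avg_barn = 0.08          # U-235 fission-spectrum-averaged cross section, barns

equivalent_fission_flux = reaction_rate / (sigma_avg_barn * BARN)  # n/cm^2/s
print(f"{equivalent_fission_flux:.3e}")  # -> 4.000e+07
```

Dividing by the spectrum-averaged cross section removes the dosimeter-specific response, so fluxes from different dosimeters can be compared on a common footing.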

  9. Quality management benchmarking: FDA compliance in pharmaceutical industry.

    Science.gov (United States)

    Jochem, Roland; Landgraf, Katja

    2010-01-01

    By analyzing and comparing industry and business best practice, processes can be optimized and become more successful mainly because efficiency and competitiveness increase. This paper aims to focus on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and five-stage model. Despite large administrations, there is much potential regarding business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is interesting for companies that like to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances for reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management and especially benchmarking is shown to support pharmaceutical industry improvements.

  10. Analysis of the international criticality benchmark no 19 of a realistic fuel dissolver

    International Nuclear Information System (INIS)

    Smith, H.J.; Santamarina, A.

    1991-01-01

    The dispersion, of the order of 12000 pcm, in the results of the international criticality fuel dissolver benchmark calculation, exercise OECD/19, showed the necessity of analysing the calculational methods used in this case. The APOLLO/PIC method, developed to treat this type of problem, permits us to propose international reference values. The problem studied here led us to investigate two supplementary parameters in addition to the double heterogeneity of the fuel: the reactivity variation as a function of moderation, and the effect of the size of the fuel pellets during dissolution. The following conclusions were obtained: The fast cross-section sets used by the international SCALE package introduce a bias of -3000 pcm in undermoderated lattices. More generally, the fast and resonance nuclear data in criticality codes are not sufficiently reliable. Geometries with micro-pellets led to an underestimation of reactivity at the end of dissolution of 3000 pcm in certain 1988 Sn calculations; this bias was avoided in the updated 1990 computation because of a correct use of the calculation tools. The reactivity introduced by the dissolved fuel is underestimated by 3000 pcm in contributions based on the standard NITAWL module in the SCALE code. More generally, the neutron balance analysis pointed out that the standard ND self-shielding formalism cannot account for 238 U resonance mutual self-shielding in the pellet-fissile liquor interaction. The combination of these three types of bias explains the underestimation, by -2000 to -6000 pcm, of the reactivity of dissolver lattices in all of the international contributions. The improved 1990 calculations confirm the need to use rigorous methods in the calculation of systems which involve the fuel double heterogeneity. This study points out the importance of periodic benchmarking exercises for probing the efficacy of criticality codes, data libraries and their users
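Since the biases above are quoted in pcm (1 pcm = 1e-5 in reactivity), a small helper makes the unit concrete. This is a generic sketch of the standard definition rho = (k - 1)/k, not code from the paper; the k values below are illustrative.

```python
# Reactivity in pcm from a multiplication factor, and the pcm difference
# between a calculated and a reference k-eff.
def reactivity_pcm(k):
    """rho = (k - 1)/k, expressed in pcm (1e-5)."""
    return 1.0e5 * (k - 1.0) / k

def delta_rho_pcm(k_calc, k_ref):
    """Reactivity difference between two multiplication factors, in pcm."""
    return reactivity_pcm(k_calc) - reactivity_pcm(k_ref)

# A code predicting k = 0.97 for a critical (k = 1) dissolver lattice
# underestimates reactivity by roughly 3000 pcm, the size of the biases quoted.
print(round(delta_rho_pcm(0.97, 1.00)))  # -> -3093
```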

  11. Comparative sensitivity study of some criticality safety benchmark experiments using JEFF-3.1.2, JEFF-3.2T and ENDF/B-VII.1

    International Nuclear Information System (INIS)

    Kooyman, Timothee; Messaoudia, Nadia

    2014-01-01

    A sensitivity study on a set of evaluated criticality benchmarks was performed using MCNP(X) 2.6.0 with two versions of the JEFF nuclear data library, namely JEFF-3.1.2 and JEFF-3.2T, and with ENDF/B-VII.1. As these benchmarks serve to estimate the upper safety limit for criticality risk analysis at SCK.CEN, the sensitivity of their results to nuclear data is an important parameter to assess. Several nuclides were identified as being responsible for an evident change in the effective multiplication factor k eff : 235 U, 239 Pu, 240 Pu, 54 Fe, 56 Fe, 57 Fe and 208 Pb. A high sensitivity was found to the fission cross-section of all the fissile material in the study. Additionally, a smaller sensitivity to the inelastic and capture cross-sections of 235 U and 240 Pu was also found. Sensitivity to the scattering law for non-fissile material was postulated. The biggest change in k eff due to non-fissile material was due to the 208 Pb evaluation (±700 pcm), followed by 56 Fe (±360 pcm), for both versions of the JEFF library. Changes due to 235 U (±300 pcm) and the Pu isotopes (±120 pcm for 239 Pu and ±80 pcm for 240 Pu) were found only with JEFF-3.1.2. 238 U was found to have no effect on k eff . Significant improvements were identified between the two versions of the JEFF library. No further differences were found between the JEFF-3.2T and ENDF/B-VII.1 calculations involving 235 U or Pu. (authors)

  12. RELAP5/MOD2 benchmarking study: Critical heat flux under low-flow conditions

    International Nuclear Information System (INIS)

    Ruggles, E.; Williams, P.T.

    1990-01-01

    Experimental studies by Mishima and Ishii performed at Argonne National Laboratory, and subsequent experimental studies performed by Mishima and Nishihara, have investigated the critical heat flux (CHF) for low-pressure, low-mass-flux situations where low-quality burnout may occur. These flow situations are relevant to long-term decay heat removal after a loss of forced flow. The transition from burnout at high quality to burnout at low quality causes very low burnout heat flux values. Mishima and Ishii postulated a model for low-quality burnout based on a flow regime transition from churn-turbulent to annular flow. This model was validated by both flow visualization and burnout measurements. Griffith et al. also studied CHF in low-mass-flux, low-pressure situations and correlated data for upflows, counter-current flows, and downflows with the local fluid conditions. A RELAP5/MOD2 CHF benchmarking study was carried out to investigate the performance of the code for low-flow conditions. Data from the experimental study by Mishima and Ishii were the basis for the benchmark comparisons

  13. Critical experiments supporting underwater storage of tightly packed configurations of spent fuel rods

    International Nuclear Information System (INIS)

    Hoovler, G.S.; Baldwin, M.N.

    1981-04-01

    Critical arrays of 2.5%-enriched UO 2 fuel rods that simulate underwater rod storage of spent power reactor fuel are being constructed. Rod storage is a term used to describe a spent fuel storage concept in which the fuel bundles are disassembled and the rods are packed into specially designed canisters. Rod storage would substantially increase the amount of fuel that could be stored in available space. These experiments are providing criticality data against which to benchmark nuclear codes used to design tightly packed rod storage racks

  14. NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) Benchmark. Volume II: uncertainty and sensitivity analyses of void distribution and critical power - Specification

    International Nuclear Information System (INIS)

    Aydogan, F.; Hochreiter, L.; Ivanov, K.; Martin, M.; Utsuno, H.; Sartori, E.

    2010-01-01

    This report provides the specification for the uncertainty exercises of the international OECD/NEA, NRC and NUPEC BFBT benchmark problem including the elemental task. The specification was prepared jointly by Pennsylvania State University (PSU), USA and the Japan Nuclear Energy Safety (JNES) Organisation, in cooperation with the OECD/NEA and the Commissariat a l'energie atomique (CEA Saclay, France). The work is sponsored by the US NRC, METI-Japan, the OECD/NEA and the Nuclear Engineering Program (NEP) of Pennsylvania State University. This uncertainty specification covers the fourth exercise of Phase I (Exercise-I-4), and the third exercise of Phase II (Exercise II-3) as well as the elemental task. The OECD/NRC BFBT benchmark provides a very good opportunity to apply uncertainty analysis (UA) and sensitivity analysis (SA) techniques and to assess the accuracy of thermal-hydraulic models for two-phase flows in rod bundles. During the previous OECD benchmarks, participants usually carried out sensitivity analysis on their models for the specification (initial conditions, boundary conditions, etc.) to identify the most sensitive models or/and to improve the computed results. The comprehensive BFBT experimental database (NEA, 2006) leads us one step further in investigating modelling capabilities by taking into account the uncertainty analysis in the benchmark. The uncertainties in input data (boundary conditions) and geometry (provided in the benchmark specification) as well as the uncertainties in code models can be accounted for to produce results with calculational uncertainties and compare them with the measurement uncertainties. Therefore, uncertainty analysis exercises were defined for the void distribution and critical power phases of the BFBT benchmark. 
This specification is intended to provide definitions related to UA/SA methods, sensitivity/ uncertainty parameters, suggested probability distribution functions (PDF) of sensitivity parameters, and selected
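The kind of forward uncertainty propagation the exercise defines can be sketched generically: sample the uncertain boundary conditions from assumed PDFs, evaluate the model, and summarise the spread of the prediction for comparison with the measurement uncertainty. Everything below (the Gaussian PDFs, their parameters, and the stand-in response function) is illustrative and not taken from the benchmark specification.

```python
import numpy as np

# Monte Carlo propagation of boundary-condition uncertainties through a model.
rng = np.random.default_rng(42)
n = 5000

# Assumed Gaussian PDFs for three boundary conditions (illustrative values).
pressure = rng.normal(7.2, 0.05, n)    # outlet pressure, MPa
flow     = rng.normal(55.0, 1.0, n)    # mass flow, t/h
power    = rng.normal(1.9, 0.03, n)    # bundle power, MW

def void_fraction(p, g, q):
    """Stand-in linear response surface, NOT a real two-phase flow model."""
    return 0.4 + 0.167 * (q - 1.9) - 0.02 * (p - 7.2) + 0.001 * (g - 55.0)

alpha = void_fraction(pressure, flow, power)
# The predicted mean and its calculational uncertainty (1 sigma), to be set
# against the measurement uncertainty of the void-fraction data.
print(round(float(alpha.mean()), 3), round(float(alpha.std()), 4))
```

A sensitivity analysis would additionally attribute the output variance to each sampled input, e.g. by correlating `alpha` with each parameter sample.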

  15. CSRL-V ENDF/B-V 227-group neutron cross-section library and its application to thermal-reactor and criticality safety benchmarks

    International Nuclear Information System (INIS)

    Ford, W.E. III; Diggs, B.R.; Knight, J.R.; Greene, N.M.; Petrie, L.M.; Webster, C.C.; Westfall, R.M.; Wright, R.Q.; Williams, M.L.

    1982-01-01

    Characteristics and contents of the CSRL-V (Criticality Safety Reference Library based on ENDF/B-V data) 227-neutron-group AMPX master and pointwise cross-section libraries are described. Results obtained in using CSRL-V to calculate performance parameters of selected thermal reactor and criticality safety benchmarks are discussed

  16. The spent fuel safety experiment

    International Nuclear Information System (INIS)

    Harmms, G.A.; Davis, F.J.; Ford, J.T.

    1995-01-01

    The Department of Energy is conducting an ongoing investigation of the consequences of taking fuel burnup into account in the design of spent fuel transportation packages. A series of experiments, collectively called the Spent Fuel Safety Experiment (SFSX), has been devised to provide integral benchmarks for testing computer-generated predictions of spent fuel behavior. A set of experiments is planned in which sections of unirradiated fuel rods are interchanged with similar sections of spent PWR fuel rods in a critical assembly. By determining the critical size of the arrays, one can obtain benchmark data for comparison with criticality safety calculations. The integral reactivity worth of the spent fuel can be assessed by comparing the measured delayed critical fuel loading with and without spent fuel. An analytical effort to model the experiments and anticipate the core loadings required to yield the delayed critical conditions runs in parallel with the experimental effort

  17. Analysis of benchmark experiments for testing the IKE multigroup cross-section libraries based on ENDF/B-III and IV

    International Nuclear Information System (INIS)

    Keinert, J.; Mattes, M.

    1975-01-01

    Benchmark experiments offer the most direct method for validation of nuclear cross-section sets and calculational methods. For 16 fast and thermal critical assemblies containing uranium and/or plutonium of different compositions, we compared our calculational results with measured integral quantities, such as ksub(eff), central reaction rate ratios, and fast and thermal activation (dis)advantage factors. Because of the simple calculational modelling of these assemblies, the calculations proved to be a good test for the IKE multigroup cross-section libraries, which are essentially based on ENDF/B-IV. In general, our calculational results are in excellent agreement with the measured values. Only for some critical systems did the basic ENDF/B-IV data prove insufficient for calculating ksub(eff), probably due to the Pu neutron data and the U 238 fast capture cross-sections. (orig.)

  18. Thermal lattice benchmarks for testing basic evaluated data files, developed with MCNP4B

    International Nuclear Information System (INIS)

    Maucec, M.; Glumac, B.

    1996-01-01

    The development of unit-cell and full-core models of the DIMPLE S01A and TRX-1 and TRX-2 benchmark experiments, using the Monte Carlo computer code MCNP4B, is presented. Nuclear data from the ENDF/B-V and ENDF/B-VI cross-section libraries were used in the calculations. In addition, a comparison is presented to results obtained with similar models and cross-section data from the EJ2-MCNPlib library (which is based upon the JEF-2.2 evaluation) developed at IRC Petten, Netherlands. The results of the criticality calculation with the ENDF/B-VI data library, and a comparison to results obtained using the JEF-2.2 evaluation, confirm the MCNP4B full-core model of the DIMPLE reactor as a good benchmark for testing basic evaluated data files. On the other hand, the criticality results obtained using the TRX full-core models show less agreement with experiment. Without additional data about the TRX geometry, our TRX models are not suitable as Monte Carlo benchmarks. (author)

  19. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  20. Simulation of hydrogen deflagration experimentBenchmark exercise with lumped-parameter codes

    Energy Technology Data Exchange (ETDEWEB)

    Kljenak, Ivo, E-mail: ivo.kljenak@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Kuznetsov, Mikhail, E-mail: mike.kuznetsov@kit.edu [Karlsruhe Institute of Technology, Kaiserstraße 12, 76131 Karlsruhe (Germany); Kostka, Pal, E-mail: kostka@nubiki.hu [NUBIKI Nuclear Safety Research Institute, Konkoly-Thege Miklós út 29-33, 1121 Budapest (Hungary); Kubišova, Lubica, E-mail: lubica.kubisova@ujd.gov.sk [Nuclear Regulatory Authority of the Slovak Republic, Bajkalská 27, 82007 Bratislava (Slovakia); Maltsev, Mikhail, E-mail: maltsev_MB@aep.ru [JSC Atomenergoproekt, 1, st. Podolskykh Kursantov, Moscow (Russian Federation); Manzini, Giovanni, E-mail: giovanni.manzini@rse-web.it [Ricerca sul Sistema Energetico, Via Rubattino 54, 20134 Milano (Italy); Povilaitis, Mantas, E-mail: mantas.p@mail.lei.lt [Lithuania Energy Institute, Breslaujos g.3, 44403 Kaunas (Lithuania)

    2015-03-15

    Highlights: • Blind and open simulations of hydrogen combustion experiment in large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results of the pressure increase, whereas the results of the temperature show a wider dispersal. Concerning the flame axial and radial velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description.

  1. Simulation of hydrogen deflagration experimentBenchmark exercise with lumped-parameter codes

    International Nuclear Information System (INIS)

    Kljenak, Ivo; Kuznetsov, Mikhail; Kostka, Pal; Kubišova, Lubica; Maltsev, Mikhail; Manzini, Giovanni; Povilaitis, Mantas

    2015-01-01

    Highlights: • Blind and open simulations of hydrogen combustion experiment in large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results of the pressure increase, whereas the results of the temperature show a wider dispersal. Concerning the flame axial and radial velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description

  2. MCNP benchmark analyses of critical experiments for space nuclear thermal propulsion

    International Nuclear Information System (INIS)

    Selcow, E.C.; Cerbone, R.J.; Ludewig, H.

    1993-01-01

    The particle-bed reactor (PBR) system is being developed for use in the Space Nuclear Thermal Propulsion (SNTP) Program. This reactor system is characterized by a highly heterogeneous, compact configuration with many streaming pathways. The neutronics analyses performed for this system must be able to accurately predict reactor criticality, kinetics parameters, material worths at various temperatures, feedback coefficients, and detailed fission power and heating distributions. The latter includes coupled axial, radial, and azimuthal profiles. These responses constitute critical inputs and interfaces with the thermal-hydraulics design and safety analyses of the system

  3. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance, i.e. how well the firm performs in the actual market environment given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, or work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for managers to continuously improve their firm's efficiency and effectiveness, and to know the success factors and determinants of competitiveness, determines what performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical, interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  4. Benchmarking in Czech Higher Education

    Directory of Open Access Journals (Sweden)

    Plaček Michal

    2015-12-01

Full Text Available The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expressed need for it, and the importance of benchmarking as a very suitable performance-management tool in less developed countries, are the impetus for the second part of our article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as on a comparison with international experience, recommendations for public policy are made, which lie in the design of a collaborative benchmarking model for Czech higher-education programs in economics and management. Because the fully complex model cannot be implemented immediately (which is also confirmed by structured interviews with academics who have practical experience with benchmarking), the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.

  5. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    textabstractBenchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  6. Criticality safety validation of MCNP5 using continuous energy libraries

    International Nuclear Information System (INIS)

    Salome, Jean A.D.; Pereira, Claubia; Assuncao, Jonathan B.A.; Veloso, Maria Auxiliadora F.; Costa, Antonella L.; Silva, Clarysson A.M. da

    2013-01-01

The study of subcritical systems is very important in the design, installation and operation of various devices, mainly nuclear reactors and power plants. The information generated by these systems guides the decisions to be taken in the executive project, the economic viability and the safety measures to be employed in a nuclear facility. By simulating selected experiments from the International Handbook of Evaluated Criticality Safety Benchmark Experiments, the MCNP5 code was validated for nuclear criticality analysis. Its continuous-energy libraries were used. The average values and standard deviation (SD) were evaluated. The results obtained with the code are very similar to the values obtained in the benchmark experiments. (author)
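The validation statistics this abstract mentions (average values and SD of the code-to-benchmark comparison) can be sketched as follows. The benchmark names and k_eff numbers are hypothetical placeholders, not the configurations or results from the paper.

```python
# Hedged sketch: comparing calculated k_eff values against benchmark
# values via the C/E (calculated-over-expected) ratio.  All values below
# are illustrative, not from the MCNP5 validation described above.
import statistics

# (benchmark k_eff, calculated k_eff) pairs -- hypothetical
results = {
    "LEU-COMP-THERM-001": (1.0000, 0.9987),
    "LEU-COMP-THERM-002": (1.0000, 1.0012),
    "LEU-SOL-THERM-003":  (1.0000, 0.9995),
}

ce = [calc / bench for bench, calc in results.values()]

mean_ce = statistics.mean(ce)   # average bias over the benchmark set
sd_ce = statistics.stdev(ce)    # spread across benchmarks

print(f"mean C/E = {mean_ce:.5f}, SD = {sd_ce:.5f}")
```

A mean C/E near 1.0 with an SD comparable to the benchmark uncertainties is the usual acceptance signal in this kind of validation.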

  7. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

In the paper the specification of the first phase (depletion calculations) of the WWER-1000 Burnup Credit Benchmark is given. The second phase, criticality calculations for the WWER-1000 fuel pin cell, will be given after the evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field (Author)

  8. OECD/NEA expert group on uncertainty analysis for criticality safety assessment: Results of benchmark on sensitivity calculation (phase III)

    Energy Technology Data Exchange (ETDEWEB)

    Ivanova, T.; Laville, C. [Institut de Radioprotection et de Surete Nucleaire IRSN, BP 17, 92262 Fontenay aux Roses (France); Dyrda, J. [Atomic Weapons Establishment AWE, Aldermaston, Reading, RG7 4PR (United Kingdom); Mennerdahl, D. [E Mennerdahl Systems EMS, Starvaegen 12, 18357 Taeby (Sweden); Golovko, Y.; Raskach, K.; Tsiboulia, A. [Inst. for Physics and Power Engineering IPPE, 1, Bondarenko sq., 249033 Obninsk (Russian Federation); Lee, G. S.; Woo, S. W. [Korea Inst. of Nuclear Safety KINS, 62 Gwahak-ro, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Bidaud, A.; Sabouri, P. [Laboratoire de Physique Subatomique et de Cosmologie LPSC, CNRS-IN2P3/UJF/INPG, Grenoble (France); Patel, A. [U.S. Nuclear Regulatory Commission (NRC), Washington, DC 20555-0001 (United States); Bledsoe, K.; Rearden, B. [Oak Ridge National Laboratory ORNL, M.S. 6170, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Gulliford, J.; Michel-Sendis, F. [OECD/NEA, 12, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2012-07-01

    The sensitivities of the k{sub eff} eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
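The quantity the benchmark participants compute, a relative sensitivity coefficient of k_eff to a cross section, can be illustrated on a toy model. The sketch below assumes the standard definition S = (sigma/k)(dk/dsigma) and a one-group infinite-medium formula; it does not represent any of the codes named above, and all numbers are invented.

```python
# Hedged sketch: relative sensitivity coefficient S = (sigma/k)(dk/dsigma)
# estimated by central difference on a toy one-group model
# k_inf = nu * sigma_f / (sigma_f + sigma_c).  Values are illustrative.

def k_inf(sigma_f, sigma_c, nu=2.43):
    """One-group infinite-medium multiplication factor."""
    return nu * sigma_f / (sigma_f + sigma_c)

sigma_f, sigma_c = 0.05, 0.06   # hypothetical cross sections (1/cm)

def sensitivity(f, x0, rel_step=1e-4):
    """Relative sensitivity (x/k)(dk/dx) via central difference."""
    h = x0 * rel_step
    dk = (f(x0 + h) - f(x0 - h)) / (2 * h)
    return x0 * dk / f(x0)

S_f = sensitivity(lambda s: k_inf(s, sigma_c), sigma_f)
S_c = sensitivity(lambda s: k_inf(sigma_f, s), sigma_c)
print(f"S_f = {S_f:.4f}, S_c = {S_c:.4f}")
```

For this model the two coefficients are equal and opposite (sigma_c/(sigma_f + sigma_c) and its negative), a check the central difference reproduces; in practice such sensitivity vectors are folded with cross-section covariance data to propagate nuclear-data uncertainty to k_eff.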

  9. Benchmark enclosure fire suppression experiments - phase 1 test report.

    Energy Technology Data Exchange (ETDEWEB)

    Figueroa, Victor G.; Nichols, Robert Thomas; Blanchat, Thomas K.

    2007-06-01

A series of fire benchmark water suppression tests was performed that may provide guidance for dispersal systems for the protection of high-value assets. The test results provide the boundary and temporal data necessary for water spray suppression model development and validation. A review of fire suppression is presented for both gaseous suppression and water mist fire suppression. The experimental setup and procedure for gathering water suppression performance data are shown. Characteristics of the nozzles used in the testing are presented. Results of the experiments are discussed.

  10. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation with a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows
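At its simplest, the per-process I/O profiling that a tool like iotrace performs reduces to timing reads against a file. The sketch below measures sequential read throughput; the file size and read granularity are illustrative choices, not the project's actual benchmark parameters.

```python
# Hedged sketch: measuring sequential read throughput, the basic quantity
# an I/O profile records.  Sizes are illustrative.
import os
import tempfile
import time

size_mb = 16
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(size_mb * 1024 * 1024))
    path = f.name

start = time.perf_counter()
with open(path, "rb") as f:
    while f.read(1024 * 1024):      # 1 MiB reads until EOF
        pass
elapsed = time.perf_counter() - start
os.unlink(path)

print(f"read {size_mb} MiB in {elapsed:.3f} s "
      f"({size_mb / elapsed:.1f} MiB/s)")
```

Note that a freshly written file is usually served from the page cache, so a number obtained this way bounds the cached case; the NFS/local-disk/Flash comparison in the report requires defeating the cache, which this sketch does not attempt.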

  11. Benchmark of the HDR E11.2 containment hydrogen mixing experiment using the MAAP4 code

    International Nuclear Information System (INIS)

Lee, Sung Jin; Paik, Chan Y.; Henry, R.E.

    1997-01-01

The MAAP4 code was benchmarked against a hydrogen mixing experiment in a full-size nuclear reactor containment. This particular experiment, designated E11.2, simulated a small loss-of-coolant-accident steam blowdown into the containment followed by the release of a hydrogen-helium gas mixture. It also incorporated external spray cooling of the steel dome near the end of the transient. Specifically, the objective of this benchmark was to demonstrate that MAAP4, using subnodal physics, can predict an observed gas stratification in the containment.

  12. Dry critical experiments and analyses performed in support of the Topaz-2 Safety Program

    International Nuclear Information System (INIS)

    Pelowitz, D.B.; Sapir, J.; Glushkov, E.S.; Ponomarev-Stepnoi, N.N.; Bubelev, V.G.; Kompanietz, G.B.; Krutov, A.M.; Polyakov, D.N.; Loynstev, V.A.

    1994-01-01

    In December 1991, the Strategic Defense Initiative Organization decided to investigate the possibility of launching a Russian Topaz-2 space nuclear power system. Functional safety requirements developed for the Topaz mission mandated that the reactor remain subcritical when flooded and immersed in water. Initial experiments and analyses performed in Russia and the United States indicated that the reactor could potentially become supercritical in several water- or sand-immersion scenarios. Consequently, a series of critical experiments was performed on the Narciss M-II facility at the Kurchatov Institute to measure the reactivity effects of water and sand immersion, to quantify the effectiveness of reactor modifications proposed to preclude criticality, and to benchmark the calculational methods and nuclear data used in the Topaz-2 safety analyses. In this paper we describe the Narciss M-II experimental configurations along with the associated calculational models and methods. We also present and compare the measured and calculated results for the dry experimental configurations

  13. Dry critical experiments and analyses performed in support of the TOPAZ-2 safety program

    International Nuclear Information System (INIS)

    Pelowitz, D.B.; Sapir, J.; Glushkov, E.S.; Ponomarev-Stepnoi, N.N.; Bubelev, V.G.; Kompanietz, G.B.; Krutov, A.M.; Polyakov, D.N.; Lobynstev, V.A.

    1995-01-01

    In December 1991, the Strategic Defense Initiative Organization decided to investigate the possibility of launching a Russian Topaz-2 space nuclear power system. Functional safety requirements developed for the Topaz mission mandated that the reactor remain subcritical when flooded and immersed in water. Initial experiments and analyses performed in Russia and the United States indicated that the reactor could potentially become supercritical in several water- or sand-immersion scenarios. Consequently, a series of critical experiments was performed on the Narciss M-II facility at the Kurchatov Institute to measure the reactivity effects of water and sand immersion, to quantify the effectiveness of reactor modifications proposed to preclude criticality, and to benchmark the calculational methods and nuclear data used in the Topaz-2 safety analyses. In this paper we describe the Narciss M-II experimental configurations along with the associated calculational models and methods. We also present and compare the measured and calculated results for the dry experimental configurations. copyright 1995 American Institute of Physics

  14. Availability of Neutronics Benchmarks in the ICSBEP and IRPhEP Handbooks for Computational Tools Testing

    Energy Technology Data Exchange (ETDEWEB)

    Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana; Hill, Ian; Gulliford, Jim

    2017-02-01

In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, representing significantly valuable resources of data supporting past, current, and future research activities. Those valuable assets represent the basis for recording, development, and validation of our nuclear methods and integral nuclear data [1]. The loss of these experimental data, which has occurred all too often in recent years, is tragic. The high cost to repeat many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed, and are under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), to address the challenges of not just data preservation, but evaluation of the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications [3]. Annually, contributors from around the world continue to collaborate in the evaluation and review of select benchmark experiments for preservation and dissemination. The extensively peer-reviewed integral benchmark data can then be utilized to support nuclear design and safety analysts to validate the analytical tools, methods, and data needed for next

  15. Educating Next Generation Nuclear Criticality Safety Engineers at the Idaho National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    J. D. Bess; J. B. Briggs; A. S. Garcia

    2011-09-01

    One of the challenges in educating our next generation of nuclear safety engineers is the limitation of opportunities to receive significant experience or hands-on training prior to graduation. Such training is generally restricted to on-the-job-training before this new engineering workforce can adequately provide assessment of nuclear systems and establish safety guidelines. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) can provide students and young professionals the opportunity to gain experience and enhance critical engineering skills. The ICSBEP and IRPhEP publish annual handbooks that contain evaluations of experiments along with summarized experimental data and peer-reviewed benchmark specifications to support the validation of neutronics codes, nuclear cross-section data, and the validation of reactor designs. Participation in the benchmark process not only benefits those who use these Handbooks within the international community, but provides the individual with opportunities for professional development, networking with an international community of experts, and valuable experience to be used in future employment. Traditionally students have participated in benchmarking activities via internships at national laboratories, universities, or companies involved with the ICSBEP and IRPhEP programs. Additional programs have been developed to facilitate the nuclear education of students while participating in the benchmark projects. These programs include coordination with the Center for Space Nuclear Research (CSNR) Next Degree Program, the Collaboration with the Department of Energy Idaho Operations Office to train nuclear and criticality safety engineers, and student evaluations as the basis for their Master's thesis in nuclear engineering.

  16. Educating Next Generation Nuclear Criticality Safety Engineers at the Idaho National Laboratory

    International Nuclear Information System (INIS)

    Bess, J.D.; Briggs, J.B.; Garcia, A.S.

    2011-01-01

    One of the challenges in educating our next generation of nuclear safety engineers is the limitation of opportunities to receive significant experience or hands-on training prior to graduation. Such training is generally restricted to on-the-job-training before this new engineering workforce can adequately provide assessment of nuclear systems and establish safety guidelines. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) can provide students and young professionals the opportunity to gain experience and enhance critical engineering skills. The ICSBEP and IRPhEP publish annual handbooks that contain evaluations of experiments along with summarized experimental data and peer-reviewed benchmark specifications to support the validation of neutronics codes, nuclear cross-section data, and the validation of reactor designs. Participation in the benchmark process not only benefits those who use these Handbooks within the international community, but provides the individual with opportunities for professional development, networking with an international community of experts, and valuable experience to be used in future employment. Traditionally students have participated in benchmarking activities via internships at national laboratories, universities, or companies involved with the ICSBEP and IRPhEP programs. Additional programs have been developed to facilitate the nuclear education of students while participating in the benchmark projects. These programs include coordination with the Center for Space Nuclear Research (CSNR) Next Degree Program, the Collaboration with the Department of Energy Idaho Operations Office to train nuclear and criticality safety engineers, and student evaluations as the basis for their Master's thesis in nuclear engineering.

  17. Validation of new evaluations for the main fuel nuclides using the ICSBEP handbook benchmarks

    International Nuclear Information System (INIS)

    Koscheev, V.; Manturov, G.; Rozhikhin, Y.; Tsibulya, A.

    2008-01-01

The newest evaluations, adopted for the ENDF/B-VII.0, JEFF-3.1, JENDL-3.3 and Russian RUSFOND nuclear data files, for the most important fuel isotopes 235 U, 238 U, and 239 Pu are compared with one another and tested against a set of integral experiments, among them the removal cross section under the fission threshold of 238 U, the critical infinite medium Scherzo-556, and ICSBEP Handbook criticality safety benchmarks. Globally, our benchmarking shows that these evaluations are in many cases very close. However, essential differences are observed in the analysis of critical systems with a sufficiently large content of 238 U. Large differences still exist in the inelastic scattering cross sections. We note that the divergence in the 238 U capture cross section that existed in previous evaluations has practically disappeared.

  18. Coupled fast-thermal core 'HERBE', as the benchmark experiment at the RB reactor

    International Nuclear Information System (INIS)

    Pesic, M.

    2003-10-01

Validation of the well-known Monte Carlo code MCNP against measured criticality data for the coupled fast-thermal HERBE system at the RB research reactor is shown in this paper. Experimental data were obtained for the regular HERBE core and for cases of controlled flooding of the neutron converter zone by heavy water. Earlier calculations of these criticality parameters, done by a combination of transport and diffusion codes using a 2D geometry model, are also compared to new calculations carried out by the MCNP code in 3D geometry, applying a recently developed detailed 3D model of the HEU fuel slug. Satisfactory agreement between the HERBE criticality calculation results and the experimental data, in spite of the complex heterogeneous composition of the HERBE core, was obtained, confirming that the HERBE core can be used as a criticality benchmark for coupled fast-thermal cores. (author)

  19. Fast and thermal data testing of 233U critical assemblies

    International Nuclear Information System (INIS)

    Wright, R.Q.; Jordan, W.C.; Leal, L.C.

    1999-01-01

    Many sources have been used to obtain 233 U benchmark descriptions. Unfortunately, some of these are not reliable since a thorough and complete benchmark evaluation often has not been done. For 24 yr a principal source for 233 U benchmarks has been the Cross Section Evaluation Working Group (CSEWG) Benchmark Specifications. The CSEWG specifications included only two fast benchmarks and three thermal benchmarks. The thermal benchmarks were H 2 O-moderated thorium-oxide exponential lattices. Since the thorium-oxide lattices were exponential experiments, they have not been widely used. CSEWG has also used the 233 U Oak Ridge National Laboratory (ORNL) spheres for many years. One advantage of the CSEWG fast benchmarks, JEZEBEL-23 and FLATTOP-23, is that experiments were done for central-reaction-rate ratios. These reaction-rate ratios provide very valuable information to data testers and evaluators that would not otherwise be available. In recent years the International Handbook of Evaluated Criticality Safety Benchmark Experiments has, in general, been a very useful and reliable source. The Handbook does not include central-reaction-rate ratio experiments, however. A new set of 233 U benchmark experiments has been added to the most recent release of the Handbook, U233-SOL-THERM-004. These are paraffin-reflected cylinders of 233 U uranyl-nitrate solutions. Unfortunately, the estimated benchmark uncertainties are on the order of 0.9 to 1.0% in k eff . Benchmark testing has been done for some of these U233-SOL-THERM-004 experiments. The authors have also discovered that the benchmark specifications for the Thomas uranyl-nitrate experiments given in Ref. 5 are incorrect. One problem with the Ref. 5 specifications is that the excess acid was not included. As part of this work, the authors developed revised specifications that include an excess acid correlation based on information from the experimental logbook

  20. Review for session K - benchmarks

    International Nuclear Information System (INIS)

    McCracken, A.K.

    1980-01-01

    Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of Benchmarks and will be considered individually

  1. Critical experiments supporting underwater storage of tightly packed configurations of spent fuel pins. Technical progress report, January 1-March 31, 1981

    International Nuclear Information System (INIS)

    Hoovler, G.S.; Baldwin, M.N.

    1981-04-01

    Critical experiments are in progress on arrays of 2 1/2% enriched UO 2 fuel pins simulating underwater pin storage of spent power reactor fuel. Pin storage refers to a spent fuel storage concept in which the fuel assemblies are dismantled and the fuel pins are tightly packed into specially designed canisters. These experiments are providing benchmark data with which to validate nuclear codes used to design spent fuel pin storage racks

  2. Risk management for operations of the LANL Critical Experiments Facility

    International Nuclear Information System (INIS)

    Paternoster, R.; Butterfield, K.

    1998-01-01

The Los Alamos Critical Experiments Facility (LACEF) currently operates two burst reactors (Godiva-IV and Skua), one solution assembly [the Solution High-Energy Burst Assembly (SHEBA)], two fast-spectrum benchmark assemblies (Flattop and Big Ten), and five general-purpose remote assembly machines that may be configured with nuclear materials and assembled by remote control. Special nuclear materials storage vaults support these and other operations at the site. With this diverse set of operations, several approaches are possible in the analysis and management of risk. The most conservative approach would be to write a safety analysis report (SAR) for each assembly and experiment. A more cost-effective approach is to analyze the probability and consequences of several classes of operations representative of operations on each critical assembly machine and envelope the bounding-case accidents. Although the neutron physics of these machines varies widely, the operations performed at LACEF fall into four operational modes: steady-state mode, approach-to-critical mode, prompt burst mode, and nuclear material operations, which can include critical assembly fuel loading. The operational sequences of each mode are very nearly identical, whether performed on one assembly machine or another. The use of an envelope approach to accident analysis is facilitated by the use of classes of operations and the use of bounding-case consequence analysis. A simple fault tree analysis of operational modes helps resolve which operations are sensitive to human error and which are initiated by hardware or software failures. Where possible, these errors and failures are blocked by Technical Safety Requirement (TSR) Limiting Conditions for Operation (LCOs). Future work will determine the probability of accidents with various initiators
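The simple fault tree analysis mentioned in this abstract combines basic-event probabilities through AND/OR gates. A minimal sketch, with invented event names and probabilities (not LACEF's actual fault tree):

```python
# Hedged sketch: AND/OR gate arithmetic for independent basic events,
# the core of a simple fault-tree evaluation.  Events and numbers are
# hypothetical, for illustration only.

def p_or(*ps):
    """Probability that at least one independent event occurs."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*ps):
    """Probability that all independent events occur."""
    q = 1.0
    for p in ps:
        q *= p
    return q

p_human_error = 1e-3       # operator skips a checklist step (per operation)
p_interlock_fails = 1e-4   # hardware interlock fails on demand
p_software_fault = 1e-5    # control software fault

# Hypothetical top event: an unsafe assembly step requires a human error
# AND a failed interlock, OR a software fault alone.
p_top = p_or(p_and(p_human_error, p_interlock_fails), p_software_fault)
print(f"top-event probability per operation: {p_top:.2e}")
```

Even this toy tree shows the screening logic in the abstract: the human-error branch is suppressed by the interlock (AND gate), so the software branch dominates the top event and is the one worth blocking with a TSR control.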

  3. Burn-up Credit Criticality Safety Benchmark Phase III-C. Nuclide Composition and Neutron Multiplication Factor of a Boiling Water Reactor Spent Fuel Assembly for Burn-up Credit and Criticality Control of Damaged Nuclear Fuel

    International Nuclear Information System (INIS)

    Suyama, K.; Uchida, Y.; Kashima, T.; Ito, T.; Miyaji, T.

    2016-01-01

    Criticality control of damaged nuclear fuel is one of the key issues in the decommissioning operation of the Fukushima Daiichi Nuclear Power Station accident. The average isotopic composition of spent nuclear fuel as a function of burn-up is required in order to evaluate criticality parameters of the mixture of damaged nuclear fuel with other materials. The NEA Expert Group on Burn-up Credit Criticality (EGBUC) has organised several international benchmarks to assess the accuracy of burn-up calculation methodologies. For BWR fuel, the Phase III-B benchmark, published in 2002, was a remarkable landmark that provided general information on the burn-up properties of BWR spent fuel based on the 8x8 type fuel assembly. Since the publication of the Phase III-B benchmark, all major nuclear data libraries have been revised; in Japan from JENDL-3.2 to JENDL-4, in Europe from JEF-2.2 to JEFF-3.1 and in the US from ENDF/B-VI to ENDF/B-VII.1. Burn-up calculation methodologies have been improved by adopting continuous-energy Monte Carlo codes and modern neutronics calculation methods. Considering the importance of the criticality control of damaged fuel in the Fukushima Daiichi Nuclear Power Station accident, a new international burn-up calculation benchmark for the 9 x 9 STEP-3 BWR fuel assemblies was organised to carry out the inter-comparison of the averaged isotopic composition in the interest of the burnup credit criticality safety community. Benchmark specifications were proposed and approved at the EGBUC meeting in September 2012 and distributed in October 2012. The deadline for submitting results was set at the end of February 2013. The basic model for the benchmark problem is an infinite two-dimensional array of BWR fuel assemblies consisting of a 9 x 9 fuel rod array with a water channel in the centre. The initial uranium enrichment of fuel rods without gadolinium is 4.9, 4.4, 3.9, 3.4 and 2.1 wt% and 3.4 wt% for the rods using gadolinium. 
The burn-up conditions are
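The depletion step that such burn-up credit benchmarks compare can be illustrated in its simplest form: single-nuclide depletion at constant flux. The number density, cross section, and flux below are order-of-magnitude illustrations, not the benchmark specification.

```python
# Hedged sketch: depletion of a single nuclide (235U) under constant
# one-group flux, the simplest ingredient of a burn-up calculation.
# All numbers are illustrative.
import math

N0 = 1.0e21          # initial 235U number density (atoms/cm^3), hypothetical
sigma_a = 600e-24    # one-group absorption cross section (cm^2), ~thermal
phi = 3e13           # neutron flux (n/cm^2/s), typical LWR order of magnitude

def n_u235(t_seconds):
    """Number density after irradiation time t at constant flux."""
    return N0 * math.exp(-sigma_a * phi * t_seconds)

one_year = 3.156e7   # seconds
for years in (1, 2, 3):
    frac = n_u235(years * one_year) / N0
    print(f"after {years} y: {frac:.3f} of initial 235U remains")
```

Real benchmark codes solve coupled equations of this form for hundreds of nuclides with flux and spectrum varying over the irradiation history; the inter-comparison of the resulting averaged isotopic compositions is what the benchmark organizes.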

  4. Bench-mark experiments to study the neutron distribution in a heterogeneous reactor shielding

    International Nuclear Information System (INIS)

    Bolyatko, V.V.; Vyrskij, M.Yu.; Mashkovich, V.P.; Nagaev, R.Kh.; Prit'mov, A.P.; Sakharov, V.K.; Troshin, V.S.; Tikhonov, E.G.

    1981-01-01

The benchmark experiments performed at the B-2 facility of the BR-10 reactor to investigate the spatial and energy neutron distributions are described. The experimental facility includes the neutron beam channel with a slide. The shielding composition investigated consisted of sequential layers of steel (1KH18N9T) and graphite slabs. The neutron spectra were measured by the activation method, using a set of threshold and resonance detectors. The detectors made it possible to obtain the absolute neutron spectra in the 1.4 eV-10 MeV range. The comparison of calculations with the results of the benchmark experiments made it possible to validate the neutron transport calculational model realized in the ROZ-9 and ARAMAKO-2F computer codes and to evaluate the validity of the ARAMAKO constants for the class of shielding compositions in question

  5. Benchmark calculation for water reflected STACY cores containing low enriched uranyl nitrate solution

    International Nuclear Information System (INIS)

    Miyoshi, Yoshinori; Yamamoto, Toshihiro; Nakamura, Takemi

    2001-01-01

In order to validate the applicability of criticality calculation codes and related nuclear data libraries, a series of fundamental benchmark experiments on low enriched uranyl nitrate solution has been performed with the Static Experiment Criticality Facility (STACY) at JAERI. The basic core, composed of a single tank with a water reflector, was used for accumulating systematic data with well-known experimental uncertainties. This paper presents the outline of the core configurations of STACY, the standard calculation model, and calculation results with a Monte Carlo code and the JENDL 3.2 nuclear data library. (author)
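The kind of statistical estimate a Monte Carlo criticality code reports (a mean k with an uncertainty from batch statistics) can be sketched with a toy infinite-medium model. The cross sections and nu below are invented, and this simplification is nothing like the actual STACY geometry or the JENDL-based transport calculation; it only illustrates the batch-mean-and-standard-error bookkeeping.

```python
# Hedged sketch: toy Monte Carlo estimate of k_inf for an infinite
# homogeneous medium.  Each absorbed neutron causes fission with
# probability sigma_f / (sigma_f + sigma_c).  Numbers are illustrative.
import random
import statistics

random.seed(42)
nu = 2.43
sigma_f, sigma_c = 0.05, 0.06       # fission / capture (1/cm), hypothetical
p_fission = sigma_f / (sigma_f + sigma_c)

def batch_k(n_histories):
    """One generation: count fission neutrons per source neutron."""
    fission_neutrons = sum(nu for _ in range(n_histories)
                           if random.random() < p_fission)
    return fission_neutrons / n_histories

ks = [batch_k(10_000) for _ in range(50)]
mean_k = statistics.mean(ks)
sd_k = statistics.stdev(ks) / len(ks) ** 0.5   # standard error of the mean
print(f"k_inf = {mean_k:.4f} +/- {sd_k:.4f}")
```

The analytic answer for this model is nu * sigma_f / (sigma_f + sigma_c), so the batch mean should agree with it to within a few standard errors, which is exactly the convergence check a code-versus-benchmark comparison relies on.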

  6. Forecast of criticality experiments and experimental programs needed to support nuclear operations in the United States of America: 1994-1999

    International Nuclear Information System (INIS)

    Rutherford, D.

    1995-01-01

    This Forecast is generated by the Chair of the Experiment Needs Identification Workgroup (ENIWG), with input from Department of Energy and the nuclear community. One of the current concerns addressed by ENIWG was the Defense Nuclear Facilities Safety Board's Recommendation 93-2. This Recommendation delineated the need for a critical experimental capability, which includes (1) a program of general-purpose experiments, (2) improving the information base, and (3) ongoing departmental programs. The nuclear community also recognizes the importance of criticality theory, which, as a stepping stone to computational analysis and safety code development, needs to be benchmarked against well-characterized critical experiments. A summary projection of the Department's needs with respect to criticality information includes (1) hands-on training, (2) criticality and nuclear data, (3) detector systems, (4) uranium- and plutonium-based reactors, and (5) accident analysis. The Workgroup has evaluated, prioritized, and categorized each proposed experiment and program. Transportation/Applications is a new category intended to cover the areas of storage, training, emergency response, and standards. This category has the highest number of priority-1 experiments (nine). Facilities capable of performing experiments include the Los Alamos Critical Experiment Facility (LACEF) along with Area V at Sandia National Laboratory. The LACEF continues to house the most significant collection of critical assemblies in the Western Hemisphere. The staff of this facility and Area V are trained and certified, and documentation is current. ENIWG will continue to work with the nuclear community to identify and prioritize experiments because there is an overwhelming need for critical experiments to be performed for basic research and code validation

  7. Forecast of criticality experiments and experimental programs needed to support nuclear operations in the United States of America: 1994--1999

    International Nuclear Information System (INIS)

    Rutherford, D.

    1994-03-01

    This Forecast is generated by the Chair of the Experiment Needs Identification Workgroup (ENIWG), with input from the Department of Energy and the nuclear community. One of the current concerns addressed by ENIWG was the Defense Nuclear Facilities Safety Board's Recommendation 93-2. This Recommendation delineated the need for a critical experimental capability, which includes (1) a program of general-purpose experiments, (2) improving the information base, and (3) ongoing departmental programs. The nuclear community also recognizes the importance of criticality theory, which, as a stepping stone to computational analysis and safety code development, needs to be benchmarked against well-characterized critical experiments. A summary projection of the Department's needs with respect to criticality information includes (1) hands-on training, (2) criticality and nuclear data, (3) detector systems, (4) uranium- and plutonium-based reactors, and (5) accident analysis. The Workgroup has evaluated, prioritized, and categorized each proposed experiment and program. Transportation/Applications is a new category intended to cover the areas of storage, training, emergency response, and standards. This category has the highest number of priority-1 experiments (nine). Facilities capable of performing experiments include the Los Alamos Critical Experiment Facility (LACEF) along with Area V at Sandia National Laboratories. The LACEF continues to house the most significant collection of critical assemblies in the Western Hemisphere. The staff of this facility and Area V are trained and certified, and documentation is current. ENIWG will continue to work with the nuclear community to identify and prioritize experiments because there is an overwhelming need for critical experiments to be performed for basic research and code validation

  8. Risk management for operations of the Los Alamos critical experiments facility

    International Nuclear Information System (INIS)

    Paternoster, R.; Butterfield, K.

    1998-01-01

    The Los Alamos Critical Experiments Facility (LACEF) currently operates two burst reactors (Godiva-IV and Skua), one solution assembly (SHEBA-2, the Solution High-Energy Burst Assembly), two fast-spectrum benchmark assemblies (Flattop and Big Ten), and five general-purpose remote assembly machines which may be configured with nuclear materials and assembled by remote control. SNM storage vaults support these and other operations at the site. With this diverse set of operations, several approaches are possible in the analysis and management of risk. The most conservative approach would be to write a safety analysis report (SAR) for each assembly and experiment. A more cost-effective approach is to analyze the probability and consequences of several classes of operations representative of operations on each critical assembly machine and to envelope the bounding-case accidents. Although the neutron physics of these machines varies widely, the operations performed at LACEF fall into four operational modes: steady-state mode, approach-to-critical mode, prompt-burst mode, and nuclear material operations, which can include critical assembly fuel loading. The operational sequences of each mode are very nearly the same, whether performed on one assembly machine or another. The use of an envelope approach to accident analysis is facilitated by the use of classes of operations and of bounding-case consequence analysis. A simple fault tree analysis of operational modes helps resolve which operations are sensitive to human error and which are initiated by hardware or software failures. Where possible, these errors and failures are blocked by TSR (Technical Safety Requirements) limiting conditions for operation (LCOs)

  9. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can take the form of a full-scope control room simulator, an engineering simulator representing the general behavior of the plant under normal and abnormal conditions, or a model of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training vendor/utility personnel, etc., is how well they represent what is known from industrial experience, large integral experiments, and separate effects tests. Typically, simulation codes are benchmarked against some of these, with the level of agreement required depending upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result their capabilities are continually enhanced, errors are corrected, new situations outside the original design basis are imposed on the code, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is provided through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks is included in the source code, and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients, large integral experiments, and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence, i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer
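The dynamic-benchmarking idea described above (re-running an archived benchmark set on every code release and checking for drift) can be sketched as a small regression check; the benchmark names, numbers, and tolerance below are purely illustrative, not MAAP4 data.

```python
# Archived reference results from a previous, validated code release;
# benchmark names and values are invented for this sketch.
reference = {
    "plant_transient_A": 155.2,   # e.g. peak pressure, bar
    "integral_test_B": 612.0,     # e.g. peak clad temperature, K
    "separate_effect_C": 0.043,   # e.g. condensation rate, kg/s
}

def run_benchmark(name):
    """Stand-in for re-running one benchmark with the current code version."""
    current = {
        "plant_transient_A": 155.9,
        "integral_test_B": 610.5,
        "separate_effect_C": 0.044,
    }
    return current[name]

def check_benchmarks(reference, rel_tol=0.05):
    """Return the benchmarks whose result drifted beyond rel_tol from the archive."""
    failures = []
    for name, ref in reference.items():
        if abs(run_benchmark(name) - ref) / abs(ref) > rel_tol:
            failures.append(name)
    return failures
```

An empty failure list means the archived benchmarks are preserved by the new release; any entry flags a transient whose behaviour changed and must be re-examined before distribution.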

  10. Processing and benchmarking of the Evaluated Nuclear Data File ENDF/B-VIII.0β4 cross-section library by analysis of a series of critical experimental benchmarks using the Monte Carlo code MCNP(X) and NJOY2016

    Directory of Open Access Journals (Sweden)

    Kabach Ouadie

    2017-12-01

    To validate the new Evaluated Nuclear Data File (ENDF/B-VIII.0β4) library, 31 different critical cores were selected and used for a benchmark test of the important parameter keff. The four utilized libraries were processed using the Nuclear Data Processing Code NJOY2016. The results obtained with the ENDF/B-VIII.0β4 library were compared against those calculated with the ENDF/B-VI.8, ENDF/B-VII.0, and ENDF/B-VII.1 libraries using the Monte Carlo N-Particle (MCNP(X)) code. All the MCNP(X) calculations of keff values with these four libraries were compared with the experimentally measured results, which are available in the International Criticality Safety Benchmark Evaluation Project. The obtained results are discussed and analyzed in this paper.

  11. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels of coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy)

  12. Validation of SCALE-4 criticality sequences using ENDF/B-V data

    International Nuclear Information System (INIS)

    Bowman, S.M.; Wright, R.Q.; DeHart, M.D.; Taniuchi, H.

    1993-01-01

    The SCALE code system developed at Oak Ridge National Laboratory contains criticality safety analysis sequences that include the KENO V.a Monte Carlo code for calculation of the effective multiplication factor. These sequences are widely used for criticality safety analyses performed both in the United States and abroad. The purpose of the current work is to validate the SCALE-4 criticality sequences with an ENDF/B-V cross-section library for future distribution with SCALE-4. The library used for this validation is a broad-group library (44 groups) collapsed from the 238-group SCALE library. Extensive data testing of both the 238-group and the 44-group libraries included 10 fast and 18 thermal CSEWG benchmarks and 5 other fast benchmarks. Both libraries contain approximately 300 nuclides and are, therefore, capable of modeling most systems, including those containing spent fuel or radioactive waste. The validation of the broad-group library used 93 critical experiments as benchmarks. The range of experiments included 60 light-water-reactor fuel rod lattices, 13 mixed-oxide fuel rod lattices, and 15 other low- and high-enriched uranium critical assemblies
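A validation suite of this kind typically condenses the per-benchmark keff results into a calculational bias and an uncertainty; the sketch below uses invented keff values and a deliberately simplified upper-subcritical-limit formula, not the formal SCALE validation method.

```python
import statistics

# Invented keff results from a set of critical-experiment benchmarks.
keff_results = [0.9987, 1.0012, 0.9995, 1.0003, 0.9978, 1.0008, 0.9991]

bias = statistics.mean(keff_results) - 1.0   # mean deviation from critical
sigma = statistics.stdev(keff_results)       # spread over the benchmark set

# Simplified illustrative limit: biased expectation minus two standard
# deviations and a flat administrative margin.
usl = 1.0 + bias - 2.0 * sigma - 0.02
```

A real analysis would weight each benchmark by its experimental uncertainty and apply a statistically rigorous tolerance-limit method; the structure of the calculation, however, is as above.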

  13. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  14. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches, and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, scientific literature, and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  15. Cross-section sensitivity and uncertainty analysis of the FNG copper benchmark experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kodeli, I., E-mail: ivan.kodeli@ijs.si [Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana (Slovenia); Kondo, K. [Karlsruhe Institute of Technology, Postfach 3640, D-76021 Karlsruhe (Germany); Japan Atomic Energy Agency, Rokkasho-mura (Japan); Perel, R.L. [Racah Institute of Physics, Hebrew University of Jerusalem, IL-91904 Jerusalem (Israel); Fischer, U. [Karlsruhe Institute of Technology, Postfach 3640, D-76021 Karlsruhe (Germany)

    2016-11-01

    A neutronics benchmark experiment on a copper assembly was performed from late 2014 to early 2015 at the 14-MeV Frascati neutron generator (FNG) of ENEA Frascati, with the objective of providing the experimental database required for the validation of the copper nuclear data relevant for ITER design calculations, including the related uncertainties. The paper presents the pre- and post-analysis of the experiment performed using cross-section sensitivity and uncertainty codes, both deterministic (SUSD3D) and Monte Carlo (MCSEN5). Cumulative reaction rates and neutron flux spectra, their sensitivity to the cross sections, as well as the corresponding uncertainties were estimated for different selected detector positions up to ∼58 cm in the copper assembly. This permitted, in the pre-analysis phase, optimization of the geometry, the detector positions and the choice of activation reactions, and, in the post-analysis phase, interpretation of the results of the measurements and the calculations, conclusions on the quality of the relevant nuclear cross-section data, and estimation of the uncertainties in the calculated nuclear responses and fluxes. Large uncertainties in the calculated reaction rates and neutron spectra of up to 50%, rarely observed at this level in benchmark analyses using today's nuclear data, were predicted, particularly high for fast reactions. Observed C/E (dis)agreements with values as low as 0.5 partly confirm these predictions. Benchmark results are therefore expected to contribute to the improvement of both cross-section and covariance data evaluations.
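Sensitivity/uncertainty codes such as SUSD3D propagate cross-section covariances to a response through the first-order "sandwich rule", var(R) = S^T C S; the sketch below illustrates the arithmetic with an invented sensitivity vector and covariance matrix, not FNG copper data.

```python
import math

# Invented relative sensitivities (dR/R)/(dx/x) of one response to three
# cross sections, and an invented relative covariance matrix for them.
S = [0.8, -0.3, 0.5]
C = [
    [0.010, 0.002, 0.000],
    [0.002, 0.040, 0.001],
    [0.000, 0.001, 0.020],
]

def sandwich(S, C):
    """Relative standard deviation of the response: sqrt(S^T C S)."""
    n = len(S)
    var = sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))
    return math.sqrt(var)

rel_sd = sandwich(S, C)   # e.g. 0.12 means a 12% one-sigma uncertainty
```

In practice S and C are group-wise vectors and matrices per nuclide and reaction, but the quadratic form is the same.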

  16. Benchmarking the Particle Background in the LHC Experiments

    CERN Document Server

    Gschwendtner, E

    2000-01-01

    The experiments for the Large Hadron Collider LHC at CERN have to work for 15 years in the presence of a very high particle background of photons in the energy range from 100 keV to 10 MeV and neutrons in the range from thermal energies (≈0.025 eV) to 20 MeV. The background is so high that it becomes a major design criterion for the ATLAS experiment, a general-purpose experiment at LHC that will be operational in the year 2005. The exact level of this background is poorly known. At present an uncertainty factor of five has to be assumed, to which the limited knowledge of the shower processes in the absorber material and the ensuing neutron and photon production is estimated to contribute a factor of 2.5. So far, the background has been assessed only through extensive Monte Carlo evaluation with the particle transport code FLUKA. The lack of relevant measurements, which were not done up to now, is to a large extent responsible for this uncertainty. Hence it is essential to benchmark t...

  17. Forecast of criticality experiments and experimental programs needed to support nuclear operations in the United States of America: 1994--1999

    Energy Technology Data Exchange (ETDEWEB)

    Rutherford, D.

    1994-03-01

    This Forecast is generated by the Chair of the Experiment Needs Identification Workgroup (ENIWG), with input from the Department of Energy and the nuclear community. One of the current concerns addressed by ENIWG was the Defense Nuclear Facilities Safety Board's Recommendation 93-2. This Recommendation delineated the need for a critical experimental capability, which includes (1) a program of general-purpose experiments, (2) improving the information base, and (3) ongoing departmental programs. The nuclear community also recognizes the importance of criticality theory, which, as a stepping stone to computational analysis and safety code development, needs to be benchmarked against well-characterized critical experiments. A summary projection of the Department's needs with respect to criticality information includes (1) hands-on training, (2) criticality and nuclear data, (3) detector systems, (4) uranium- and plutonium-based reactors, and (5) accident analysis. The Workgroup has evaluated, prioritized, and categorized each proposed experiment and program. Transportation/Applications is a new category intended to cover the areas of storage, training, emergency response, and standards. This category has the highest number of priority-1 experiments (nine). Facilities capable of performing experiments include the Los Alamos Critical Experiment Facility (LACEF) along with Area V at Sandia National Laboratories. The LACEF continues to house the most significant collection of critical assemblies in the Western Hemisphere. The staff of this facility and Area V are trained and certified, and documentation is current. ENIWG will continue to work with the nuclear community to identify and prioritize experiments because there is an overwhelming need for critical experiments to be performed for basic research and code validation.

  18. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Sitnikov, Catalina; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and the experience of others to improve the enterprise. Starting from the analysis of the performance and underlying the strengths and weaknesses of the enterprise it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision “from the whole towards the parts” (a fragmented image of the enterprise’s value chain) redu...

  19. Evaluation of the HTR-10 Reactor as a Benchmark for Physics Code QA

    International Nuclear Information System (INIS)

    William K. Terry; Soon Sam Kim; Leland M. Montierth; Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-01-01

    The HTR-10 is a small (10 MWt) pebble-bed research reactor intended to develop pebble-bed reactor (PBR) technology in China. It will be used to test and develop fuel, verify PBR safety features, demonstrate combined electricity production and co-generation of heat, and provide experience in PBR design, operation, and construction. As the only currently operating PBR in the world, the HTR-10 can provide data of great interest to everyone involved in PBR technology. In particular, if it yields data of sufficient quality, it can be used as a benchmark for assessing the accuracy of computer codes proposed for use in PBR analysis. This paper summarizes the evaluation for the International Reactor Physics Experiment Evaluation Project (IRPhEP) of data obtained in measurements of the HTR-10's initial criticality experiment for use as benchmarks for reactor physics codes

  20. FENDL-3 benchmark test with neutronics experiments related to fusion in Japan

    International Nuclear Information System (INIS)

    Konno, Chikara; Ohta, Masayuki; Takakura, Kosuke; Ochiai, Kentaro; Sato, Satoshi

    2014-01-01

    Highlights: • FENDL-3.0 was benchmarked against integral experiments with DT neutron sources in Japan. • FENDL-3.0 is as accurate as FENDL-2.1 and JENDL-4.0, or more so. • Some data in FENDL-3.0 may have problems. Abstract: The IAEA supports and promotes the gathering of the best data from evaluated nuclear data libraries for each nucleus involved in fusion reactor applications and compiles these data as FENDL. In 2012, the IAEA released a major update to FENDL, FENDL-3.0, which extends the neutron energy range from 20 MeV to greater than 60 MeV for 180 nuclei. We have benchmarked FENDL-3.0 against in situ and time-of-flight (TOF) experiments using the DT neutron source at FNS at the JAEA, and TOF experiments using the DT neutron source at OKTAVIAN at Osaka University in Japan. The Monte Carlo code MCNP-5 and the ACE file of FENDL-3.0 supplied by the IAEA were used for the calculations. The results were compared with the measured ones and with those obtained using the previous version, FENDL-2.1, and the latest version, JENDL-4.0. It is concluded that FENDL-3.0 is as accurate as, or more so than, FENDL-2.1 and JENDL-4.0, although some data in FENDL-3.0 may be problematic

  1. Application of an integrated PC-based neutronics code system to criticality safety

    International Nuclear Information System (INIS)

    Briggs, J.B.; Nigg, D.W.

    1991-01-01

    An integrated system of neutronics and radiation transport software suitable for operation in an IBM PC-class environment has been under development at the Idaho National Engineering Laboratory (INEL) for the past four years. Four modules within the system are particularly useful for criticality safety applications. Using the neutronics portion of the integrated code system, effective neutron multiplication values (keff values) have been calculated for a variety of benchmark critical experiments for metal systems (plutonium and uranium), aqueous systems (plutonium and uranium), and LWR fuel rod arrays. A description of the codes and methods used in the analysis and the results of the benchmark critical experiments are presented in this paper. In general, excellent agreement was found between calculated and experimental results. (Author)

  2. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

    Benchmarking has different meaning to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry / functional benchmarking, process / generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore it is important to know what kind of benchmarking is suitable to a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  3. Compilation report of VHTRC temperature coefficient benchmark calculations

    Energy Technology Data Exchange (ETDEWEB)

    Yasuda, Hideshi; Yamane, Tsuyoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1995-11-01

    A calculational benchmark problem has been proposed by JAERI to an IAEA Coordinated Research Program, 'Verification of Safety Related Neutronic Calculation for Low-enriched Gas-cooled Reactors', to investigate the accuracy of calculation results obtained by using codes of the participating countries. This benchmark is made on the basis of assembly heating experiments at a pin-in-block type critical assembly, VHTRC. Requested calculation items are the cell parameters, effective multiplication factor, temperature coefficient of reactivity, reaction rates, fission rate distribution, etc. Seven institutions from five countries have joined the benchmark work. Calculation results are summarized in this report with some remarks by the authors. Each institute analyzed the problem by applying the calculation code system which was prepared for the HTGR development of the individual country. The values of the most important parameter, keff, by all institutes showed good agreement with each other and with the experimental ones within 1%. The temperature coefficients agreed within 13%. The values of several cell parameters calculated by some institutes did not agree with those of the others. It will be necessary to check the calculation conditions again to get better agreement. (J.P.N.).
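The temperature coefficient of reactivity requested in the benchmark is, in its simplest isothermal form, the change of reactivity rho = (keff - 1)/keff per unit temperature; a worked sketch with invented keff values (not VHTRC data) follows.

```python
def reactivity(k):
    """Reactivity in dk/k units."""
    return (k - 1.0) / k

# Invented keff values at two isothermal assembly temperatures.
k_cold, T_cold = 1.0125, 300.0   # keff at 300 K
k_hot,  T_hot  = 1.0050, 400.0   # keff at 400 K

# Isothermal temperature coefficient of reactivity (per kelvin);
# a negative value means reactivity falls as the core heats up.
alpha = (reactivity(k_hot) - reactivity(k_cold)) / (T_hot - T_cold)
```

Comparing such coefficients between codes is sensitive because the small reactivity difference amplifies any disagreement in the underlying keff values, which is consistent with the 13% spread reported above against the 1% agreement in keff itself.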

  4. Nonlinear Resonance Benchmarking Experiment at the CERN Proton Synchrotron

    CERN Document Server

    Hofmann, I; Giovannozzi, Massimo; Martini, M; Métral, Elias

    2003-01-01

    As a first step of a space charge - nonlinear resonance benchmarking experiment over a large number of turns, beam loss and emittance evolution were measured over 1 s on a 1.4 GeV kinetic energy flat-bottom in the presence of a single octupole. By lowering the working point towards the resonance a gradual transition from a loss-free core emittance blow-up to a regime dominated by continuous loss was found. Our 3D simulations with analytical space charge show that trapping on the resonance due to synchrotron oscillation causes the observed core emittance growth as well as halo formation, where the latter is explained as the source of the observed loss.

  5. Benchmarking of venous thromboembolism prophylaxis practice with ENT.UK guidelines.

    Science.gov (United States)

    Al-Qahtani, Ali S

    2017-05-01

    The aim of this study was to benchmark our guidelines for the prevention of venous thromboembolism (VTE) in the ENT surgical population against ENT.UK guidelines, and also to encourage healthcare providers to utilize benchmarking as an effective method of improving performance. The study design is a prospective descriptive analysis. The setting of this study is a tertiary referral centre (Assir Central Hospital, Abha, Saudi Arabia). In this study, we benchmark our practice guidelines for the prevention of VTE in the ENT surgical population against those of ENT.UK to mitigate any gaps. The ENT.UK guidelines 2010 were downloaded from the ENT.UK website. Our guidelines were compared against the possibilities that our performance either meets or falls short of the ENT.UK guidelines. Immediate corrective actions take place if there is a quality chasm between the two guidelines. The ENT.UK guidelines are evidence-based and updated, and may serve as a role model for adoption and benchmarking. Our guidelines were accordingly amended to contain all factors required in providing a quality service to ENT surgical patients. While not given appropriate attention, benchmarking is a useful tool in improving the quality of health care. It allows learning from others' practices and experiences, and works towards closing any quality gaps. In addition, benchmarking clinical outcomes is critical for quality improvement and for informing decisions concerning service provision. It is recommended to be included in the list of quality improvement methods of healthcare services.

  6. Thermal reactor benchmark tests on JENDL-2

    International Nuclear Information System (INIS)

    Takano, Hideki; Tsuchihashi, Keichiro; Yamane, Tsuyoshi; Akino, Fujiyoshi; Ishiguro, Yukio; Ido, Masaru.

    1983-11-01

    A group constant library for the thermal reactor standard nuclear design code system SRAC was produced by using the evaluated nuclear data JENDL-2. Furthermore, the group constants for 235U were also calculated from ENDF/B-V. Thermal reactor benchmark calculations were performed using the produced group constant library. The selected benchmark cores are two water-moderated lattices (TRX-1 and 2), two heavy-water-moderated cores (DCA and ETA-1), two graphite-moderated cores (SHE-8 and 13), and eight critical experiments for criticality safety. The effective multiplication factors and lattice cell parameters were calculated and compared with the experimental values. The results are summarized as follows. (1) Effective multiplication factors: The results by JENDL-2 are considerably improved in comparison with those by ENDF/B-IV. The best agreement is obtained by using JENDL-2 and ENDF/B-V (only 235U) data. (2) Lattice cell parameters: For rho28 (the ratio of epithermal to thermal 238U captures) and C* (the ratio of 238U captures to 235U fissions), the values calculated by JENDL-2 are in good agreement with the experimental values. The delta28 values (the ratio of 238U to 235U fissions) are overestimated, as also found for the fast reactor benchmarks. The rho02 values (the ratio of epithermal to thermal 232Th captures) calculated by JENDL-2 or ENDF/B-IV are considerably underestimated. The functions of the SRAC system have continued to be extended according to the needs of its users. A brief description will be given, in Appendix B, of the extended parts of the SRAC system together with the input specification. (author)

  7. Benchmarking in Czech Higher Education

    OpenAIRE

    Plaček Michal; Ochrana František; Půček Milan

    2015-01-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Base...

  8. Benchmarking the Particle Background in the Large Hadron Collider Experiments

    CERN Document Server

    Gschwendtner, Edda; Fabjan, Christian Wolfgang; Hessey, N P; Otto, Thomas

    2002-01-01

    Background benchmarking measurements have been made to check the low-energy processes which will contribute via nuclear reactions to the radiation background in the LHC experiments at CERN. Previously these processes were only evaluated with Monte Carlo simulations, estimated to be reliable within an uncertainty factor of 2.5. Measurements were carried out in an experimental set-up comparable to the shielding of ATLAS, one of the general-purpose experiments at LHC. The absolute yield and spectral measurements of photons and neutrons emanating from the final stages of the hadronic showers were made with a Bi4Ge3O12 (BGO) detector. The particle transport code FLUKA was used for detailed simulations. Comparison between measurements and simulations shows that they agree within 20% and hence the uncertainty factor resulting from the shower processes can be reduced to a factor of 1.2.

  9. Analysis of the European results on the HTTR's core physics benchmarks

    International Nuclear Information System (INIS)

    Raepsaet, X.; Damian, F.; Ohlig, U.A.; Brockmann, H.J.; Haas, J.B.M. de; Wallerboss, E.M.

    2002-01-01

    Within the frame of the European contract HTR-N1 calculations are performed on the benchmark problems of the HTTR's start-up core physics experiments initially proposed by the IAEA in a Co-ordinated Research Programme. Three European partners, the FZJ in Germany, NRG and IRI in the Netherlands, and CEA in France, have joined this work package with the aim to validate their calculational methods. Pre-test and post-test calculational results, obtained by the partners, are compared with each other and with the experiment. Parts of the discrepancies between experiment and pre-test predictions are analysed and tackled by different treatments. In the case of the Monte Carlo code TRIPOLI4, used by CEA, the discrepancy between measurement and calculation at the first criticality is reduced to Δk/k∼0.85%, when considering the revised data of the HTTR benchmark. In the case of the diffusion codes, this discrepancy is reduced to: Δk/k∼0.8% (FZJ) and 2.7 or 1.8% (CEA). (author)

  10. Benchmark calculation of SCALE-PC 4.3 CSAS6 module and burnup credit criticality analysis

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Hee Sung; Ro, Seong Gy; Shin, Young Joon; Kim, Ik Soo [Korea Atomic Energy Research Institute, Taejon (Korea)

    1998-12-01

    Calculational biases of the SCALE-PC CSAS6 module for PWR spent fuel, metallized spent fuel, and solutions of nuclear materials have been determined on the basis of the benchmark to be 0.01100, 0.02650, and 0.00997, respectively. With the aid of the code system, a nuclear criticality safety analysis for the spent fuel storage pool has been carried out to determine the minimum burnup of spent fuel required for safe storage. The criticality safety analysis is performed using three types of isotopic composition of spent fuel: ORIGEN2-calculated isotopic compositions; the conservative inventory obtained from the multiplication of ORIGEN2-calculated isotopic compositions by isotopic correction factors; and the conservative inventory of only U, Pu and 241Am. The results show that the minimum burnups for the three cases are 990, 6190, and 7270 MWd/tU, respectively, in the case of 5.0 wt% initial enriched spent fuel. (author). 74 refs., 68 figs., 35 tabs.
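The minimum-burnup determination described above amounts to finding where the bias-corrected keff curve of the stored fuel crosses the safety limit; the sketch below does this by linear interpolation over invented (burnup, keff) points, not the values from this analysis.

```python
# Invented (burnup in MWd/tU, keff) points for the loaded storage rack,
# plus a hypothetical calculational bias and upper safety limit.
points = [(0.0, 0.985), (2000.0, 0.962), (4000.0, 0.925)]
bias, limit = 0.011, 0.95

def min_burnup(points, bias, limit):
    """Smallest burnup where keff + bias <= limit, by linear interpolation."""
    if points[0][1] + bias <= limit:
        return 0.0                      # even fresh fuel satisfies the limit
    for (b0, k0), (b1, k1) in zip(points, points[1:]):
        if k0 + bias > limit >= k1 + bias:
            return b0 + (b1 - b0) * (k0 + bias - limit) / (k0 - k1)
    return None                         # limit never met over the data range

burnup = min_burnup(points, bias, limit)
```

Because keff decreases monotonically with burnup for this kind of loading curve, the first bracketing interval gives the minimum acceptable burnup directly.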

  11. Critical experiment study on uranyl nitrate solution experiment facility

    International Nuclear Information System (INIS)

    Zhu Qingfu; Shi Yongqian; Wang Jinrong

    2005-01-01

    The Uranyl Nitrate Solution Experiment Facility was constructed for research on nuclear criticality safety. In this paper, the configuration of the facility is introduced, and a series of critical experiments on uranyl nitrate solution is then described, performed for various uranium concentrations under different conditions, i.e. with or without neutron absorbers in the core and with or without a water reflector outside the core. The critical volume and the minimum 235U critical mass for different uranium concentrations are presented. Finally, a theoretical analysis of the experimental results is given. (authors)

  12. Automatic generation of 3D fine mesh geometries for the analysis of the venus-3 shielding benchmark experiment with the Tort code

    International Nuclear Information System (INIS)

    Pescarini, M.; Orsi, R.; Martinelli, T.

    2003-01-01

    In many practical radiation transport applications today, the cost of solving refined, large-size and complex multi-dimensional problems lies not so much in computing as in the cumbersome effort required by an expert to prepare a detailed geometrical model and to verify and validate that it is correct and represents, to a specified tolerance, the real design or facility. This situation is particularly relevant and frequent in reactor core criticality and shielding calculations with three-dimensional (3D) general-purpose radiation transport codes, which require a very large number of meshes and high-performance computers. The need has clearly emerged for tools that ease the task of the physicist or engineer by reducing the time required, that facilitate the verification of correctness through effective graphical display and, finally, that help the interpretation of the results obtained. The paper shows the results of efforts in this field through detailed simulations of a complex shielding benchmark experiment. In the context of the activities proposed by the OECD/NEA Nuclear Science Committee (NSC) Task Force on Computing Radiation Dose and Modelling of Radiation-Induced Degradation of Reactor Components (TFRDD), the ENEA-Bologna Nuclear Data Centre contributed with an analysis of the VENUS-3 low-flux neutron shielding benchmark experiment (SCK/CEN-Mol, Belgium). One of the targets of the work was to test the BOT3P system, originally developed at the Nuclear Data Centre in ENEA-Bologna and now released to the OECD/NEA Data Bank for free distribution. BOT3P, an ancillary system of the DORT (2D) and TORT (3D) SN codes, permits flexible automatic generation of spatial mesh grids in Cartesian or cylindrical geometry, through combinatorial geometry algorithms, following a simplified user-friendly approach. This system demonstrated its validity also in core criticality analyses, for example the Lewis MOX fuel benchmark, permitting to easily

  13. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Twenty-one shielding benchmark problems are presented for evaluating the calculational algorithms and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. (author)

  14. Benchmark calculation of subchannel analysis codes

    International Nuclear Information System (INIS)

    1996-02-01

    In order to evaluate the analysis capabilities of various subchannel codes used in the thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and the major findings were as follows: (1) For single-phase flow mixing experiments between two channels, the calculated water temperature distributions along the flow direction agreed with the experimental results when the turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) For two-phase flow mixing experiments between two channels, in high water flow rate cases the calculated distributions of air and water flows in each channel agreed well with the experimental results. In low water flow rate cases, on the other hand, the air mixing rates were underestimated. (3) For two-phase flow mixing experiments among multiple channels, the calculated mass velocities at the channel exit under steady-state conditions agreed with the experimental values within about 10%. However, the predictive errors of the exit qualities were as high as 30%. (4) For critical heat flux (CHF) experiments, two different results were obtained. One code indicated that the CHFs calculated using the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that the CHFs were well predicted by the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) For droplet entrainment and deposition experiments, the predictive capability was significantly increased by improving the correlations. On the other hand, a remarkable discrepancy between the codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high quality cases, while another code overestimated the droplet flow rate and underestimated the liquid film flow rate in low quality cases. (J.P.N.)
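The agreement criteria quoted in the findings (within about 10% on exit mass velocity, errors up to 30% on exit quality) amount to a simple code-versus-experiment percent-error check; the numerical values in `pairs` below are hypothetical, chosen only to mirror those two outcomes:

```python
def percent_error(calc, meas):
    """Absolute code-versus-experiment error, in percent of the measurement."""
    return 100.0 * abs(calc - meas) / meas

# Hypothetical (calculated, measured) pairs: the first within the ~10%
# criterion, the second off by ~30%, mirroring the findings above.
pairs = [(1020.0, 1000.0), (1300.0, 1000.0)]
print([percent_error(c, m) <= 10.0 for c, m in pairs])   # [True, False]
```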

  15. How benchmarking can improve patient nutrition.

    Science.gov (United States)

    Ellis, Jane

    Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.

  16. Neutronic computational modeling of the ASTRA critical facility using MCNPX

    International Nuclear Information System (INIS)

    Rodriguez, L. P.; Garcia, C. R.; Milian, D.; Milian, E. E.; Brayner, C.

    2015-01-01

    The Pebble Bed Very High Temperature Reactor is considered a prominent candidate among Generation IV nuclear energy systems. Nevertheless, it faces an important challenge due to the insufficient validation of the computer codes currently available for use in its design and safety analysis. In this paper, a detailed IAEA computational benchmark, announced in IAEA-TECDOC-1694 in the framework of the Coordinated Research Project 'Evaluation of High Temperature Gas Cooled Reactor (HTGR) Performance', was solved in support of the Generation IV computer code validation effort using the MCNPX ver. 2.6e computational code. IAEA-TECDOC-1694 summarized a set of four calculational benchmark problems performed at the ASTRA critical facility. The benchmark problems include criticality experiments, control rod worth measurements and reactivity measurements. The ASTRA critical facility at the Kurchatov Institute in Moscow was used to simulate the neutronic behavior of nuclear pebble bed reactors. (Author)
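Control rod worth measurements of the kind included in this benchmark are commonly reduced from a pair of multiplication factors via the standard static reactivity definition ρ = (k − 1)/k; a minimal sketch (the k_eff values below are hypothetical, not ASTRA results):

```python
def rod_worth_pcm(k_out, k_in):
    """Static reactivity worth of a control rod from the core k_eff with the
    rod withdrawn (k_out) and inserted (k_in), in pcm (1e-5 dk/k)."""
    rho_out = (k_out - 1.0) / k_out
    rho_in = (k_in - 1.0) / k_in
    return (rho_out - rho_in) * 1.0e5

# Hypothetical k_eff values from paired criticality calculations.
print(round(rod_worth_pcm(1.0050, 0.9900), 1))   # ~1507.6 pcm
```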

  17. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    This paper reviews the role of human resource management (HRM), which today plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much-needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  18. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding against nuclear radiation for a variety of applications. These benchmarks are compared with results from computer-code models and are useful for the development of more accurate cross-section libraries, for the development of radiation transport codes, and for building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling the more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper, benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and the continuous-energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).

  19. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan were compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are newly presented, in addition to the twenty-one problems proposed previously, for evaluating the calculational algorithms and accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  20. Construction of STACY (Static Experiment Critical Facility)

    International Nuclear Information System (INIS)

    Murakami, Kiyonobu; Onodera, Seiji; Hirose, Hideyuki

    1998-08-01

    Two critical assemblies, STACY (Static Experiment Critical Facility) and TRACY (Transient Experiment Critical Facility), were constructed in NUCEF (Nuclear Fuel Cycle Safety Engineering Research Facility) to promote research on criticality safety at reprocessing facilities. STACY aims at providing criticality data for uranium nitrate solution, plutonium nitrate solution and their mixture while varying the solution fuel concentration, the core tank shape and size, and the neutron reflection conditions. STACY achieved first criticality in February 1995 and passed the licensing inspection by the STA (Science and Technology Agency of Japan) in May. After that, a series of critical experiments commenced with 10 w/o enriched uranium solution. This report describes the outline of STACY at the end of FY 1996. (author)

  1. OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results

    International Nuclear Information System (INIS)

    DeHart, M.D.

    1993-01-01

    Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in their ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are 149Sm, 151Sm, and 155Gd
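The participant-to-participant spread reported above can be illustrated by computing each submission's percent deviation from the all-participant mean; the concentration values below are hypothetical, not taken from the Phase I-B results:

```python
def deviations_from_mean(values):
    """Percent deviation of each submitted result from the all-participant
    mean, the kind of spread statistic used to compare benchmark submissions."""
    mean = sum(values) / len(values)
    return [100.0 * (v - mean) / mean for v in values]

# Hypothetical isotopic concentrations from four participants (arbitrary units).
devs = deviations_from_mean([0.98, 1.00, 1.02, 1.04])
print([round(d, 2) for d in devs])   # [-2.97, -0.99, 0.99, 2.97]
```

All four hypothetical submissions fall well inside the 10% band quoted for most actinides.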

  2. Benchmark experiment on vanadium assembly with D-T neutrons. Leakage neutron spectrum measurement

    Energy Technology Data Exchange (ETDEWEB)

    Kokooo; Murata, I.; Nakano, D.; Takahashi, A. [Osaka Univ., Suita (Japan); Maekawa, F.; Ikeda, Y.

    1998-03-01

    The fusion neutronics benchmark experiments were performed for vanadium and a vanadium alloy using a slab assembly and the time-of-flight (TOF) method. The leakage neutron spectra were measured from 50 keV to 15 MeV, and comparisons were made with MCNP-4A calculations using the evaluated nuclear data of JENDL-3.2, JENDL Fusion File and FENDL/E-1.0. (author)

  3. Benchmark comparisons of evaluated nuclear data files

    International Nuclear Information System (INIS)

    Resler, D.A.; Howerton, R.J.; White, R.M.

    1994-05-01

    With the availability and maturity of several evaluated nuclear data files, it is timely to compare the results of integral tests with calculations using these different files. We discuss here our progress in making integral benchmark tests of the following nuclear data files: ENDL-94, ENDF/B-V and -VI, JENDL-3, JEF-2, and BROND-2. The methods used to process these evaluated libraries in a consistent way into applications files for use in Monte Carlo calculations are presented. Using these libraries, we are calculating and comparing to experiment the k-eff of 68 fast critical assemblies of 233,235U and 239Pu with reflectors of various materials and thicknesses

  4. The Zeus Copper/Uranium Critical Experiment at NCERC

    International Nuclear Information System (INIS)

    Sanchez, Rene G.; Hayes, David K.; Bounds, John Alan; Jackman, Kevin R.; Goda, Joetta M.

    2012-01-01

    A critical experiment was performed to provide nuclear data in a non-thermal neutron spectrum and to re-establish experimental capability relevant to the Stockpile Stewardship and Technical Nuclear Forensics programs. Irradiation foils were placed at specific locations in the Zeus all-oralloy critical experiment to obtain fission ratios. These ratios were compared with those from other critical assemblies to assess the degree of softness of the neutron spectrum. The experiment was performed at the National Criticality Experiments Research Center (NCERC) in Nevada.

  5. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly, one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computers and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered
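Point (3), reproduction of analytical benchmark results, can be illustrated with the simplest transport benchmark of all: uncollided attenuation through a purely absorbing slab, whose exact answer is exp(−σ_t x). The sketch below compares a first-order numerical sweep against that analytical result; the cross section and thickness are arbitrary illustration values.

```python
import math

def uncollided_transmission(sigma_t, x):
    """Analytical benchmark: the uncollided flux fraction through a purely
    absorbing slab of thickness x is exp(-sigma_t * x)."""
    return math.exp(-sigma_t * x)

def step_method(sigma_t, x, n):
    """Crude first-order numerical sweep: attenuate the beam over n steps."""
    flux, dx = 1.0, x / n
    for _ in range(n):
        flux *= 1.0 - sigma_t * dx
    return flux

analytic = uncollided_transmission(0.5, 4.0)   # exp(-2) ~ 0.13534
numeric = step_method(0.5, 4.0, 100_000)
print(abs(numeric - analytic) < 1e-4)          # True: the sweep reproduces the benchmark
```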

  6. Benchmark test of nuclear data with pulsed sphere experiment using OKTAVIAN

    International Nuclear Information System (INIS)

    Ichihara, Ch.; Hayashi, S.; Yamamoto, J.; Kimura, I.; Takahashi, A.

    1996-01-01

    Nuclear data files such as JENDL Fusion File, JENDL-3.2, ENDF/B-VI and BROND-2 have been compared to pulsed sphere experiments on 14 samples, modeled with the MCNP4A code, for the purpose of benchmarking the data libraries. The results are in good agreement for Li, Cr, Mn, Cu and Mo. Satisfactory results have been obtained for Zr and Nb with JENDL Fusion File and JENDL-3.2. For W, quite good results were obtained with ENDF/B-VI. There are disagreements for the other samples, such as LiF, TEFLON, Si, Ti and Co

  7. Integral Full Core Multi-Physics PWR Benchmark with Measured Data

    Energy Technology Data Exchange (ETDEWEB)

    Forget, Benoit; Smith, Kord; Kumar, Shikhar; Rathbun, Miriam; Liang, Jingang

    2018-04-11

    In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering in the CASL and NEAMS programs. These research efforts, and similar efforts worldwide, aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. As with all analysis tools, verification and validation are essential to guarantee the proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g. critical experiments, flow loops, etc.), and there is a lack of the relevant multi-physics benchmark measurements that are necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark, based on two operational cycles of a commercial nuclear power plant, that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, and core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.

  8. Dosimetry at the Los Alamos Critical Experiments Facility: Past, present, and future

    International Nuclear Information System (INIS)

    Malenfant, R.E.

    1993-10-01

    Although the primary reason for the existence of the Los Alamos Critical Experiments Facility is to provide basic data on the physics of systems of fissile material, the physical arrangements and the ability to provide sources of radiation have led to applications in all types of radiation dosimetry. In the broad definition of radiation phenomena, the facility has provided sources to evaluate biological effects, radiation shielding and transport, and basic parameters such as delayed neutron parameters. Within the last 15 years, many of the radiation measurements have been directed to the calibration and intercomparison of dosimetry related to nuclear criticality safety. Future plans include (1) new applications of Godiva IV, a bare-metal pulse assembly, for dosimetry (including an evaluation of neutron and gamma-ray room return); (2) a proposal to relocate the Health Physics Research Reactor from Oak Ridge National Laboratory to Los Alamos, which will provide the opportunity to continue the application of a primary benchmark source to radiation dosimetry; and (3) a proposal to employ SHEBA, a low-enrichment solution assembly, for accident dosimetry and evaluation.

  9. Validating analysis methodologies used in burnup credit criticality calculations

    International Nuclear Information System (INIS)

    Brady, M.C.; Napolitano, D.G.

    1992-01-01

    The concept of allowing reactivity credit for the depleted (or burned) state of pressurized water reactor fuel in the licensing of spent fuel facilities introduces a new challenge to members of the nuclear criticality community. The primary difference in this analysis approach is the technical ability to calculate spent fuel compositions (or inventories) and to predict their effect on the system multiplication factor. Isotopic prediction codes are used routinely for in-core physics calculations and the prediction of radiation source terms for both thermal and shielding analyses, but represent an innovation for criticality specialists. This paper discusses two methodologies currently being developed to specifically evaluate isotopic composition and reactivity for the burnup credit concept. A comprehensive approach to benchmarking and validating the methods is also presented. This approach involves the analysis of commercial reactor critical data, fuel storage critical experiments, chemical assay isotopic data, and numerical benchmark calculations

  10. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    Science.gov (United States)

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.
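The arithmetic behind the reported 18% improvement follows directly from the figures in the abstract, and also shows why the team had not yet reached the 1992 national benchmark:

```python
# Figures quoted in the abstract (minutes between surgical cases).
baseline, improved = 19.9, 16.3   # St. Joseph's before / after the project
benchmark = 13.5                  # 1992 national benchmark

improvement = 100.0 * (baseline - improved) / baseline
print(round(improvement))         # 18 (% improvement, as reported)
print(improved <= benchmark)      # False: the national benchmark was not yet met
```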

  11. OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, M.D.

    1993-01-01

    Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in their ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are {sup 149}Sm, {sup 151}Sm, and {sup 155}Gd.

  12. Benchmarking Is Associated With Improved Quality of Care in Type 2 Diabetes

    Science.gov (United States)

    Hermans, Michel P.; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-01-01

    OBJECTIVE To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. RESEARCH DESIGN AND METHODS Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. RESULTS Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. Primary end point of HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% patients met the SBP target (P benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile. PMID:23846810

  13. Benchmarking gate-based quantum computers

    Science.gov (United States)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
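The identity-circuit idea can be sketched without any quantum SDK: compose self-inverse gates in cancelling pairs and verify that a noiseless simulation returns the initial state, so that any deviation measured on real hardware is attributable to gate errors. The tiny state-vector simulator below is illustrative only, not the authors' benchmarking code.

```python
# Single-qubit gates as 2x2 real matrices (X and H are self-inverse).
X = [[0.0, 1.0], [1.0, 0.0]]
s = 1.0 / 2.0 ** 0.5
H = [[s, s], [s, -s]]

def apply(gates, state):
    """Apply each 2x2 gate matrix in turn to a single-qubit state vector."""
    for g in gates:
        state = [g[0][0] * state[0] + g[0][1] * state[1],
                 g[1][0] * state[0] + g[1][1] * state[1]]
    return state

# An identity circuit: gates arranged so the net operation is the identity.
circuit = [X, X, H, H]
final = apply(circuit, [1.0, 0.0])   # start in |0>
fidelity = abs(final[0]) ** 2        # probability of reading |0> again
print(round(fidelity, 6))            # noiseless simulation gives 1.0
```

On a real device the measured return probability falls below 1 as the circuit is made longer, which is what makes such circuits sensitive probes of gate error.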

  14. Proceedings of the Nuclear Criticality Technology Safety Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Rene G. Sanchez

    1998-04-01

    This document contains summaries of most of the papers presented at the 1995 Nuclear Criticality Technology Safety Project (NCTSP) meeting, which was held May 16 and 17 in San Diego, CA. The meeting was divided into seven sessions, covering the following topics: (1) Criticality Safety of Project Sapphire; (2) Relevant Experiments for Criticality Safety; (3) Interactions with the Former Soviet Union; (4) Misapplications and Limitations of Monte Carlo Methods Directed Toward Criticality Safety Analyses; (5) Monte Carlo Vulnerabilities of Execution and Interpretation; (6) Monte Carlo Vulnerabilities of Representation; and (7) Benchmark Comparisons.

  15. Construction of new critical experiment facilities in JAERI

    International Nuclear Information System (INIS)

    Takeshita, Isao; Itahashi, Takayuki; Ogawa, Kazuhiko; Tonoike, Kotaro; Matsumura, Tatsuro; Miyoshi, Yoshinori; Nakajima, Ken; Izawa, Naoki

    1995-01-01

    Japan Atomic Energy Research Institute (JAERI) has promoted an experimental research program on criticality safety since the early 1980s, and two new critical facilities, the Static Experiment Critical Facility (STACY) and the Transient Experiment Critical Facility (TRACY), were completed in 1994 in the Nuclear Fuel Cycle Safety Engineering Research Facility (NUCEF) at the JAERI Tokai Research Establishment. STACY is designed to obtain critical mass data for uranium nitrate solution, plutonium nitrate solution and their mixture, which are extensively handled in LWR fuel reprocessing plants. TRACY is the critical facility in which criticality accident phenomena are demonstrated with low-enriched uranium nitrate solution. For criticality safety experiments with both facilities, a Fuel Treatment System is attached to them, in which the composition and concentration of the uranium and plutonium nitrate solutions can be varied widely so as to obtain experimental data covering the fuel solution conditions in a reprocessing plant. The design performance of both critical facilities was confirmed through mock-up tests of important components and cold function tests. Hot function tests started in January 1995, and some of the results for STACY are reported. (author)

  16. EBR-II Reactor Physics Benchmark Evaluation Report

    Energy Technology Data Exchange (ETDEWEB)

    Pope, Chad L. [Idaho State Univ., Pocatello, ID (United States); Lum, Edward S [Idaho State Univ., Pocatello, ID (United States); Stewart, Ryan [Idaho State Univ., Pocatello, ID (United States); Byambadorj, Bilguun [Idaho State Univ., Pocatello, ID (United States); Beaulieu, Quinton [Idaho State Univ., Pocatello, ID (United States)

    2017-12-28

    This report provides a reactor physics benchmark evaluation with associated uncertainty quantification for the critical configuration of the April 1986 Experimental Breeder Reactor II Run 138B core configuration.

  17. The OECD/NRC BWR full-size fine-mesh bundle tests benchmark (BFBT)-general description

    International Nuclear Information System (INIS)

    Sartori, Enrico; Hochreiter, L.E.; Ivanov, Kostadin; Utsuno, Hideaki

    2004-01-01

    The need to refine models for best-estimate calculations based on good-quality experimental data has been expressed at many recent meetings in the field of nuclear applications. These needs should not be limited to currently available macroscopic approaches but should extend to next-generation approaches that focus on more microscopic processes. One of the most valuable databases identified for thermal-hydraulics modelling was developed by the Nuclear Power Engineering Corporation (NUPEC). Part of this database will be made available for an international benchmark exercise. These fine-mesh, high-quality data encourage advancement in the insufficiently developed field of two-phase flow theory. Considering that the present theoretical approach is relatively immature, the benchmark specification is designed to systematically assess and compare the participants' numerical models for the prediction of detailed void distributions and critical powers. The development of truly mechanistic models for critical power prediction is currently under way; such models should include elementary processes such as void distribution, droplet deposition, liquid film entrainment, etc. The benchmark problem includes both macroscopic and microscopic measurement data: the sub-channel-grade void fraction data are regarded as the macroscopic data, and the digitized computer graphic images as the microscopic data. The proposed benchmark consists of two parts (phases), each consisting of different exercises. Phase 1, void distribution benchmark: Exercise 1, steady-state sub-channel-grade benchmark; Exercise 2, steady-state microscopic-grade benchmark; Exercise 3, transient macroscopic-grade benchmark. Phase 2, critical power benchmark: Exercise 1, steady-state benchmark; Exercise 2, transient benchmark. (author)

  18. Analogue experiments as benchmarks for models of lava flow emplacement

    Science.gov (United States)

    Garel, F.; Kaminski, E. C.; Tait, S.; Limare, A.

    2013-12-01

    During an effusive volcanic eruption, crisis management is based mainly on predicting the advance and velocity of the lava flow. The spreading of a lava flow, seen as a gravity current, depends on its "effective rheology" and on the effusion rate. Fast-computing models have arisen in the past decade to predict, in near real time, lava flow paths and rates of advance. This type of model, crucial for mitigating volcanic hazards and organizing potential evacuations, has mainly been compared a posteriori to real cases of emplaced lava flows. The input parameters of such simulations applied to natural eruptions, especially effusion rate and topography, are often not known precisely and are difficult to evaluate after the eruption. It is therefore not straightforward to identify the causes of discrepancies between model outputs and observed lava emplacement, whereas comparison of models with controlled laboratory experiments is easier. The challenge for numerical simulations of lava flow emplacement is to model the simultaneous advance and thermal structure of viscous lava flows. To provide original constraints to be used later in benchmark numerical simulations, we have performed lab-scale experiments investigating the cooling of isoviscous gravity currents. The simplest experimental set-up is as follows: silicone oil, whose viscosity (around 5 Pa.s) varies by less than a factor of 2 over the temperature range studied, is injected from a point source onto a horizontal plate and spreads axisymmetrically. The oil is injected hot and progressively cools down to ambient temperature away from the source. Once the flow is developed, it presents a stationary radial thermal structure whose characteristics depend on the input flow rate. In addition to the experimental observations, we developed in Garel et al., JGR, 2012 a theoretical model confirming the relationship between supply rate, flow advance and stationary surface thermal structure. We also provide

  19. HELIOS2: Benchmarking Against Experiments for Hexagonal and Square Lattices

    International Nuclear Information System (INIS)

    Simeonov, T.

    2009-01-01

    HELIOS2 is a 2D transport-theory program for fuel burnup and gamma-flux calculation. It solves the neutron and gamma transport equations in a general two-dimensional geometry bounded by a polygon of straight lines. The transport solver may be chosen between the method of collision probabilities and the method of characteristics. The former is well known for its successful application in the preparation of cross-section data banks for 3D simulators, for all lattice types of WWER, PWR, BWR, AGR, RBMK and CANDU reactors. The latter, the method of characteristics, helps in areas where the computational requirements of collision probabilities become too large for practical application. This paper presents the application of HELIOS2 and the method of characteristics to some computationally large benchmarks. The analysis combines comparisons with measured data from the Hungarian ZR-6 reactor and from JAERI's tank-type critical assembly to verify and validate HELIOS2 and the method of characteristics for WWER assembly imitators; configurations with different absorber types (ZrB2, B4C, Eu2O3 and Gd2O3); and critical configurations with stainless steel in the reflector. Core eigenvalues and reaction rates are compared; accounting for the uncertainties, the results are generally excellent. Special attention is given in this paper to the effect of an iron radial reflector: comparisons with measurements from the Temporary International Collective and the tank-type critical assembly for stainless-steel- and iron-reflected cores are presented. The reactivity effect calculated by HELIOS2 is in very good agreement with the measurements. (Authors)

  20. Calculation of Upper Subcritical Limits for Nuclear Criticality in a Repository

    International Nuclear Information System (INIS)

    J.W. Pegram

    1998-01-01

    The purpose of this document is to present the methodology to be used for development of the Subcritical Limit (SL) for post-closure conditions for the Yucca Mountain repository. The SL is a value based on a set of benchmark criticality multiplication factor (k-eff) results that are outputs of the MCNP calculation method. The SL accounts for calculational biases and associated uncertainties resulting from the use of MCNP as the method of assessing k-eff. The context for an SL estimate includes the range of applicability (based on the set of MCNP results) and the type of SL required for the application at hand. This document includes illustrative calculations for each of three approaches. The data sets used for the example calculations are identified in Section 5.1. These represent three waste categories, and SLs for each of these sets of experiments are computed in this document. Future MCNP data sets will be analyzed using the methods discussed here. The treatment of the biases evaluated on sets of k-eff results via MCNP is statistical in nature. This document does not address additional non-statistical contributions to the bias margin, acknowledging that regulatory requirements may impose additional administrative penalties. Potentially, there are other biases or margins that should be accounted for when assessing criticality (k-eff). Only aspects of the bias as determined using the stated assumptions and benchmark critical data sets are included in the methods and sample calculations of this document. The set of benchmark experiments used in the validation of the computational system should be representative of the composition, configuration, and nuclear characteristics of the application at hand. In this work, a range of critical experiments is the basis for establishing the SL for three categories of waste types that will be in the repository. The ultimate purpose of this document is to present methods that will effectively characterize the MCNP
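    The statistical derivation of a subcritical limit from a set of benchmark k-eff results can be illustrated with a minimal sketch. All numbers below are hypothetical, and the tolerance factor and administrative margin are illustrative placeholders, not the values prescribed by the document's methodology:

```python
import statistics

# Hypothetical MCNP k-eff results for a set of benchmark critical
# experiments (each experiment is critical, so the expected value is 1.0).
keff = [0.9962, 0.9988, 1.0013, 0.9975, 0.9991, 1.0005, 0.9969, 0.9984]

mean_keff = statistics.mean(keff)
bias = mean_keff - 1.0           # negative bias: the code underpredicts k-eff
sigma = statistics.stdev(keff)   # spread of the benchmark results

# Single-sided tolerance factor for a 95/95-type statement; the proper value
# depends on the sample size and is taken from a tolerance-factor table
# (the value 2.0 here is an illustrative placeholder).
K = 2.0

# Subcritical limit: start from critical (1.0), take credit for the bias only
# when it is negative, then subtract the statistical uncertainty and an
# administrative margin of subcriticality (0.05 assumed for illustration).
margin = 0.05
sl = 1.0 + min(bias, 0.0) - K * sigma - margin
print(f"bias = {bias:+.4f}, sigma = {sigma:.4f}, SL = {sl:.4f}")
```

    In an actual validation, trending of the bias against system parameters over the range of applicability would precede this step, and the tolerance factor would be chosen for the stated confidence level and sample size.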

  1. Critical experiments of JMTRC MEU cores

    International Nuclear Information System (INIS)

    Nagaoka, Y.; Takeda, K.; Shimakawa, S.; Koike, S.; Oyamada, R.

    1984-01-01

    The JMTRC, the critical facility of the Japan Materials Testing Reactor (JMTR), went critical on August 29, 1983, with 14 medium-enriched uranium (MEU, 45%) fuel elements. Experiments are now being carried out to measure the changes in various reactor characteristics between the previous HEU core and the new MEU-fueled core. This paper describes the results obtained thus far on critical mass, excess reactivity, control rod worths and flux distribution, including preliminary neutronics calculations for the experiments using the SRAC code. (author)

  2. Benchmark experiments on a lead reflected system and calculations on the geometry of the experimental facility using most of the commonly available nuclear data libraries

    International Nuclear Information System (INIS)

    Guillemot, M.; Colomb, G.

    1985-01-01

    A series of criticality benchmark experiments with a small LWR-type core, reflected by 30 cm of lead, was defined jointly by SEC (Service d'Etude de Criticite), Fontenay-aux-Roses, and SRD (Safety and Reliability Directorate). These experiments are highly representative of the reflecting effect of lead, since the contribution of the lead to the reactivity was assessed at about 30% in Δk. The experiments were carried out by SRSC (Service de Recherche en Surete et Criticite), Valduc, in December 1983 in the sub-critical facility called APPARATUS B. In addition, they confirmed and measured the effect on reactivity of a water gap between the core and the lead reflector: with a water gap of less than 1 cm, the reactivity can be greater than that of a core reflected directly by the lead or by over 20 cm of water. The experimental results were analyzed to a large extent by SRD with the aid of the MONK Monte Carlo code, and in part by SEC with the aid of the MORET Monte Carlo code. All the results obtained are presented in the summary tables. These experiments allowed the different available cross-section libraries to be compared.

  3. Neutron Activation and Thermoluminescent Detector Responses to a Bare Pulse of the CEA Valduc SILENE Critical Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Thomas Martin [ORNL; Isbell, Kimberly McMahan [ORNL; Lee, Yi-kang [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette; Gagnier, Emmanuel [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette; Authier, Nicolas [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Piot, Jerome [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Jacquet, Xavier [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Rousseau, Guillaume [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille; Reynolds, Kevin H. [Y-12 National Security Complex

    2016-09-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 11, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  4. Developing Evidence for Action on the Postgraduate Experience: An Effective Local Instrument to Move beyond Benchmarking

    Science.gov (United States)

    Sampson, K. A.; Johnston, L.; Comer, K.; Brogt, E.

    2016-01-01

    Summative and benchmarking surveys to measure the postgraduate student research experience are well reported in the literature. While useful, we argue that local instruments that provide formative resources with an academic development focus are also required. If higher education institutions are to move beyond the identification of issues and…

  5. Nuclear criticality information system

    International Nuclear Information System (INIS)

    Koponen, B.L.; Hampel, V.E.

    1981-01-01

    The nuclear criticality safety program at LLNL began in the 1950s with a critical measurements program which produced benchmark data until the late 1960s. The same period saw the rapid development of computer technology useful both for computer modeling of fissile systems and for computer-aided management and display of the computational benchmark data. Database management grew in importance as the amount of information increased and as experimental programs were terminated. Within the criticality safety program at LLNL, we began at that time to develop a computer library of benchmark data for validation of computer codes and cross sections. As part of this effort, we prepared a computer-based bibliography of criticality measurements on relatively simple systems. However, it is only now that some of these computer-based resources can be made available to the nuclear criticality safety community at large. This technology transfer is being accomplished by the DOE Technology Information System (TIS), a dedicated, advanced information system. The NCIS database is described.

  6. An Overview of the International Reactor Physics Experiment Evaluation Project

    International Nuclear Information System (INIS)

    Briggs, J. Blair; Gulliford, Jim

    2014-01-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties associated with advanced modeling and simulation accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. Two Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA) activities, the International Criticality Safety Benchmark Evaluation Project (ICSBEP), initiated in 1992, and the International Reactor Physics Experiment Evaluation Project (IRPhEP), initiated in 2003, have been identifying existing integral experiment data, evaluating those data, and providing integral benchmark specifications for methods and data validation for nearly two decades. Data provided by those two projects will be of use to the international reactor physics, criticality safety, and nuclear data communities for future decades. An overview of the IRPhEP and a brief update of the ICSBEP are provided in this paper.

  7. 239Pu prompt fission neutron spectra impact on a set of criticality and experimental reactor benchmarks

    International Nuclear Information System (INIS)

    Peneliau, Y.; Litaize, O.; Archier, P.; De Saint Jean, C.

    2014-01-01

    A large set of nuclear data is being investigated to improve the predictions of the new neutron transport simulation codes. With the next generation of nuclear power plants (GEN IV projects), one expects to reduce the calculated uncertainties, which come mainly from nuclear data and are still very large, before taking integral information into account in the adjustment process. In France, future nuclear power plant concepts will probably use MOX fuel, either in Sodium Fast Reactors or in Gas Cooled Fast Reactors. Consequently, knowledge of 239Pu cross sections and other nuclear data is a crucial issue for reducing these sources of uncertainty. The Prompt Fission Neutron Spectra (PFNS) of 239Pu are part of these relevant data (an IAEA working group is even dedicated to PFNS), and the work presented here deals with this particular topic. The main international data files (i.e. JEFF-3.1.1, ENDF/B-VII.0, JENDL-4.0, BRC-2009) have been considered and compared with two different spectra, from the works of Maslov and Kornilov respectively. The spectra are first compared by calculating their mathematical moments in order to characterize them. Then a reference calculation using the whole JEFF-3.1.1 evaluation file is performed and compared with a calculation performed with a new evaluation file in which the data block containing the fission spectra (MF=5, MT=18) is replaced by the investigated spectra (one for each evaluation). A set of benchmarks is used to analyze the effects of the PFNS, covering criticality cases and mock-up cases with various neutron flux spectra (thermal, intermediate, and fast). Data from many ICSBEP experiments are used (PU-SOL-THERM, PU-MET-FAST, PU-MET-INTER and PU-MET-MIXED), and French mock-up experiments are also investigated (EOLE for a thermal neutron flux spectrum and MASURCA for a fast neutron flux spectrum). 
This study shows that many experiments and neutron parameters are very sensitive to

  8. Critical and sub-critical experiments on U-BeO lattices

    International Nuclear Information System (INIS)

    Benoist, P.; Gourdon, Ch.; Martelly, J.; Sagot, M.; Wanner, G.

    1958-01-01

    Sub-critical experiments have allowed us to measure the material buckling of lattices of natural uranium and beryllium oxide with a 15 cm grid, made up of uranium bars of 2.60, 2.92, 3.56 and 4.40 cm diameter. A critical experiment was then conducted with hollow 1.35 per cent enriched uranium bars. A study of U-BeO lattices with an 18.03 cm grid is currently being conducted. (author) [fr

  9. Preparation of data for criticality safety evaluation of nuclear fuel cycle facilities

    International Nuclear Information System (INIS)

    Okuno, Hiroshi; Suyama, Kenya; Yoshiyama, Hiroshi; Tonoike, Kotaro; Miyoshi, Yoshinori

    2005-01-01

    Nuclear Criticality Safety Handbook/Data Collection, Version 2, was submitted to the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan as a contract report. In this paper, its outline and related recent work are presented. After an introduction in Chapter 1, useful information for obtaining atomic number densities is collected in Chapter 2. Nuclear characteristic parameters for 11 nuclear fuels are provided in Chapter 3, and subcriticality judgment graphs are given in Chapter 4. Estimated critical and estimated lower-limit critical values are supplied for the 11 nuclear fuels in Chapter 5, as results of calculations using the Japanese Evaluated Nuclear Data Library, JENDL-3.2, and the continuous-energy Monte Carlo neutron transport code MVP. The results of benchmark calculations based on the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook are summarized into six fuel categories in Chapter 6. As for recent work, subcriticality judgment graphs for U-SiO2 and Pu-SiO2 were obtained. Benchmark calculations were made with the combination of the latest version of the library, JENDL-3.3, and the MVP code for a series of STACY experiments, and the estimated critical and estimated lower-limit critical values of 10 wt%-enriched uranium nitrate solutions were calculated. (author)

  10. Critical experiments facility and criticality safety programs at JAERI

    International Nuclear Information System (INIS)

    Kobayashi, Iwao; Tachimori, Shoichi; Takeshita, Isao; Suzaki, Takenori; Miyoshi, Yoshinori; Nomura, Yasushi

    1985-10-01

    Nuclear criticality safety is becoming a key issue in Japan in the safety considerations for nuclear installations other than reactors, such as spent fuel reprocessing facilities, plutonium fuel fabrication facilities, large-scale hot laboratories, and so on. In particular, a large-scale spent fuel reprocessing facility is being designed and would be constructed in the near future; extensive experimental studies are therefore needed for the compilation of Japan's own technical standards and also for verification of safety in a potential criticality accident, in order to obtain public acceptance. The Japan Atomic Energy Research Institute is proceeding with a construction program for a new criticality safety experimental facility where criticality data can be obtained for the solution fuels mainly handled in a reprocessing facility, and where chemical process experiments can be performed to investigate abnormal phenomena, e.g. plutonium behavior in the solvent extraction process using pulsed columns. In FY 1985 the detailed design of the facility will be completed, and licensing review by the government would start in FY 1986. Experiments would start in FY 1990. Research subjects and the main specifications of the facility are described. (author)

  11. Compilation of benchmark results for fusion related Nuclear Data

    International Nuclear Information System (INIS)

    Maekawa, Fujio; Wada, Masayuki; Oyama, Yukio; Ichihara, Chihiro; Makita, Yo; Takahashi, Akito

    1998-11-01

    This report compiles the results of benchmark tests for validation of evaluated nuclear data to be used in the nuclear design of fusion reactors. Part of the results was obtained under the activities of the Fusion Neutronics Integral Test Working Group, organized by members of both the Japan Nuclear Data Committee and the Reactor Physics Committee. The following three benchmark experiments were used for the tests: (i) leakage neutron spectrum measurements from slab assemblies at the D-T neutron source at FNS/JAERI, (ii) in-situ neutron and gamma-ray measurement experiments (so-called clean benchmark experiments), also at FNS, and (iii) pulsed sphere experiments for leakage neutron and gamma-ray spectra at the D-T neutron source facility of Osaka University, OKTAVIAN. The evaluated nuclear data tested were JENDL-3.2, the JENDL Fusion File, FENDL/E-1.0 and newly selected data for FENDL/E-2.0. Comparisons of benchmark calculations with the experiments for twenty-one elements, i.e., Li, Be, C, N, O, F, Al, Si, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zr, Nb, Mo, W and Pb, are summarized. (author). 65 refs

  12. Deflection-based method for seismic response analysis of concrete walls: Benchmarking of CAMUS experiment

    International Nuclear Information System (INIS)

    Basu, Prabir C.; Roshan, A.D.

    2007-01-01

    A number of shake-table tests were conducted on a scaled-down model of a concrete wall as part of the CAMUS experiment. The experiments were conducted between 1996 and 1998 in the CEA facilities in Saclay, France. Benchmarking of the CAMUS experiments was undertaken as part of the coordinated research program on 'Safety Significance of Near-Field Earthquakes' organised by the International Atomic Energy Agency (IAEA). The deflection-based method was adopted for the benchmarking exercise. The non-linear static procedure of the deflection-based method has two basic steps: pushover analysis, and determination of the target displacement or performance point. Pushover analysis is an analytical procedure to assess the capacity of a structural system to withstand seismic loading, considering redundancies and inelastic deformation. The outcome of a pushover analysis is the force-displacement (base shear versus top/roof displacement) curve of the structure, obtained by step-by-step non-linear static analysis with increasing values of load. The second step is to determine the target displacement, also known as the performance point: the likely maximum displacement of the structure due to a specified seismic input motion. Established procedures, FEMA-273 and ATC-40, are available to determine this maximum deflection. The responses of the CAMUS test specimen were determined by the deflection-based method, and the analytically calculated values compare well with the test results.
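    The target-displacement step can be sketched with the FEMA-273 coefficient method, in which the elastic spectral displacement is scaled by a set of correction coefficients. The coefficient and spectral values below are illustrative assumptions, not those of the CAMUS specimen:

```python
import math

def target_displacement(C0, C1, C2, C3, Sa, Te, g=9.81):
    """FEMA-273 coefficient method: likely maximum roof displacement (m)
    for an effective period Te (s) and spectral acceleration Sa (in g)."""
    return C0 * C1 * C2 * C3 * Sa * g * Te**2 / (4.0 * math.pi**2)

# Illustrative values only: C0 relates roof displacement to spectral
# displacement; C1, C2, C3 correct for inelastic response, hysteresis
# degradation and dynamic P-delta effects, respectively.
dt = target_displacement(C0=1.3, C1=1.1, C2=1.0, C3=1.0, Sa=0.8, Te=0.35)
print(f"target displacement = {dt * 1000:.1f} mm")
```

    The performance point is then read off the pushover curve at this roof displacement, and member demands are checked at that state.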

  13. Three-dimensional RAMA fluence methodology benchmarking

    International Nuclear Information System (INIS)

    Baker, S. P.; Carter, R. G.; Watkins, K. E.; Jones, D. B.

    2004-01-01

    This paper describes the benchmarking of the RAMA Fluence Methodology software, that has been performed in accordance with U. S. Nuclear Regulatory Commission Regulatory Guide 1.190. The RAMA Fluence Methodology has been developed by TransWare Enterprises Inc. through funding provided by the Electric Power Research Inst., Inc. (EPRI) and the Boiling Water Reactor Vessel and Internals Project (BWRVIP). The purpose of the software is to provide an accurate method for calculating neutron fluence in BWR pressure vessels and internal components. The Methodology incorporates a three-dimensional deterministic transport solution with flexible arbitrary geometry representation of reactor system components, previously available only with Monte Carlo solution techniques. Benchmarking was performed on measurements obtained from three standard benchmark problems which include the Pool Criticality Assembly (PCA), VENUS-3, and H. B. Robinson Unit 2 benchmarks, and on flux wire measurements obtained from two BWR nuclear plants. The calculated to measured (C/M) ratios range from 0.93 to 1.04 demonstrating the accuracy of the RAMA Fluence Methodology in predicting neutron flux, fluence, and dosimetry activation. (authors)

  14. Critical experiments with mixed oxide fuel

    International Nuclear Information System (INIS)

    Harris, D.R.

    1997-01-01

    This paper very briefly outlines technical considerations in performing critical experiments on weapons-grade plutonium mixed oxide fuel assemblies. The experiments proposed would use weapons-grade plutonium and Er 2 O 3 at various dissolved boron levels, and for specific fuel assemblies such as the ABBCE fuel assembly with five large water holes. Technical considerations described include the core, the measurements, safety, security, radiological matters, and licensing. It is concluded that the experiments are feasible at the Rensselaer Polytechnic Institute Reactor Critical Facility. 9 refs

  15. Benchmarking of the PHOENIX-P/ANC [Advanced Nodal Code] advanced nuclear design system

    International Nuclear Information System (INIS)

    Nguyen, T.Q.; Liu, Y.S.; Durston, C.; Casadei, A.L.

    1988-01-01

    At Westinghouse, an advanced neutronic methods program was designed to improve the quality of the predictions, enhance flexibility in designing advanced fuel and related products, and improve design lead time. Extensive benchmarking data is presented to demonstrate the accuracy of the Advanced Nodal Code (ANC) and the PHOENIX-P advanced lattice code. Qualification data to demonstrate the accuracy of ANC include comparison of key physics parameters against a fine-mesh diffusion theory code, TORTISE. Benchmarking data to demonstrate the validity of the PHOENIX-P methodologies include comparison of physics predictions against critical experiments, isotopics measurements and measured power distributions from spatial criticals. The accuracy of the PHOENIX-P/ANC Advanced Design System is demonstrated by comparing predictions of hot zero power physics parameters and hot full power core follow against measured data from operating reactors. The excellent performance of this system for a broad range of comparisons establishes the basis for implementation of these tools for core design, licensing and operational follow of PWR [pressurized water reactor] cores at Westinghouse

  16. A simplified 2D HTTR benchmark problem

    International Nuclear Information System (INIS)

    Zhang, Z.; Rahnema, F.; Pounders, J. M.; Zhang, D.; Ougouag, A.

    2009-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of relevant whole-core configurations. In this paper we have created a numerical benchmark problem in a 2D configuration typical of a high-temperature gas-cooled prismatic core. The problem was derived from the HTTR start-up experiment. For code-to-code verification, complex details of the geometry and material specification of the physical experiments are not necessary. To this end, the benchmark problem presented here is derived through simplifications that remove the unnecessary details while retaining the heterogeneity and the major physics properties from the neutronics viewpoint. Also included is a six-group material (macroscopic) cross-section library for the benchmark problem, generated using the lattice depletion code HELIOS. Using this library, benchmark-quality Monte Carlo solutions are provided for three different configurations (all-rods-in, partially-controlled and all-rods-out). The reference solutions include the core eigenvalue, block (assembly) averaged fuel pin fission density distributions, and absorption rates in absorbers (burnable poison and control rods). (authors)

  17. Simulation benchmark based on THAI-experiment on dissolution of a steam stratification by natural convection

    Energy Technology Data Exchange (ETDEWEB)

    Freitag, M., E-mail: freitag@becker-technologies.com; Schmidt, E.; Gupta, S.; Poss, G.

    2016-04-01

    Highlights: • We studied the generation and dissolution of a steam stratification under natural convection. • We performed a computer code benchmark including blind and open phases. • The dissolution of the stratification was predicted only qualitatively by LP and CFD models during the blind simulation phase. - Abstract: Locally enriched hydrogen, as in a stratification, may contribute to early containment failure in the course of severe nuclear reactor accidents. During accident sequences, steam may also accumulate in stratifications, which can directly influence the distribution and ignitability of hydrogen mixtures in containments. An international code benchmark including Computational Fluid Dynamics (CFD) and Lumped Parameter (LP) codes was conducted in the frame of the German THAI program. The basis for the benchmark was experiment TH24.3, which investigates the dissolution of a steam layer subject to natural convection in the steam-air atmosphere of the THAI vessel. The test provides validation data for the development of CFD and LP models simulating the atmosphere in the containment of a nuclear reactor installation. In test TH24.3, saturated steam is injected into the upper third of the vessel, forming a stratification layer which is then mixed by a superposed thermal convection. In this paper the simulation benchmark is evaluated, in addition to a general discussion of the experimental transient of test TH24.3. Concerning the steam stratification build-up and its dilution, the numerical programs showed very different results during the blind evaluation phase but improved noticeably during the open simulation phase.

  18. Comparisons of the MCNP criticality benchmark suite with ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0

    International Nuclear Information System (INIS)

    Kim, Do Heon; Gil, Choong-Sup; Kim, Jung-Do; Chang, Jonghwa

    2003-01-01

    A comparative study has been performed with the latest evaluated nuclear data libraries ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0. The study has been conducted through benchmark calculations for 91 criticality problems with the libraries processed for MCNP4C. The calculation results have been compared with those of the ENDF60 library. The self-shielding effects of the unresolved-resonance (UR) probability tables have also been estimated for each library. The χ² differences between the MCNP results and the experimental data were calculated for each library. (author)
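The χ² comparison mentioned above can be sketched as follows. The benchmark names, k_eff values, and uncertainties below are illustrative placeholders, not results from the paper:

```python
# Hypothetical benchmark results: calculated k_eff (C), experimental k_eff (E),
# and combined 1-sigma uncertainty. Values are illustrative only.
cases = [
    ("HEU-MET-FAST-001",  1.0002, 1.0000, 0.0010),
    ("LEU-SOL-THERM-004", 0.9985, 1.0000, 0.0009),
    ("PU-MET-FAST-001",   1.0011, 1.0000, 0.0013),
]

def chi2(cases):
    """Chi-square of calculation vs. experiment: sum of ((C - E)/sigma)^2."""
    return sum(((c - e) / s) ** 2 for _, c, e, s in cases)

print(round(chi2(cases), 3))  # → 3.534
```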

  19. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These, along with regulatory experience, suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly.

  20. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure the performance of models against a set of defined standards. This paper proposes a benchmarking framework for the evaluation of land model performance and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
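As a toy illustration of the scoring-system challenge, a metric might map a normalized data–model mismatch onto a (0, 1] skill score and combine processes with weights. Everything below (variable names, weights, the exponential mapping) is an assumption for illustration, not the paper's metric:

```python
import math

def nrmse(model, obs):
    """Root-mean-square error normalized by the mean of the observations."""
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))
    return rmse / abs(sum(obs) / len(obs))

def skill(model, obs):
    """Map mismatch onto (0, 1]: 1 is a perfect match, values fall toward 0."""
    return math.exp(-nrmse(model, obs))

# Illustrative observation/model pairs for two processes, with assumed weights.
gpp_obs, gpp_mod = [2.0, 3.1, 4.2], [2.2, 3.0, 4.6]
et_obs, et_mod = [1.0, 1.5, 2.0], [1.1, 1.4, 2.3]
overall = 0.6 * skill(gpp_mod, gpp_obs) + 0.4 * skill(et_mod, et_obs)
```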

  1. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments are planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using material properties for stainless steel and aluminum.

  2. ORSPHERE: CRITICAL, BARE, HEU(93.2)-METAL SPHERE

    Energy Technology Data Exchange (ETDEWEB)

    Margaret A. Marshall

    2013-09-01

    In the early 1970’s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an attempt to recreate the GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950’s (HEU-MET-FAST-001). The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. “The very accurate description of this sphere, as assembled, establishes it as an ideal benchmark for calculational methods and cross-section data files.” (Reference 1) While performing the ORSphere experiments care was taken to accurately document component dimensions (±0.0001 in. for non-spherical parts), masses (±0.01 g), and material data. The experiment was also set up to minimize the amount of structural material in the proximity of the sphere. A three-part sphere was initially assembled with an average radius of 3.4665 in. and was then machined down to an average radius of 3.4420 in. (3.4425 in. nominal). These two spherical configurations were evaluated and judged to be acceptable benchmark experiments; however, the two experiments are highly correlated.
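Density corrections of the kind mentioned above are commonly based on the textbook inverse-square density scaling law for bare fast metal spheres. The sketch below uses that relation with made-up numbers; it is not the evaluation's actual correction procedure.

```python
def corrected_critical_mass(m_meas, rho_meas, rho_ref):
    """Scale a measured bare-sphere critical mass to a reference density.

    For an unreflected, unmoderated fast metal sphere the critical mass
    scales approximately as the inverse square of density: m_c ~ rho**-2.
    """
    return m_meas * (rho_meas / rho_ref) ** 2

# Illustrative numbers only: a mass measured at 18.6 g/cm3, referred to 18.74 g/cm3.
print(round(corrected_critical_mass(52.0, 18.6, 18.74), 2))
```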

  3. Critical experiments supporting close proximity water storage of power reactor fuel. Technical progress report

    International Nuclear Information System (INIS)

    Baldwin, M.N.; Hoovler, G.S.; Eng, R.L.; Welfare, F.G.

    1979-07-01

    Close-packed storage of LWR fuel assemblies is needed in order to expand the capacity of existing underwater storage pools. This increased capacity is required to accommodate the large volume of spent fuel produced by prolonged onsite storage. To provide benchmark criticality data in support of this effort, 20 critical assemblies were constructed that simulated a variety of close-packed LWR fuel storage configurations. Criticality calculations using the Monte Carlo KENO-IV code were performed to provide an analytical basis for comparison with the experimental data. Each critical configuration is documented in sufficient detail to permit the use of these data in validating calculational methods according to ANSI Standard N16.9-1975.

  4. Development and validation of a criticality calculation scheme based on French deterministic transport codes

    International Nuclear Information System (INIS)

    Santamarina, A.

    1991-01-01

    A criticality-safety calculational scheme using the automated deterministic code system APOLLO-BISTRO has been developed. The cell/assembly code APOLLO is used mainly in LWR and HCR design calculations, and its validation spans a wide range of moderation ratios, including voided configurations. Its recent 99-group library and self-shielded cross-sections have been extensively qualified through critical experiments and PWR spent fuel analysis. The PIC self-shielding formalism enables a rigorous treatment of the fuel double heterogeneity in dissolver medium calculations. BISTRO is an optimized multidimensional SN code, part of the modular CCRR package used mainly in FBR calculations. The APOLLO-BISTRO scheme was applied to the 18 experimental benchmarks selected by the OECD/NEACRP Criticality Calculation Working Group. The calculation-experiment discrepancy was within ±1% in ΔK/K and always consistent with the experimental uncertainty margin. In the critical experiments corresponding to a dissolver-type benchmark, our tools computed a satisfactory Keff. In the VALDUC fuel storage experiments, with hafnium plates, the computed Keff ranged between 0.994 and 1.003 for the various water gaps spacing the fuel clusters from the absorber plates. The APOLLO-KENOEUR statistical calculational scheme, based on the same self-shielded multigroup library, supplied consistent results within 0.3% in ΔK/K. (Author)

  5. BFS, a Legacy to the International Reactor Physics, Criticality Safety, and Nuclear Data Communities

    International Nuclear Information System (INIS)

    Briggs, J. Blair; Tsibulya, Anatoly; Rozhikhin, Yevgeniy

    2012-01-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. Two Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA) activities, the International Criticality Safety Benchmark Evaluation Project (ICSBEP), initiated in 1992, and the International Reactor Physics Experiment Evaluation Project (IRPhEP), initiated in 2003, have been identifying existing integral experiment data, evaluating those data, and providing integral benchmark specifications for methods and data validation for nearly two decades. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. Data provided by these two projects will be of use to the international reactor physics, criticality safety, and nuclear data communities for future decades. The Russian Federation has been a major contributor to both projects, with the Institute of Physics and Power Engineering (IPPE) as the major contributor from the Russian Federation. Included in the benchmark specifications from the BFS facilities are 34 critical configurations from BFS-49, 61, 62, 73, 79, 81, 97, 99, and 101; spectral characteristics measurements from BFS-31, 42, 57, 59, 61, 62, 73, 97, 99, and 101; reactivity effects measurements from BFS-62-3A; reactivity coefficients and kinetics measurements from BFS-73; and reaction rate measurements from BFS-42, 61, 62, 73, 97, 99, and 101.

  6. Benchmark physics tests in the metallic-fueled assembly ZPPR-15

    International Nuclear Information System (INIS)

    McFarlane, H.F.; Brumbach, S.B.; Carpenter, S.G.; Collins, P.J.

    1989-01-01

    Results of the first benchmark physics tests of a metallic-fueled, demonstration-size liquid-metal reactor (LMR) are reported. A simple, two-zone, cylindrical conventional assembly was built with three distinctly different compositions to represent the stages of the Integral Fast Reactor fuel cycle. Experiments included criticality, control, power distribution, reaction rate ratios, reactivity coefficients, shielding, kinetics, and spectrum. Analysis was done with three-dimensional nodal diffusion calculations and ENDF/B-V.2 cross sections. Predictions of the ZPPR-15 reactor physics parameters agreed sufficiently well with the measured values to justify confidence in design analyses for metallic-fueled LMRs.

  7. Benchmark physics tests in the metallic-fuelled assembly ZPPR-15

    International Nuclear Information System (INIS)

    McFarlane, H.F.; Brumbach, S.B.; Carpenter, S.G.; Collins, P.J.

    1987-01-01

    Results of the first benchmark physics tests of a metallic-fueled, demonstration-size, liquid metal reactor are reported. A simple, two-zone, cylindrical conventional assembly was built with three distinctly different compositions to represent the stages of the Integral Fast Reactor fuel cycle. Experiments included criticality, control, power distribution, reaction rate ratios, reactivity coefficients, shielding, kinetics and spectrum. Analysis was done with 3-D nodal diffusion calculations and ENDF/B-V.2 cross sections. Predictions of the ZPPR-15 reactor physics parameters agreed sufficiently well with the measured values to justify confidence in design analyses for metallic-fueled LMRs.

  8. Consistency check of iron and sodium cross sections with integral benchmark experiments using a large amount of experimental information

    International Nuclear Information System (INIS)

    Baechle, R.-D.; Hehn, G.; Pfister, G.; Perlini, G.; Matthes, W.

    1984-01-01

    Single-material benchmark experiments are designed to check neutron and gamma cross-sections of importance for deep penetration problems. At various penetration depths a large number of activation detectors and spectrometers are placed to measure the radiation field as completely as possible. The large amount of data measured in benchmark experiments is best evaluated by the global detector concept applied to nuclear data adjustment. A new iteration procedure for the adjustment of a large number of multigroup cross sections is presented, which has now been implemented in the modular adjustment code ADJUST-EUR. A theoretical test problem was devised to check the total program system with high precision. The method and code will be applied to validate the new European data files (JEF and EFF) in progress. (Auth.)

  9. Three-dimensional coupled kinetics/thermal- hydraulic benchmark TRIGA experiments

    International Nuclear Information System (INIS)

    Feltus, Madeline Anne; Miller, William Scott

    2000-01-01

    This research project provides separate effects tests in order to benchmark neutron kinetics models coupled with thermal-hydraulic (T/H) models used in best-estimate codes such as the Nuclear Regulatory Commission's (NRC) RELAP and TRAC code series and industrial codes such as RETRAN. Before this research project was initiated, no adequate experimental data existed for reactivity-initiated transients that could be used to assess coupled three-dimensional (3D) kinetics and 3D T/H codes which have been, or are being, developed around the world. Using various Training, Research, Isotopes, General Atomics (TRIGA) reactor core configurations at the Penn State Breazeale Reactor (PSBR), it is possible to determine the level of neutronics modeling required to describe kinetics and T/H feedback interactions. This research demonstrates that the small compact PSBR TRIGA core does not necessarily behave as a point kinetics reactor, but that this TRIGA can provide actual test results for 3D kinetics code benchmark efforts. This research focused on developing in-reactor tests that exhibited 3D neutronics effects coupled with 3D T/H feedback. A variety of pulses were used to evaluate the level of kinetics modeling needed for prompt temperature feedback in the fuel. Ramps and square waves were used to evaluate the detail of modeling needed for the delayed T/H feedback of the coolant. A stepped ramp was performed to evaluate and verify the derived thermal constants for the specific PSBR TRIGA core loading pattern. As part of the analytical benchmark research, the STAR 3D kinetics code (STAR: Space and Time Analysis of Reactors, Version 5, Level 3, Users Guide, Yankee Atomic Electric Company, YEAC 1758, Bolton, MA) was used to model the transient experiments. The STAR models were coupled with the one-dimensional (1D) WIGL and LRA and 3D COBRA (COBRA IIIC: A digital computer program for steady-state and transient thermal-hydraulic analysis of rod bundle nuclear fuel elements, Battelle
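The baseline against which such 3D effects are judged is the point-kinetics model. A minimal one-delayed-group sketch follows; the parameters are illustrative textbook-style values, not PSBR data.

```python
# Point-kinetics baseline (one effective delayed-neutron group) against which
# 3D kinetics effects are judged. Parameters are illustrative, not PSBR values.
BETA, LAM, GEN = 0.007, 0.08, 4e-5  # delayed fraction, precursor decay (1/s), generation time (s)

def step(n, c, rho, dt):
    """One explicit-Euler step of dn/dt = ((rho-BETA)/GEN)*n + LAM*c,
    dc/dt = (BETA/GEN)*n - LAM*c."""
    dn = ((rho - BETA) / GEN) * n + LAM * c
    dc = (BETA / GEN) * n - LAM * c
    return n + dn * dt, c + dc * dt

# Start at the zero-reactivity steady state: c = BETA*n/(LAM*GEN).
n, c = 1.0, BETA / (LAM * GEN)
for _ in range(1000):
    n, c = step(n, c, 0.0, 1e-6)   # power stays flat at rho = 0
n_flat = n

for _ in range(5000):
    n, c = step(n, c, 0.001, 1e-6)  # small positive step: prompt jump, then slow rise
```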

  10. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    Energy Technology Data Exchange (ETDEWEB)

    Ericson, Sean J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Alvarez, Paul [The Wired Group

    2018-04-13

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  11. Jendl-3.1 iron validation on the PCA-REPLICA (H2O/Fe) shielding benchmark experiment

    International Nuclear Information System (INIS)

    Pescarini, M.; Borgia, M. G.

    1997-03-01

    The PCA-REPLICA (H2O/Fe) neutron shielding benchmark experiment is analysed using the SN 2-D DOT 3.5-E code and the 3-D-equivalent flux synthesis method. This engineering benchmark reproduces the ex-core radial geometry of a PWR, including a mild steel reactor pressure vessel (RPV) simulator, and is designed to test the accuracy of the calculation of the in-vessel neutron exposure parameters. This accuracy is strongly dependent on the quality of the iron neutron cross sections used to describe the nuclear reactions within the RPV simulator. In particular, in this report, the cross sections based on the JENDL-3.1 iron data files are tested through a comparison of the calculated integral and spectral results with the corresponding experimental data. In addition, the present results are compared, on the same benchmark experiment, with those of a preceding ENEA-Bologna validation of the ENDF/B-VI iron cross sections. The integral result comparison indicates that, for all the threshold detectors considered (Rh-103 (n,n') Rh-103m, In-115 (n,n') In-115m and S-32 (n,p) P-32), the JENDL-3.1 natural iron data produce satisfactory results similar to those obtained with the ENDF/B-VI iron data. On the contrary, when the JENDL-3.1 Fe-56 data file is used, strongly underestimated results are obtained for the lower-energy threshold detectors, Rh-103 and In-115. This fact becomes more evident with increasing neutron penetration depth in the RPV simulator.

  12. ZZ ECN-BUBEBO, ECN-Petten Burnup Benchmark Book, Inventories, Afterheat

    International Nuclear Information System (INIS)

    Kloosterman, Jan Leen

    1999-01-01

    Description of program or function: Contains experimental benchmarks which can be used for the validation of burnup code systems and the accompanying data libraries. Although the benchmarks presented here are thoroughly described in the literature, it is in many cases not straightforward to retrieve unambiguously the correct input data and corresponding results from the benchmark descriptions. Furthermore, results which can easily be measured are sometimes difficult to calculate because of the conversions involved. Therefore, emphasis has been put on clarifying the input of the benchmarks and on presenting the benchmark results in such a way that they can easily be calculated and compared. For more thorough descriptions of the benchmarks themselves, the literature referred to here should be consulted. This benchmark book is divided into 11 chapters/files containing the following in text and tabular form: chapter 1: Introduction; chapter 2: Burnup Credit Criticality Benchmark Phase 1-B; chapter 3: Yankee-Rowe Core V Fuel Inventory Study; chapter 4: H.B. Robinson Unit 2 Fuel Inventory Study; chapter 5: Turkey Point Unit 3 Fuel Inventory Study; chapter 6: Turkey Point Unit 3 Afterheat Power Study; chapter 7: Dickens Benchmark on Fission Product Energy Release of U-235; chapter 8: Dickens Benchmark on Fission Product Energy Release of Pu-239; chapter 9: Yarnell Benchmark on Decay Heat Measurements of U-233; chapter 10: Yarnell Benchmark on Decay Heat Measurements of U-235; chapter 11: Yarnell Benchmark on Decay Heat Measurements of Pu-239.

  13. Benchmarks and Quality Assurance for Online Course Development in Higher Education

    Science.gov (United States)

    Wang, Hong

    2008-01-01

    As online education has entered the mainstream of U.S. higher education, quality assurance in online course development has become a critical topic in distance education. This short article summarizes the major benchmarks related to online course development, listing and comparing the benchmarks of the National Education Association (NEA),…

  14. Validation of the WIMSD4M cross-section generation code with benchmark results

    International Nuclear Information System (INIS)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented

  15. Validation of the WIMSD4M cross-section generation code with benchmark results

    Energy Technology Data Exchange (ETDEWEB)

    Deen, J.R.; Woodruff, W.L. [Argonne National Lab., IL (United States); Leal, L.E. [Oak Ridge National Lab., TN (United States)

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  16. Testing of cross section libraries for TRIGA criticality benchmark

    International Nuclear Information System (INIS)

    Snoj, L.; Trkov, A.; Ravnik, M.

    2007-01-01

    The influence of various up-to-date cross section libraries on the multiplication factor of the TRIGA benchmark, as well as the influence of fuel composition on the multiplication factor of a system composed of various types of TRIGA fuel elements, was investigated. It was observed that keff calculated using the ENDF/B-VII cross section library is systematically higher than that calculated using the ENDF/B-VI cross section library. The main contributions (∼220 pcm) are from 235U and Zr. (author)
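Library-to-library keff differences like those discussed above are conventionally quoted in pcm (1 pcm = 10⁻⁵). A small sketch of the conversion, with made-up keff values rather than results from the paper:

```python
def reactivity_diff_pcm(k_a, k_b):
    """Reactivity difference between two k_eff values in pcm:
    (k_a - k_b) / (k_a * k_b) * 1e5."""
    return (k_a - k_b) / (k_a * k_b) * 1e5

# Illustrative values only, not results from the paper:
print(round(reactivity_diff_pcm(1.0025, 1.0005), 1))  # → 199.4
```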

  17. Measurement, Standards, and Peer Benchmarking: One Hospital's Journey.

    Science.gov (United States)

    Martin, Brian S; Arbore, Mark

    2016-04-01

    Peer-to-peer benchmarking is an important component of rapid-cycle performance improvement in patient safety and quality-improvement efforts. Institutions should carefully examine critical success factors before engaging in peer-to-peer benchmarking in order to maximize growth and change opportunities. Solutions for Patient Safety has proven to be a high-yield engagement for Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center, with measurable improvement in both organizational process and culture. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Monte Carlo burnup simulation of the TAKAHAMA-3 benchmark experiment

    International Nuclear Information System (INIS)

    Dalle, Hugo M.

    2009-01-01

    High-burnup PWR fuel is currently being studied at CDTN/CNEN-MG. The Monte Carlo burnup code system MONTEBURNS is used to characterize the neutronic behavior of the fuel. In order to validate the code system and calculation methodology to be used in this study, the Japanese Takahama-3 benchmark was chosen, as it is the only freely available burnup benchmark experimental data set that partially reproduces the conditions of the fuel under evaluation. The burnup of the three PWR fuel rods of the Takahama-3 burnup benchmark was calculated by MONTEBURNS using the simplest infinite fuel pin cell model and also a more complex representation of an infinite lattice of heterogeneous fuel pin cells. Calculated results for the masses of most isotopes of uranium, neptunium, plutonium, americium, curium and some fission products commonly used as burnup monitors were compared with the Post Irradiation Examination (PIE) values for all three fuel rods. The results show some sensitivity to the MCNP neutron cross-section data libraries, particularly to the temperature at which the evaluated nuclear data files were processed. (author)

  19. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    Science.gov (United States)

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

    This paper shows the performance improvements that have been achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A large number of changes in operational practice, and also in achieved annual savings, can be shown, induced in particular by benchmarking at the process level. Investigation of this question produces some general findings for the inclusion of performance improvement in a benchmarking project and for the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking it should be made quite clear that this outcome depends, on the one hand, on a well-conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  20. Framework for benchmarking online retailing performance using fuzzy AHP and TOPSIS method

    Directory of Open Access Journals (Sweden)

    M. Ahsan Akhtar Hasin

    2012-08-01

    Due to the increasing penetration of internet connectivity, on-line retail is growing from the pioneer phase to increasing integration within people's lives and companies' normal business practices. In this increasingly competitive environment, on-line retail service providers require a systematic and structured approach to gain a cutting edge over their rivals. Thus, the use of benchmarking has become indispensable for accomplishing the superior performance needed to support on-line retail service providers. This paper uses the fuzzy analytic hierarchy process (FAHP) approach to support a generic on-line retail benchmarking process. Critical success factors for on-line retail service have been identified from a structured questionnaire and the literature, and prioritized using fuzzy AHP. Using these critical success factors, the performance of ORENET, an on-line retail service provider, is benchmarked along with four other on-line service providers using the TOPSIS method. Based on the benchmark, their relative ranking is also illustrated.
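A minimal sketch of the TOPSIS ranking step follows. The decision matrix, criteria weights, and provider scores are invented for illustration, and the fuzzy-AHP stage that would supply the weights is omitted; all criteria are treated as benefit criteria.

```python
import math

# Rows: alternatives (hypothetical providers), columns: criteria scores.
X = [
    [7.0, 9.0, 6.0],  # provider A
    [8.0, 6.0, 7.0],  # provider B
    [5.0, 8.0, 9.0],  # provider C
]
w = [0.5, 0.3, 0.2]   # assumed criteria weights (would come from fuzzy AHP)

def topsis(X, w):
    """Closeness of each alternative to the ideal solution, in [0, 1]."""
    cols = range(len(w))
    norms = [math.sqrt(sum(row[j] ** 2 for row in X)) for j in cols]
    V = [[w[j] * row[j] / norms[j] for j in cols] for row in X]
    best = [max(col) for col in zip(*V)]   # positive ideal solution
    worst = [min(col) for col in zip(*V)]  # negative ideal solution
    dist = lambda v, ref: math.sqrt(sum((a - b) ** 2 for a, b in zip(v, ref)))
    return [dist(v, worst) / (dist(v, best) + dist(v, worst)) for v in V]

scores = topsis(X, w)
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
```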

  1. Copper benchmark experiment at the Frascati Neutron Generator for nuclear data validation

    Energy Technology Data Exchange (ETDEWEB)

    Angelone, M., E-mail: maurizio.angelone@enea.it; Flammini, D.; Loreti, S.; Moro, F.; Pillon, M.; Villari, R.

    2016-11-01

    Highlights: • A benchmark experiment was performed using pure copper with 14 MeV neutrons. • The experiment was performed at the Frascati Neutron Generator (FNG). • Activation foils, thermoluminescent dosimeters and scintillators were used to measure reaction rates (RR), nuclear heating and neutron spectra. • The paper presents the RR measurements and the post analysis using MCNP5 and the JEFF-3.1.1, JEFF-3.2 and FENDL-3.1 libraries. • C/Es are presented showing the need for a deep revision of Cu cross sections. - Abstract: A neutronics benchmark experiment on a pure copper block (dimensions 60 × 70 × 60 cm³), aimed at testing and validating the recent nuclear data libraries for fusion applications, was performed at the 14-MeV Frascati Neutron Generator (FNG) as part of a F4E specific grant (F4E-FPA-395-01) assigned to the European Consortium on Nuclear Data and Experimental Techniques. The relevant neutronics quantities (e.g., reaction rates, neutron flux spectra, doses, etc.) were measured using different experimental techniques and the results were compared to the calculated quantities using fusion-relevant nuclear data libraries. This paper focuses on the analyses carried out by ENEA using the activation foils technique. The ¹⁹⁷Au(n,γ)¹⁹⁸Au, ¹⁸⁶W(n,γ)¹⁸⁷W, ¹¹⁵In(n,n′)¹¹⁵In, ⁵⁸Ni(n,p)⁵⁸Co, ²⁷Al(n,α)²⁴Na and ⁹³Nb(n,2n)⁹²Nbᵐ activation reactions were used. The foils were placed at eight different positions along the Cu block and irradiated with 14 MeV neutrons. Activation measurements were performed by means of a High Purity Germanium (HPGe) detector. Detailed simulation of the experiment was carried out using the MCNP5 Monte Carlo code and the European JEFF-3.1.1 and 3.2 nuclear cross-section data files for neutron transport, and the IRDFF-v1.05 library for the reaction rates in activation foils. The calculated reaction rates (C) were compared to the experimental quantities (E) and
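The C/E comparison used in such analyses reduces to elementwise ratios of calculated to measured reaction rates; the values below are illustrative placeholders, not the FNG measurements:

```python
# C/E (calculated-to-experimental) comparison for activation reaction rates.
# Reaction-rate values are illustrative placeholders only.
measured = {"197Au(n,g)": 2.10e-24, "58Ni(n,p)": 4.85e-22, "27Al(n,a)": 6.30e-23}
calculated = {"197Au(n,g)": 1.87e-24, "58Ni(n,p)": 4.92e-22, "27Al(n,a)": 6.11e-23}

def c_over_e(calc, meas):
    """Ratios far from 1 flag problems in the underlying cross sections."""
    return {k: calc[k] / meas[k] for k in meas}

for rx, r in c_over_e(calculated, measured).items():
    print(f"{rx:12s} C/E = {r:.3f}")
```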

  2. Benchmark calculation of nuclear design code for HCLWR

    International Nuclear Information System (INIS)

    Suzuki, Katsuo; Saji, Etsuro; Gakuhari, Kazuhiko; Akie, Hiroshi; Takano, Hideki; Ishiguro, Yukio.

    1986-01-01

    In the calculation of the lattice cell for High Conversion Light Water Reactors, large differences in nuclear design parameters appear among the results obtained with various methods and nuclear data libraries. The validity of a calculation can be verified by critical experiments; since not many measured data are available, benchmark calculations are also an efficient way to estimate validity over a wide range of lattice parameters and burnup. The benchmark calculations were done by JAERI and MAPI, using SRAC and WIMS-E respectively. The problem covered a wide range of lattice parameters, i.e., from tight lattices to the current PWR lattice. The comparison was made on the effective multiplication factor, conversion ratio, and reaction rate of each nuclide, including burnup and void effects. The differences are largest for the tightest lattice, but even for that lattice the difference in the effective multiplication factor is only 1.4%. The main cause of the difference is the neutron absorption rate of U-238 in the resonance energy region. The differences in the other nuclear design parameters, and their causes, were also identified. (author)

  3. The CMSSW benchmarking suite: Using HEP code to measure CPU performance

    International Nuclear Information System (INIS)

    Benelli, G

    2010-01-01

    The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the CPU performance estimates given by the reference benchmark for HEP computing (SPECint) and the actual performance of HEP code. Making use of the CPU performance tools from the CMSSW performance suite, comparative CPU performance studies have been carried out on several architectures. A benchmarking suite has been developed and integrated in the CMSSW framework to allow computing centers and interested third parties to benchmark architectures directly with CMSSW. The CMSSW benchmarking suite can be used out of the box to test and compare several machines in terms of CPU performance and to report the different benchmarking scores (e.g. by processing step) and results at the desired level of detail. In this talk we briefly describe the CMSSW software performance suite and, in detail, the CMSSW benchmarking suite client/server design, the performance data analysis and the available CMSSW benchmark scores. The experience in the use of HEP code for benchmarking will be discussed and CMSSW benchmark results presented.

  4. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, together with its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  5. Performance analysis of fusion nuclear-data benchmark experiments for light to heavy materials in MeV energy region with a neutron spectrum shifter

    International Nuclear Information System (INIS)

    Murata, Isao; Ohta, Masayuki; Miyamaru, Hiroyuki; Kondo, Keitaro; Yoshida, Shigeo; Iida, Toshiyuki; Ochiai, Kentaro; Konno, Chikara

    2011-01-01

    Nuclear data are indispensable for the development of fusion reactor candidate materials. However, benchmarking of the nuclear data in the MeV energy region is not yet adequate. In the present study, benchmark performance in the MeV energy region was investigated theoretically for experiments using a 14 MeV neutron source. We carried out a systematic analysis for light to heavy materials. As a result, the benchmark performance for the neutron spectrum was confirmed to be acceptable, while for gamma-rays it was not sufficiently accurate; consequently, a spectrum shifter has to be applied. Beryllium had the best performance as a shifter. Moreover, a preliminary examination was made of whether it is really acceptable that only the spectrum before the last collision is considered in the benchmark performance analysis. It was pointed out that not only the last collision but also earlier collisions should be considered equally in the benchmark performance analysis.

  6. Generation of integral experiment covariance data and their impact on criticality safety validation

    Energy Technology Data Exchange (ETDEWEB)

    Stuke, Maik; Peters, Elisabeth; Sommer, Fabian

    2016-11-15

    The quantification of statistical dependencies in data of critical experiments and how to account for them properly in validation procedures has been discussed in the literature by various groups. However, these subjects are still an active topic in the Expert Group on Uncertainty Analysis for Criticality Safety Assessment (UACSA) of the OECD-NEA Nuclear Science Committee. The latter compiles and publishes the freely available experimental data collection, the International Handbook of Evaluated Criticality Safety Benchmark Experiments, ICSBEP. Most of the experiments were performed as series and share parts of experimental setups, consequently leading to correlation effects in the results. The correct consideration of correlated data seems to be inevitable if the experimental data in a validation procedure are limited or one cannot rely on a sufficient number of uncorrelated data sets, e.g. from different laboratories using different setups. The general determination of correlations and the underlying covariance data, as well as their consideration in a validation procedure, is the focus of the following work. We discuss and demonstrate possible effects on calculated k{sub eff}'s, their uncertainties, and the corresponding covariance matrices due to interpretation of evaluated experimental data and its translation into calculation models. The work shows the effects of various modeling approaches and varying distribution functions of parameters, and compares and discusses results from the applied Monte-Carlo sampling method with available data on correlations. Our findings indicate that for the reliable determination of integral experimental covariance matrices or the correlation coefficients, a detailed study of the underlying experimental data, the modeling approach and assumptions made, and the resulting sensitivity analysis seems to be inevitable. Further, a Bayesian method is discussed to include integral experimental covariance data when estimating an

  7. Generation of integral experiment covariance data and their impact on criticality safety validation

    International Nuclear Information System (INIS)

    Stuke, Maik; Peters, Elisabeth; Sommer, Fabian

    2016-11-01

    The quantification of statistical dependencies in data of critical experiments and how to account for them properly in validation procedures has been discussed in the literature by various groups. However, these subjects are still an active topic in the Expert Group on Uncertainty Analysis for Criticality Safety Assessment (UACSA) of the OECD-NEA Nuclear Science Committee. The latter compiles and publishes the freely available experimental data collection, the International Handbook of Evaluated Criticality Safety Benchmark Experiments, ICSBEP. Most of the experiments were performed as series and share parts of experimental setups, consequently leading to correlation effects in the results. The correct consideration of correlated data seems to be inevitable if the experimental data in a validation procedure are limited or one cannot rely on a sufficient number of uncorrelated data sets, e.g. from different laboratories using different setups. The general determination of correlations and the underlying covariance data, as well as their consideration in a validation procedure, is the focus of the following work. We discuss and demonstrate possible effects on calculated k{sub eff}'s, their uncertainties, and the corresponding covariance matrices due to interpretation of evaluated experimental data and its translation into calculation models. The work shows the effects of various modeling approaches and varying distribution functions of parameters, and compares and discusses results from the applied Monte-Carlo sampling method with available data on correlations. Our findings indicate that for the reliable determination of integral experimental covariance matrices or the correlation coefficients, a detailed study of the underlying experimental data, the modeling approach and assumptions made, and the resulting sensitivity analysis seems to be inevitable. Further, a Bayesian method is discussed to include integral experimental covariance data when estimating an application

  8. Benchmark matrix and guide: Part II.

    Science.gov (United States)

    1991-01-01

    In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.

  9. ENDF/B-VI iron validation on the PCA-REPLICA (H2O/Fe) shielding benchmark experiment

    Energy Technology Data Exchange (ETDEWEB)

    Pescarini, M. [ENEA, Bologna (Italy). Centro Ricerche Energia `E. Clementel` - Area Energia e Innovazione

    1994-05-01

    The PCA-REPLICA (H2O/Fe) neutron shielding benchmark experiment is analysed using the SN 2-D DOT 3.5 code and the 3-D-equivalent flux synthesis method. This engineering benchmark reproduces the ex-core radial geometry of a PWR, including a mild steel reactor pressure vessel (RPV) simulator, and is designed to test the accuracy of the calculation of the in-vessel neutron exposure parameters (fast fluence and iron displacement rates). This accuracy is strongly dependent on the quality of the iron neutron cross sections used to describe the nuclear reactions within the RPV simulator. In particular, in this report, the cross sections based on the ENDF/B VI iron data files are tested through a comparison of the calculated integral and spectral results with the corresponding experimental data. In addition, the present results are compared, on the same benchmark experiment, with those of a preceding ENEA (Italian Agency for Energy, New Technologies and Environment)-Bologna validation of the JEF-2.1 iron cross sections. The integral result comparison indicates that, for all the threshold detectors considered (Rh-103 (n,n') Rh-103m, In-115 (n,n') In-115m and S-32 (n,p) P-32), the ENDF/B VI iron data produce better results than the JEF-2.1 iron data. In particular, in the ENDF/B VI calculations, an improvement of the in-vessel C/E (Calculated/Experimental) activity ratios for the lower energy threshold detectors, Rh-103 and In-115, is observed. This improvement becomes more evident with increasing neutron penetration depth in the vessel. This is probably attributable to the fact that the inelastic scattering cross section values of the ENDF/B VI Fe-56 data file, approximately in the 0.86 - 1.5 MeV energy range, are lower than the corresponding values of the JEF-2.1 data file.

  10. The nuclear criticality information system's project to archive unpublished critical experiment data

    International Nuclear Information System (INIS)

    Koponen, B.L.; Doherty, A.L.; Clayton, E.D.

    1991-01-01

    Critical experiment facilities produced a large amount of important data during the past forty-five years. However, much useful data remains unpublished. The unpublished material exists in the form of experimenters' logbooks, notes, photographs, material descriptions, etc. These data could be important for computer code validation, understanding the physics of criticality, facility design, or for setting process limits. In the past, criticality specialists have been able to obtain unpublished details by direct contact with the experimenters. The closure of facilities and the loss of personnel are likely to lead to the loss of the facility records unless an effort is made to ensure that the records are preserved. It has been recognized for some time that the unpublished records of critical experiment facilities comprise a valuable resource; thus the Nuclear Criticality Information System (NCIS) is working to ensure that the records are preserved and made available via NCIS. As a first step in the archiving project, we identified criteria to help judge which series of experiments should be considered for archiving. Data that are used for validating calculations or that form the basis for subcritical limits in standards, handbooks, and guides are of particular importance. In this paper we discuss the criteria for archiving, the priority list of experiments for archiving, and progress in developing an NCIS image database using current CD-ROM technology. (Author)

  11. Benchmark Analysis of Subcritical Noise Measurements on a Nickel-Reflected Plutonium Metal Sphere

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Jesson Hutchinson

    2009-09-01

    Subcritical experiments using californium source-driven noise analysis (CSDNA) and Feynman variance-to-mean methods were performed with an alpha-phase plutonium sphere reflected by nickel shells, up to a maximum thickness of 7.62 cm. Both methods provide means of determining the subcritical multiplication of a system containing nuclear material. A benchmark analysis of the experiments was performed for inclusion in the 2010 edition of the International Handbook of Evaluated Criticality Safety Benchmark Experiments. Benchmark models have been developed that represent these subcritical experiments. An analysis of the computed eigenvalues and the uncertainty in the experiment and methods was performed. The eigenvalues computed using the CSDNA method were very close to those calculated using MCNP5; however, computed eigenvalues are used in the analysis of the CSDNA method. Independent calculations using KENO-VI provided similar eigenvalues to those determined using the CSDNA method and MCNP5. A slight trend with increasing nickel-reflector thickness was seen when comparing MCNP5 and KENO-VI results. For the 1.27-cm-thick configuration the MCNP eigenvalue was approximately 300 pcm greater. The calculated KENO eigenvalue was about 300 pcm greater for the 7.62-cm-thick configuration. The calculated results were approximately the same for a 5-cm-thick shell. The eigenvalues determined using the Feynman method are up to approximately 2.5% lower than those determined using either the CSDNA method or the Monte Carlo codes. The uncertainty in the results from either method was not large enough to account for the bias between the two experimental methods. An ongoing investigation is being performed to assess what potential uncertainties and/or biases exist that have yet to be properly accounted for. The dominant uncertainty in the CSDNA analysis was the uncertainty in selecting a neutron cross-section library for performing the analysis of the data. The uncertainty in the
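The eigenvalue differences quoted above are expressed in pcm (per cent mille, 1 pcm = 10⁻⁵ in k). A tiny helper makes the convention explicit; the eigenvalue pair below is invented, not the actual MCNP5/KENO-VI results:

```python
def diff_pcm(k_a, k_b):
    """Difference between two multiplication factors expressed in pcm (1 pcm = 1e-5)."""
    return (k_a - k_b) * 1e5

# Invented eigenvalue pair for illustration only.
print(f"{diff_pcm(0.99850, 0.99550):.0f} pcm")
```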

  12. Analytical Radiation Transport Benchmarks for The Next Century

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    2005-01-01

    Verification of large-scale computational algorithms used in nuclear engineering and radiological applications is an essential element of reliable code performance. For this reason, the development of a suite of multidimensional semi-analytical benchmarks has been undertaken to provide independent verification of proper operation of codes dealing with the transport of neutral particles. The benchmarks considered cover several one-dimensional, multidimensional, monoenergetic and multigroup, fixed source and critical transport scenarios. The first approach is based on the Green's function: in slab geometry, the Green's function is incorporated into a set of integral equations for the boundary fluxes. Through a numerical Fourier transform inversion and subsequent matrix inversion for the boundary fluxes, a semi-analytical benchmark emerges. Multidimensional solutions in a variety of infinite media are also based on the slab Green's function. In a second approach, a new converged SN method is developed, in which the SN solution is ''mined'' to bring out hidden high-quality solutions. For this case, multigroup fixed source and criticality transport problems are considered. Remarkably accurate solutions can be obtained with this new method, called the Multigroup Converged SN (MGCSN) method, as will be demonstrated

  13. The stainless steel bulk shielding benchmark experiment at the Frascati Neutron Generator (FNG)

    International Nuclear Information System (INIS)

    Batistoni, P.; Angelone, M.; Martone, M.; Petrizzi, L.; Pillon, M.; Rado, V.; Santamarina, A.; Abidi, I.; Gastaldi, G.; Joyer, P.; Marquette, J.P.; Martini, M.

    1994-01-01

    In the framework of the European Technology Program for NET/ITER, ENEA (Ente Nazionale per le Nuove Tecnologie, l'Energia e l'Ambiente), Frascati and CEA (Commissariat a l'Energie Atomique), Cadarache, are collaborating on a bulk shielding benchmark experiment using the 14 MeV Frascati Neutron Generator (FNG). The aim of the experiment is to obtain accurate experimental data for improving the nuclear database and methods used in the shielding designs, through a rigorous analysis of the results. The experiment consists of the irradiation of a stainless steel block by 14 MeV neutrons. The neutron flux and spectra at different depths, up to 65 cm inside the block, are measured by fission chambers and activation foils characterized by different energy response ranges. The γ-ray dose measurements are performed with ionization chambers and thermo-luminescent dosimeters (TLD). The first results are presented, as well as the comparison with calculations using the cross section library EFF (European Fusion File). ((orig.))

  14. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    Makai, M.

    1998-01-01

    The test problems utilized in the validation and verification process of computer programs in Atomic Energy Research are collected into one set. This is the first step towards issuing a volume in which tests for VVER are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments because those have been published, along with a number of comparisons, in the final reports of TIC. The present collection focuses on operational and mathematical benchmarks, which cover almost the entire range of reactor calculations. (Author)

  15. Critical experiments on single-unit spherical plutonium geometries reflected and moderated by oil

    International Nuclear Information System (INIS)

    Rothe, R.E.

    1997-05-01

    Experimental critical configurations are reported for several dozen spherical and hemispherical single-unit assemblies of plutonium metal. Most were solid, but many were hollow-centered, thick, shell-like geometries. All were constructed of nested plutonium (mostly 239 Pu) metal hemispherical shells. Three kinds of critical configurations are reported. Two required interpolation and/or extrapolation of data to obtain the critical mass because reflector conditions were essentially infinite. The first finds the plutonium essentially fully reflected by a hydrogen-rich oil; the second is essentially unreflected. The third kind reports the critical oil reflector height above a large plutonium metal assembly of accurately known mass (no interpolation required) when that mass was too great to permit full oil reflection. Some configurations had thicknesses of mild steel just outside the plutonium metal, separating it from the oil. These experiments were performed at the Rocky Flats Critical Mass Laboratory in the late 1960s. They had not been published in a form suitable for benchmark-quality comparisons against state-of-the-art computational techniques until this paper. The age of the data and other factors lead to some difficulty in reconstructing aspects of the program and may, in turn, decrease confidence in certain details. Whenever this is true, the point is acknowledged. The plutonium metal was alpha-phase 239 Pu containing 5.9 wt-% 240 Pu. All assemblies were formed by nesting 1.667-mm-thick (nominal) bare plutonium metal hemispherical shells, also called hemishells, until the desired configuration was achieved. Very small tolerance gaps machined into radial dimensions reduced the effective density by a small amount in all cases. Steel components were also nested hemispherical shells, but these were nominally 3.333-mm thick. Oil was used as the reflector because of its chemical compatibility with plutonium metal

  16. HELIOS2: Benchmarking against experiments for hexagonal and square lattices

    International Nuclear Information System (INIS)

    Simeonov, T.

    2009-01-01

    HELIOS2 is a 2D transport theory program for fuel burnup and gamma-flux calculation. It solves the neutron and gamma transport equations in a general, two-dimensional geometry bounded by a polygon of straight lines. The transport solver may be chosen between the Method of Collision Probabilities (CP) and the Method of Characteristics (MoC). The former is well known for its successful application in the preparation of cross-section data banks for 3D simulators for all lattice types of WWER, PWR, BWR, AGR, RBMK and CANDU reactors. The latter, MoC, helps in the areas where the computational requirements of CP become too large for practical application. The application of HELIOS2 and the Method of Characteristics to some computationally large benchmarks is presented in this paper. The analysis combines comparisons to measured data from the Hungarian ZR-6 reactor and the JAERI Tank-type Critical Assembly (TCA) to verify and validate HELIOS2 and MoC for WWER assembly imitators; configurations with different absorber types - ZrB 2 , B 4 C, Eu 2 O 3 and Gd 2 O 3 ; and critical configurations with stainless steel in the reflector. Core eigenvalues and reaction rates are compared. Accounting for the uncertainties, the results are generally excellent. A special place in this paper is given to the effect of an iron radial reflector. Comparisons to measurements from TIC and TCA for stainless steel and iron-reflected cores are presented. The reactivity effect calculated by HELIOS2 is in very good agreement with the measurements. (author)

  17. Criticality experiment for No.2 core of DF-VI fast neutron criticality facility

    International Nuclear Information System (INIS)

    Yang Lijun; Liu Zhenhua; Yan Fengwen; Luo Zhiwen; Chu Chun; Liang Shuhong

    2007-01-01

    At the completion of the DF-VI fast neutron criticality facility, its core was changed; the facility was restarted and a series of experiments and measurements were made. From the data of 29 criticality experiments, the critical element number and mass were calculated; the control rod reactivity worths were measured by the period method and the rod compensation method; the reactivity worths of the safety rod and safety block were measured using a reactivity instrument; and the reactivity worths of the outer elements and the radial worth distribution of the elements were measured as well. Based on all the measurements mentioned above, safe operation parameters for core 2 of the DF-VI fast neutron criticality facility were confirmed. (authors)

  18. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
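The metric-plus-benchmark workflow outlined in this guide can be illustrated with a minimal sketch. The whole-building metric used here (annual energy use per unit floor area) and every number below are invented for illustration; they are not Labs21 benchmark data.

```python
def energy_use_intensity(annual_kwh, floor_area_m2):
    """Whole-building metric: annual energy use per unit floor area (kWh/m2/yr)."""
    return annual_kwh / floor_area_m2

def percentile_rank(value, peers):
    """Fraction of peer facilities whose metric is worse (higher) than `value`."""
    return sum(p > value for p in peers) / len(peers)

# Invented facility and peer data for illustration.
my_eui = energy_use_intensity(annual_kwh=2_400_000, floor_area_m2=4_000)
peer_euis = [450, 520, 610, 700, 820, 980]  # hypothetical peer benchmarks
print(f"EUI = {my_eui:.0f} kWh/m2/yr, "
      f"lower than {percentile_rank(my_eui, peer_euis):.0%} of peers")
```

A facility near the bottom of the peer distribution for a given metric is a candidate for the potential actions the guide associates with that metric; system-level metrics then narrow down where the opportunity lies.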

  19. Jendl-3.1 iron validation on the PCA-REPLICA (H{sub 2}O/Fe) shielding benchmark experiment

    Energy Technology Data Exchange (ETDEWEB)

    Pescarini, M.; Borgia, M. G. [ENEA, Centro Ricerche ``Ezio Clementel``, Bologna (Italy). Dipt. Energia

    1997-03-01

    The PCA-REPLICA (H{sub 2}O/Fe) neutron shielding benchmark experiment is analysed using the SN 2-D DOT 3.5-E code and the 3-D-equivalent flux synthesis method. This engineering benchmark reproduces the ex-core radial geometry of a PWR, including a mild steel reactor pressure vessel (RPV) simulator, and is designed to test the accuracy of the calculation of the in-vessel neutron exposure parameters. This accuracy is strongly dependent on the quality of the iron neutron cross sections used to describe the nuclear reactions within the RPV simulator. In particular, in this report, the cross sections based on the JENDL-3.1 iron data files are tested through a comparison of the calculated integral and spectral results with the corresponding experimental data. In addition, the present results are compared, on the same benchmark experiment, with those of a preceding ENEA-Bologna validation of the ENDF/B VI iron cross sections. The integral result comparison indicates that, for all the threshold detectors considered (Rh-103 (n,n') Rh-103m, In-115 (n,n') In-115m and S-32 (n,p) P-32), the JENDL-3.1 natural iron data produce satisfactory results similar to those obtained with the ENDF/B VI iron data. On the contrary, when the JENDL-3.1 Fe-56 data file is used, strongly underestimated results are obtained for the lower energy threshold detectors, Rh-103 and In-115. This fact, in particular, becomes more evident with increasing neutron penetration depth in the RPV simulator.

  20. What's so critical about Critical Neuroscience? Rethinking experiment, enacting critique.

    Science.gov (United States)

    Fitzgerald, Des; Matusall, Svenja; Skewes, Joshua; Roepstorff, Andreas

    2014-01-01

    In the midst of on-going hype about the power and potency of the new brain sciences, scholars within "Critical Neuroscience" have called for a more nuanced and sceptical neuroscientific knowledge-practice. Drawing especially on the Frankfurt School, they urge neuroscientists towards a more critical approach-one that re-inscribes the objects and practices of neuroscientific knowledge within webs of social, cultural, historical and political-economic contingency. This paper is an attempt to open up the black-box of "critique" within Critical Neuroscience itself. Specifically, we argue that limiting enactments of critique to the invocation of context misses the force of what a highly-stylized and tightly-bound neuroscientific experiment can actually do. We show that, within the neuroscientific experiment itself, the world-excluding and context-denying "rules of the game" may also enact critique, in novel and surprising forms, while remaining formally independent of the workings of society, and culture, and history. To demonstrate this possibility, we analyze the Optimally Interacting Minds (OIM) paradigm, a neuroscientific experiment that used classical psychophysical methods to show that, in some situations, people worked better as a collective, and not as individuals-a claim that works precisely against reactionary tendencies that prioritize individual over collective agency, but that was generated and legitimized entirely within the formal, context-denying conventions of neuroscientific experimentation. At the heart of this paper is a claim that it was precisely the rigors and rules of the experimental game that allowed these scientists to enact some surprisingly critical, and even radical, gestures. We conclude by suggesting that, in the midst of large-scale neuroscientific initiatives, it may be "experiment", and not "context", that forms the meeting-ground between neuro-biological and socio-political research practices.

  1. Program of nuclear criticality safety experiment at JAERI

    International Nuclear Information System (INIS)

    Kobayashi, Iwao; Tachimori, Shoichi; Takeshita, Isao; Suzaki, Takenori; Ohnishi, Nobuaki

    1983-11-01

    JAERI is promoting the nuclear criticality safety research program, in which a new facility for criticality safety experiments (Criticality Safety Experimental Facility : CSEF) is to be built for the experiments with solution fuel. One of the experimental researches is to measure, collect and evaluate the experimental data needed for evaluation of criticality safety of the nuclear fuel cycle facilities. Another research area is a study of the phenomena themselves which are incidental to postulated critical accidents. Investigation of the scale and characteristics of the influences caused by the accident is also included in this research. The result of the conceptual design of CSEF is summarized in this report. (author)

  2. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-03-01

    The purpose of this research is to provide a fundamental computational investigation into the possible integration of experimental activities with the Advanced Test Reactor Critical (ATR-C) facility with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of integral data for improving neutron cross sections. Further assessment of oscillation

  3. Critical mass experiment using U-235 foils and lucite plates

    International Nuclear Information System (INIS)

    Sanchez, R.; Butterfield, K.; Kimpland, R.; Jaegers, P.

    1998-01-01

    The main objective of this experiment was to show how the neutron multiplication of the system increases as moderating material is placed between highly enriched uranium foils. In addition, this experiment served to demonstrate the hand-stacking technique and the approach to criticality by remote operation. This experiment was designed by Tom McLaughlin in the mid-seventies as part of the criticality safety course that is taught at the Los Alamos Critical Experiments Facility (LACEF). The H/U-235 ratio for this experiment was 215, which is where the minimum critical mass for this configuration occurs

  4. Critical mass experiment using 235U foils and lucite plates

    International Nuclear Information System (INIS)

    Sanchez, R.; Butterfield, K.; Kimpland, R.; Jaegers, P.

    1998-01-01

    This experiment demonstrated how the neutron multiplication of a system increases as moderating material is placed between highly enriched uranium foils. In addition, this experiment served to demonstrate the hand-stacking technique and approach to criticality by remote operation. This experiment was designed by McLaughlin in the mid-seventies as part of the criticality safety course that is taught at the Los Alamos Critical Experiments Facility. The H/235U ratio for this experiment was 215, which is the ratio at which the minimum critical mass for this configuration occurs
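
The divergence of neutron multiplication as moderator is added is the physics behind the hand-stacking approach to critical described above. A minimal sketch of the subcritical multiplication relation, with illustrative k values:

```python
# Sketch: subcritical multiplication M = 1 / (1 - k). As moderating
# plates raise k toward 1, M diverges; in practice the inverse, 1/M,
# is plotted and extrapolated to zero during an approach to critical.
# The k values below are illustrative only.

def multiplication(k: float) -> float:
    """Subcritical neutron multiplication for k < 1."""
    if k >= 1.0:
        raise ValueError("M is defined here only for subcritical k")
    return 1.0 / (1.0 - k)

for k in (0.5, 0.9, 0.99, 0.999):
    print(f"k = {k:5.3f}   M = {multiplication(k):7.1f}   1/M = {1.0 / multiplication(k):.3f}")
```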

  5. Criticality and Its Uncertainty Analysis of Spent Fuel Storage Rack for Research Reactor

    International Nuclear Information System (INIS)

    Han, Tae Young; Park, Chang Je; Lee, Byung Chul

    2011-01-01

    For evaluating the criticality safety of the spent fuel storage rack in an open-pool-type research reactor, a permissible upper limit of criticality should be determined. It can be estimated from the criticality upper limit presented by the regulatory guide and the uncertainty of the criticality calculation. In this paper, criticality calculations for the spent fuel storage rack are carried out under various conditions. The calculation uncertainty of the MCNP system is evaluated from the calculation results for the benchmark experiments. Then, the upper limit of criticality is determined from the uncertainties, and the calculated criticality of the spent fuel storage rack is evaluated
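
The procedure described, combining a regulatory criticality limit with validation bias and calculation uncertainty into a permissible upper limit, can be sketched as follows. The combination rule shown is one common convention, not necessarily the paper's exact method, and every number is an illustrative placeholder rather than a licensing value.

```python
# Sketch: deriving a permissible upper subcritical limit (USL) from a
# regulatory limit, a validation bias, and calculation uncertainties.
# The combination rule and all numbers are illustrative only.
import math

def upper_subcritical_limit(k_regulatory: float,
                            code_bias: float,
                            bias_uncertainty: float,
                            mc_sigma: float,
                            n_sigma: float = 2.0) -> float:
    """USL = regulatory limit - code bias - n_sigma * combined uncertainty."""
    combined = math.sqrt(bias_uncertainty**2 + mc_sigma**2)
    return k_regulatory - code_bias - n_sigma * combined

usl = upper_subcritical_limit(k_regulatory=0.95,   # from regulatory guide
                              code_bias=0.005,     # from benchmark suite
                              bias_uncertainty=0.003,
                              mc_sigma=0.001)      # Monte Carlo statistics
print(f"USL = {usl:.4f}")  # the rack's calculated k-eff must stay below this
```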

  6. Simplified two and three dimensional HTTR benchmark problems

    International Nuclear Information System (INIS)

    Zhang Zhan; Rahnema, Farzad; Zhang Dingkang; Pounders, Justin M.; Ougouag, Abderrafi M.

    2011-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole-core configurations. In this paper we have created two- and three-dimensional numerical benchmark problems typical of high-temperature gas-cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are also included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding the geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross-section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and the pin fission density distribution for selected blocks. Also included are the solutions for the single-cell and single-block problems.

  7. Benchmarking is associated with improved quality of care in type 2 diabetes: the OPTIMISE randomized, controlled trial.

    Science.gov (United States)

    Hermans, Michel P; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-11-01

    To assess prospectively the effect of benchmarking on the quality of primary care for patients with type 2 diabetes, using three major modifiable cardiovascular risk factors as critical quality indicators. Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets for the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. Of 4,027 patients enrolled, 3,996 were evaluable and 3,487 completed 12 months of follow-up. The HbA1c target was achieved by 58.9% in the benchmarking group vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% of patients met the SBP target (P < [...]) in favor of the benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P < [...]). Benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing the patients' residual cardiovascular risk profile.

  8. Comparative analysis of nine structural codes used in the second WIPP benchmark problem

    International Nuclear Information System (INIS)

    Morgan, H.S.; Krieg, R.D.; Matalucci, R.V.

    1981-11-01

    In the Waste Isolation Pilot Plant (WIPP) Benchmark II study, various computer codes were compared on the basis of their capabilities for calculating the response of hypothetical drift configurations for nuclear waste experiments and storage demonstration. The codes used by participants in the study were ANSALT, DAPROK, JAC, REM, SANCHO, SPECTROM, STEALTH, and two different implementations of MARC. Errors were found in the preliminary results, and several calculations were revised. Revised solutions were in reasonable agreement except for the REM solution. The Benchmark II study allowed significant advances in understanding the relative behavior of computer codes available for WIPP calculations. The study also pointed out the possible need for performing critical design calculations with more than one code. Lastly, it indicated the magnitude of the code-to-code spread in results, which is to be expected even when a model has been explicitly defined

  9. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work
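
The predictor-corrector feature mentioned above can be illustrated on a toy one-nuclide depletion problem. The flux-feedback model and all numbers below are invented purely for illustration; a real solver such as CINDER2008 advances the full Bateman system.

```python
# Sketch: the predictor-corrector scheme used by depletion solvers,
# reduced to a toy one-nuclide problem with an invented flux-feedback
# model. Illustrative only; not TINDER's actual implementation.
import math

def flux(n: float) -> float:
    """Toy power-normalized flux: rises as the absorbing nuclide depletes."""
    return 1.0e14 / (1.0 + n / 1.0e24)

def depletion_step(n0: float, sigma: float, dt: float) -> float:
    """One predictor-corrector step for dN/dt = -sigma * phi(N) * N."""
    rate0 = sigma * flux(n0)                # beginning-of-step reaction rate
    n_pred = n0 * math.exp(-rate0 * dt)     # predictor solve
    rate1 = sigma * flux(n_pred)            # rate re-evaluated at end of step
    n_corr = n0 * math.exp(-rate1 * dt)     # corrector solve from the start
    return 0.5 * (n_pred + n_corr)          # average the two solutions

n = 1.0e24  # initial atom count, illustrative
for _ in range(3):  # three 30-day burnup steps
    n = depletion_step(n, sigma=1.0e-22, dt=30 * 86400.0)
print(f"atoms remaining after 90 days: {n:.3e}")
```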

  10. IEA Wind Task 23 Offshore Wind Technology and Deployment. Subtask 1 Experience with Critical Deployment Issues. Final Technical Report

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard

    The final report for IEA Wind Task 23, Offshore Wind Energy Technology and Deployment, is made up of two separate reports: Subtask 1: Experience with Critical Deployment Issues and Subtask 2: Offshore Code Comparison Collaborative (OC3). The Subtask 1 report included here provides background information and objectives of Task 23. It specifically discusses ecological issues and regulation, electrical system integration and offshore wind, external conditions, and key conclusions for Subtask 1. The Subtask 2 report covers OC3 background information and objectives of the task, OC3 benchmark exercises of aero-elastic offshore wind turbine codes, monopile foundation modeling, tripod support structure modeling, and Phase IV results regarding floating wind turbine modeling.

  11. Quality in E-Learning--A Conceptual Framework Based on Experiences from Three International Benchmarking Projects

    Science.gov (United States)

    Ossiannilsson, E.; Landgren, L.

    2012-01-01

    Between 2008 and 2010, Lund University took part in three international benchmarking projects, "E-xcellence+," the "eLearning Benchmarking Exercise 2009," and the "First Dual-Mode Distance Learning Benchmarking Club." A comparison of these models revealed a rather high level of correspondence. From this finding and…

  12. Copper benchmark experiment for the testing of JEFF-3.2 nuclear data for fusion applications

    OpenAIRE

    Angelone, M.; Flammini, D.; Loreti, S.; Moro, F.; Pillon, M.; Villar, R.; Klix, A.; Fischer, U.; Kodeli, I.; Perel, R.L.; Pohorecky, W.

    2017-01-01

    A neutronics benchmark experiment on a pure Copper block (dimensions 60 × 70 × 70 cm3) aimed at testing and validating the recent nuclear data libraries for fusion applications was performed in the frame of the European Fusion Program at the 14 MeV ENEA Frascati Neutron Generator (FNG). Reaction rates, neutron flux spectra and doses were measured using different experimental techniques (e.g. activation foils techniques, NE213 scintillator and thermoluminescent detectors). This paper first sum...

  13. Boiling water reactor turbine trip (TT) benchmark

    International Nuclear Information System (INIS)

    2001-06-01

    In the field of coupled neutronics/thermal-hydraulics computation there is a need to enhance scientific knowledge in order to develop advanced modelling techniques for new nuclear technologies and concepts, as well as for current nuclear applications. Recently developed 'best-estimate' computer code systems for modelling 3-D coupled neutronics/thermal-hydraulics transients in nuclear cores and for the coupling of core phenomena and system dynamics (PWR, BWR, VVER) need to be compared against each other and validated against results from experiments. International benchmark studies have been set up for this purpose. The present volume describes the specification of such a benchmark. The transient addressed is a turbine trip (TT) in a BWR involving pressurization events in which the coupling between core phenomena and system dynamics plays an important role. In addition, the data made available from experiments carried out at the plant make the present benchmark very valuable. The data used are from events at the Peach Bottom 2 reactor (a GE-designed BWR/4). (authors)

  14. Monte Carlo benchmarking: Validation and progress

    International Nuclear Information System (INIS)

    Sala, P.

    2010-01-01

    Document available in abstract form only. Full text of publication follows: Calculational tools for radiation shielding at accelerators are faced with new challenges from the present and next generations of particle accelerators. All the details of particle production and transport play a role when dealing with huge power facilities, therapeutic ion beams, radioactive beams and so on. Besides the traditional calculations required for shielding, activation predictions have become an increasingly critical component. Comparison and benchmarking with experimental data is obviously mandatory in order to build up confidence in the computing tools, and to assess their reliability and limitations. Thin target particle production data are often the best tools for understanding the predictive power of individual interaction models and improving their performances. Complex benchmarks (e.g. thick target data, deep penetration, etc.) are invaluable in assessing the overall performances of calculational tools when all ingredients are put at work together. A review of the validation procedures of Monte Carlo tools will be presented with practical and real life examples. The interconnections among benchmarks, model development and impact on shielding calculations will be highlighted. (authors)
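
Benchmarking of the kind described ultimately rests on calculated-to-experimental (C/E) ratios with propagated uncertainties. A minimal sketch follows; the reaction names, rates, and uncertainties are invented placeholders, not data from any benchmark in this record.

```python
# Sketch: C/E comparison with propagated uncertainties, the core
# arithmetic of a shielding/activation benchmark. All values invented.
import math

def c_over_e(calc: float, expt: float,
             calc_rel_unc: float, expt_rel_unc: float):
    """Return the C/E ratio and its relative uncertainty
    (uncorrelated calculation and experiment errors assumed)."""
    ratio = calc / expt
    rel_unc = math.sqrt(calc_rel_unc**2 + expt_rel_unc**2)
    return ratio, rel_unc

# (reaction, calculated rate, measured rate, calc rel. unc., expt rel. unc.)
points = [("Ni-58(n,p)", 4.1e-5, 4.0e-5, 0.01, 0.03),
          ("Au-197(n,g)", 2.2e-4, 2.5e-4, 0.02, 0.04)]

for name, calc, expt, u_c, u_e in points:
    ratio, rel = c_over_e(calc, expt, u_c, u_e)
    verdict = "consistent" if abs(ratio - 1.0) < 2.0 * rel * ratio else "discrepant"
    print(f"{name:12s} C/E = {ratio:.3f} +/- {rel * ratio:.3f}  ({verdict})")
```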

  15. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
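
The performance metrics such SPARQL queries compute reduce, for exact-match evaluation, to set arithmetic over gold-standard and system-predicted mutation mentions. A minimal sketch with invented annotations (the mutation identifiers are hypothetical examples):

```python
# Sketch: exact-match precision/recall/F1 over gold and predicted
# mutation mentions. The mutation identifiers are hypothetical.

def precision_recall_f1(gold: set, predicted: set):
    tp = len(gold & predicted)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2.0 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {"c.35G>T", "p.V600E", "p.R175H"}
predicted = {"c.35G>T", "p.V600E", "p.G12D"}
p, r, f = precision_recall_f1(gold, predicted)
print(f"P = {p:.2f}, R = {r:.2f}, F1 = {f:.2f}")  # P = 0.67, R = 0.67, F1 = 0.67
```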

  16. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  17. Vver-1000 Mox core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  18. Neutron Activation Foil and Thermoluminescent Dosimeter Responses to a Lead Reflected Pulse of the CEA Valduc SILENE Critical Assembly

    International Nuclear Information System (INIS)

    Miller, Thomas Martin; Celik, Cihangir; Isbell, Kimberly McMahan; Lee, Yi-kang; Gagnier, Emmanuel; Authier, Nicolas; Piot, Jerome; Jacquet, Xavier; Rousseau, Guillaume; Reynolds, Kevin H.

    2016-01-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 13, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Muller tube, and the Rocky Flats detects neutrons via charged particles produced in a thin 6LiF disc, depositing energy in a Si solid-state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  19. Neutron Activation Foil and Thermoluminescent Dosimeter Responses to a Polyethylene Reflected Pulse of the CEA Valduc SILENE Critical Assembly

    International Nuclear Information System (INIS)

    Miller, Thomas Martin; Celik, Cihangir; McMahan, Kimberly L.; Lee, Yi-kang; Gagnier, Emmanuel; Authier, Nicolas

    2016-01-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 19, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Muller tube, and the Rocky Flats detects neutrons via charged particles produced in a thin 6LiF disc, depositing energy in a Si solid-state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  20. Neutron Activation Foil and Thermoluminescent Dosimeter Responses to a Polyethylene Reflected Pulse of the CEA Valduc SILENE Critical Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Celik, Cihangir [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); McMahan, Kimberly L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lee, Yi-kang [French Atomic Energy Commission (CEA), Saclay (France); Gagnier, Emmanuel [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette; Authier, Nicolas [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies; Piot, Jerome [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies; Jacquet, Xavier [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies; Rousseau, Guillaume [French Atomic Energy Commission (CEA), Salives (France). Valduc Centre for Nuclear Studies; Reynolds, Kevin H. [Y-12 National Security Complex, Oak Ridge, TN (United States)

    2016-09-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 19, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Muller tube and the Rocky Flats detects neutrons via charged particles produced in a thin 6LiF disc depositing energy in a Si solid state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  1. Neutron Activation Foil and Thermoluminescent Dosimeter Responses to a Lead Reflected Pulse of the CEA Valduc SILENE Critical Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Celik, Cihangir [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Isbell, Kimberly McMahan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lee, Yi-kang [Commissariat a l' Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France); Gagnier, Emmanuel [Commissariat a l' Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France); Authier, Nicolas [Commissariat a l' Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France); Piot, Jerome [Commissariat a l' Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France); Jacquet, Xavier [Commissariat a l' Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France); Rousseau, Guillaume [Commissariat a l' Energie Atomique et aux Energies Alternatives (CEA-Saclay), Gif-sur-Yvette (France); Reynolds, Kevin H. [Y-12 National Security Complex, Oak Ridge, TN (United States)

    2016-09-01

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 13, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Muller tube, and the Rocky Flats detects neutrons via charged particles produced in a thin 6LiF disc, depositing energy in a Si solid-state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  2. Benchmarking local healthcare-associated infections: Available benchmarks and interpretation challenges

    Directory of Open Access Journals (Sweden)

    Aiman El-Saed

    2013-10-01

    Summary: Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restriction. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when selecting such a benchmark. The GCC center for infection control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable healthcare workers and researchers to obtain more accurate and realistic comparisons. Keywords: Benchmarking, Comparison, Surveillance, Healthcare-associated infections
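
Indirect standardization, one of the risk-adjustment methods listed above, is commonly summarized as a standardized infection ratio (SIR): observed infections divided by the number expected if local exposure experienced the benchmark's rates. A minimal sketch with invented stratum data:

```python
# Sketch: indirect standardization via a standardized infection ratio.
# Reference rates and local counts are invented placeholders.

def standardized_infection_ratio(observed: int, strata) -> float:
    """SIR = O / E, with E summed over risk strata as
    device-days * reference rate per 1000 device-days."""
    expected = sum(days * rate / 1000.0 for days, rate in strata)
    return observed / expected

# (device-days, reference rate per 1000 device-days) for three ICU strata
strata = [(2000, 2.5), (1500, 4.0), (500, 6.0)]
sir = standardized_infection_ratio(observed=12, strata=strata)
print(f"SIR = {sir:.2f}")  # SIR < 1: fewer infections than the benchmark predicts
```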

  3. Nutrient cycle benchmarks for earth system land model

    Science.gov (United States)

    Zhu, Q.; Riley, W. J.; Tang, J.; Zhao, L.

    2017-12-01

    Projecting future biosphere-climate feedbacks using Earth system models (ESMs) relies heavily on robust modeling of land surface carbon dynamics. More importantly, soil nutrient (particularly nitrogen (N) and phosphorus (P)) dynamics strongly modulate carbon dynamics, such as plant sequestration of atmospheric CO2. Prevailing ESM land models all consider nitrogen as a potentially limiting nutrient, and several consider phosphorus. However, including nutrient cycle processes in ESM land models potentially introduces large uncertainties that could be identified and addressed by improved observational constraints. We describe the development of two nutrient cycle benchmarks for ESM land models: (1) nutrient partitioning between plants and soil microbes inferred from 15N and 33P tracer studies and (2) nutrient limitation effects on the carbon cycle informed by long-term fertilization experiments. We used these benchmarks to evaluate critical hypotheses regarding nutrient cycling and their representation in ESMs. We found that a mechanistic representation of plant-microbe nutrient competition based on relevant functional traits best reproduced observed plant-microbe nutrient partitioning. We also found that for multiple-nutrient models (i.e., N and P), application of Liebig's law of the minimum is often inaccurate. Rather, the Multiple Nutrient Limitation (MNL) concept better reproduces observed carbon-nutrient interactions.
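
The contrast between Liebig's law of the minimum and the MNL concept can be made concrete as two down-regulation rules applied to a potential carbon uptake. The scaling factors below are illustrative, not parameters of any ESM land model:

```python
# Sketch: two hypotheses for down-regulating potential carbon uptake
# under nutrient stress. f_n and f_p are nutrient availability scalars
# in [0, 1]; all values are illustrative, not ESM parameters.

def liebig(potential_uptake: float, f_n: float, f_p: float) -> float:
    """Liebig's law of the minimum: only the most limiting nutrient acts."""
    return potential_uptake * min(f_n, f_p)

def mnl(potential_uptake: float, f_n: float, f_p: float) -> float:
    """Multiple Nutrient Limitation: N and P limitations act jointly."""
    return potential_uptake * f_n * f_p

# Under moderate simultaneous N and P stress the two rules diverge
print(f"Liebig: {liebig(100.0, 0.8, 0.7):.1f}")  # Liebig: 70.0
print(f"MNL:    {mnl(100.0, 0.8, 0.7):.1f}")     # MNL:    56.0
```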

  4. Plans and equipment for criticality measurements on plutonium-uranium nitrate solutions

    International Nuclear Information System (INIS)

    Lloyd, R.C.; Clayton, E.D.; Durst, B.M.

    1982-01-01

    Data from critical experiments on the criticality of plutonium-uranium nitrate solutions are required to accurately establish criticality control limits for use in the processing and handling of breeder-type fuels. Since the fuel must be processed both safely and economically, it is necessary that criticality considerations be based on accurate experimental data. Previous experiments have been reported on plutonium-uranium solutions with Pu weight ratios extending up to some 38 wt %. No data have been presented, however, for plutonium-uranium nitrate solutions beyond this Pu weight ratio. The current research emphasis is on the procurement of criticality data for plutonium-uranium mixtures up to 60 wt % Pu that will serve as the basis for handling criticality problems subsequently encountered in the development of technology for the breeder community. Such data will also provide necessary benchmarks for data testing and analysis of integral criticality experiments for verification of the analytical techniques used in support of criticality control. Experiments are currently being performed with plutonium-uranium nitrate solutions in stainless steel cylindrical vessels and an expandable slab tank system. A schematic of the experimental systems is presented

  5. The Benchmark experiment on stainless steel bulk shielding at the Frascati neutron generator

    International Nuclear Information System (INIS)

    Batistoni, P.; Angelone, M.; Martone, M.; Pillon, M.; Rado, V.

    1994-11-01

    In the framework of the European Technology Program for NET/ITER, ENEA (Italian Agency for New Technologies, Energy and Environment) - Frascati and CEA (Commissariat a l'Energie Atomique) - Cadarache collaborated on a Bulk Shield Benchmark Experiment using the 14-MeV Frascati Neutron Generator (FNG). The aim of the experiment was to obtain accurate experimental data for improving the nuclear database and methods used in shielding designs, through a rigorous analysis of the results. The experiment consisted of the irradiation of a stainless steel block by 14-MeV neutrons. The neutron reaction rates at different depths inside the block were measured by fission chambers and activation foils characterized by different energy response ranges. The experimental results have been compared with numerical results calculated using both SN and Monte Carlo transport codes, with the European Fusion File (EFF) as the transport cross-section library. In particular, the present report describes the experimental and numerical activity, including neutron measurements and Monte Carlo calculations, carried out by the ENEA team.

  6. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-07-01

    The purpose of this document is to identify some suggested types of experiments that can be performed in the Advanced Test Reactor Critical (ATR-C) facility. A fundamental computational investigation is provided to demonstrate possible integration of experimental activities in the ATR-C with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of

  7. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    Full Text Available The article analyses the evolution of development and possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking on the basis of a generalisation of the approaches of different scientists to the definition of this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine the success of an operator in the modern market economy, as well as the mechanism of benchmarking and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies of change in the composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  8. Decoys Selection in Benchmarking Datasets: Overview and Perspectives

    Science.gov (United States)

    Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Lagarde, Nathalie; Montes, Matthieu

    2018-01-01

    Virtual Screening (VS) is designed to prospectively help identify potential hits, i.e., compounds capable of interacting with a given target and potentially modulating its activity, out of large compound collections. Among the variety of methodologies, it is crucial to select the protocol that is the most adapted to the query/target system under study and that yields the most reliable output. To this aim, the performance of VS methods is commonly evaluated and compared by computing their ability to retrieve active compounds in benchmarking datasets. The benchmarking datasets contain a subset of known active compounds together with a subset of decoys, i.e., assumed non-active molecules. The composition of both the active and the decoy compound subsets is critical to limit the biases in the evaluation of VS methods. In this review, we focus on the selection of decoy compounds, which has changed considerably over the years, from randomly selected compounds to highly customized or experimentally validated negative compounds. We first outline the evolution of decoy selection in benchmarking databases, as well as current benchmarking databases that tend to minimize the introduction of biases, and secondly, we propose recommendations for the selection and the design of benchmarking datasets. PMID:29416509
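As an aside on how such active/decoy datasets are typically used, retrieval of actives ranked above decoys is often summarized by an enrichment factor. A minimal sketch under our own naming (the function and its arguments are illustrative, not from the review):

```python
def enrichment_factor(ranked_is_active, fraction=0.01):
    """Enrichment factor at a given fraction of the ranked list.

    `ranked_is_active` is a score-ordered list of 1 (active) / 0 (decoy).
    EF = (hit rate in the top fraction) / (hit rate in the whole list).
    """
    n = len(ranked_is_active)
    top = ranked_is_active[:max(1, int(n * fraction))]
    hit_rate_top = sum(top) / len(top)
    hit_rate_all = sum(ranked_is_active) / n
    return hit_rate_top / hit_rate_all

# Hypothetical screen of 100 compounds (10 actives): 5 actives land in
# the top 10% of the ranking, so EF@10% = (5/10) / (10/100) = 5.
ranked = [1]*5 + [0]*5 + [1]*5 + [0]*85
print(enrichment_factor(ranked, fraction=0.1))  # 5.0
```

An EF of 1.0 corresponds to random selection, which is why biased decoy sets (too easy to separate from actives) inflate apparent performance.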

  9. OECD/NEA burnup credit criticality benchmarks phase IIIB: Burnup calculations of BWR fuel assemblies for storage and transport

    International Nuclear Information System (INIS)

    Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya

    2002-02-01

    The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated BWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of ±10% relative to the average, although some results, especially 155Eu and the gadolinium isotopes, exceeded the band, which will require further investigation. Pin-wise burnup results agreed well among the participants. The results for the infinite neutron multiplication factor k∞ also accorded well with each other for void fractions of 0 and 40%; however, some results deviated noticeably from the averaged value for the void fraction of 70%. (author)
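The ±10% screening described above amounts to flagging submissions that deviate from the participant average by more than a tolerance. A minimal illustrative sketch (not part of the benchmark specification; the sample values are hypothetical):

```python
def outside_band(results, tol=0.10):
    """Return indices of results deviating more than `tol` (fractional)
    from the mean of all submitted results."""
    mean = sum(results) / len(results)
    return [i for i, r in enumerate(results)
            if abs(r - mean) / mean > tol]

# Hypothetical 155Eu number densities (atoms/barn-cm) from four codes;
# only the last lies outside the +/-10% band around the average.
densities = [1.00e-7, 1.05e-7, 0.97e-7, 1.25e-7]
print(outside_band(densities))  # [3]
```

Nuclides flagged this way (such as 155Eu and the gadolinium isotopes in the report) point at cross-section or decay-data discrepancies worth further investigation.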

  10. OECD/NEA burnup credit criticality benchmarks phase IIIB. Burnup calculations of BWR fuel assemblies for storage and transport

    Energy Technology Data Exchange (ETDEWEB)

    Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]

    2002-02-01

    The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated BWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of ±10% relative to the average, although some results, especially 155Eu and the gadolinium isotopes, exceeded the band, which will require further investigation. Pin-wise burnup results agreed well among the participants. The results for the infinite neutron multiplication factor k∞ also accorded well with each other for void fractions of 0 and 40%; however, some results deviated noticeably from the averaged value for the void fraction of 70%. (author)

  11. Analysis of the BFS-62 critical experiment. A report produced for BNFL (Joint European contribution)

    International Nuclear Information System (INIS)

    Newton, T.D.; Hosking, J.G.; Smith, P.J.

    2004-01-01

    A benchmark analysis for a hybrid UOX/MOX fuelled core of the BN-600 reactor was proposed during the first Research Co-ordination Meeting of the IAEA Co-ordinated Research Project 'Updated Codes and Methods to Reduce Calculational Uncertainties of LMFR Reactivity Effects'. Phase 5 of the benchmark focuses on validation of calculated sodium void coefficient distributions and integral reactivity coefficients by comparison with experimental measurements made in the critical facility BFS-62. The European participation in Phase 5 of the benchmark analyses consists of a joint contribution from France (CEA Cadarache) and the UK (Serco Assurance Winfrith, sponsored by BNFL). Calculations have been performed using the ERANOS code and data system, which has been developed in the framework of the European collaboration on fast reactors. Results are presented in this paper for the sodium void reactivity effect based on calculated values of the absolute core reactivity. The spatial distribution of the void effect, determined using first-order perturbation theory with the diffusion theory approximation, is also presented.

  12. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. 
More long-lived seafloor geodetic measurements are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone.

  13. TRIGA criticality experiment for testing burn-up calculations

    International Nuclear Information System (INIS)

    Persic, Andreja; Ravnik, Matjaz; Zagar, Tomaz

    1999-01-01

    A criticality experiment with partly burned TRIGA fuel is described. 20 wt % enriched standard TRIGA fuel elements initially containing 12 wt % U are used. Their average burn-up is 1.4 MWd. Fuel element burn-up is calculated in 2-D four group diffusion approximation using TRIGLAV code. The burn-up of several fuel elements is also measured by reactivity method. The excess reactivity of several critical and subcritical core configurations is measured. Two core configurations contain the same fuel elements in the same arrangement as were used in the fresh TRIGA fuel criticality experiment performed in 1991. The results of the experiment may be applied for testing the computer codes used for fuel burn-up calculations. (author)

  14. Los Alamos Critical Experiments Facility

    International Nuclear Information System (INIS)

    Malenfant, R.E.

    1991-01-01

    The Critical Experiments Facility of the Los Alamos National Laboratory has been in existence for 45 years. In that period of time, thousands of measurements have been made on assemblies containing every fissionable material in various configurations that included bare metal and compounds of the nitrate, sulfate, fluoride, carbide, and oxide. Techniques developed or applied include Rossi-α, source-jerk, rod oscillator, and replacement measurements. Many of the original measurements of delayed neutrons were performed at the site, and a replica of the Hiroshima weapon was operated at steady state to assist in evaluating the relative biological effectiveness (RBE) of neutrons. Solid, liquid, and gas fissioning systems were run at critical. Operation of this original critical facility has demonstrated the margin of safety that can be obtained through remote operation. Eight accidental excursions have occurred on the site, ranging from 1.5 × 10^16 to 1.2 × 10^17 fissions, with no significant exposure to personnel or damage to the facility beyond the machines themselves -- and in only one case was the machine damaged beyond further use. The present status of the facility, operating procedures, and complement of machines will be described in the context of programmatic activity. New programs will focus on training, validation of criticality alarm systems, experimental safety assessment of process applications, and dosimetry. Special emphasis will be placed on the incorporation of experience from 45 years of operation into present procedures and programs. 3 refs

  15. Benchmark test of JEF-1 evaluation by calculating fast criticalities

    International Nuclear Information System (INIS)

    Pelloni, S.

    1986-06-01

    The JEF-1 basic evaluation was tested by calculating fast critical experiments using the discrete-ordinates transport code ONEDANT with a P3S16 approximation. In each computation a one-dimensional spherical model was used, together with a 174-neutron-group VITAMIN-E structured JEF-1 based nuclear data library, generated at EIR with NJOY and TRANSX-CTR. It is found that the JEF-1 evaluation gives accurate results comparable with ENDF/B-V: eigenvalues agree well within 10 mk, whereas reaction rates deviate by up to 10% from experiment. The U-233 total and fission cross sections seem to be underestimated in the JEF-1 evaluation in the fast energy range between 0.1 and 1 MeV. This confirms previous analysis based on diffusion theory with 71 neutron groups, performed by H. Takano and E. Sartori at the NEA Data Bank. (author)

  16. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research from the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
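A whole-building metric of the kind such guides describe is energy use intensity (EUI), annual energy per unit floor area, compared against a peer benchmark. A minimal sketch; the numeric benchmark below is hypothetical, not drawn from the Labs21 database:

```python
def eui(annual_kwh, floor_area_m2):
    """Energy use intensity: annual site energy per floor area (kWh/m2/yr)."""
    return annual_kwh / floor_area_m2

def vs_benchmark(metric, benchmark):
    """Fractional distance above (+) or below (-) the benchmark value."""
    return (metric - benchmark) / benchmark

# Hypothetical lab building: 2.4 GWh/yr over 8,000 m2 -> 300 kWh/m2/yr,
# compared to an assumed peer benchmark of 250 kWh/m2/yr.
site_eui = eui(annual_kwh=2_400_000, floor_area_m2=8_000)
print(site_eui)                      # 300.0
print(vs_benchmark(site_eui, 250))   # 0.2, i.e., 20% above benchmark
```

A large positive gap flags the building for the system-level metrics (ventilation, plug loads, cooling) that localize where the excess energy goes.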

  17. Benchmark validation by means of pulsed sphere experiment at OKTAVIAN

    Energy Technology Data Exchange (ETDEWEB)

    Ichihara, Chihiro [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.; Hayashi, Shu A.; Kimura, Itsuro; Yamamoto, Junji; Takahashi, Akito

    1998-03-01

    In order to make a benchmark validation of the existing evaluated nuclear data for fusion-related materials, neutron leakage spectra from spherical piles were measured with a time-of-flight technique using the intense 14 MeV neutron source OKTAVIAN, in the energy range from 0.1 to 15 MeV. The neutron energy spectra were obtained as absolute values normalized per source neutron. The measured spectra were compared with those from theoretical calculations using the Monte Carlo neutron transport code MCNP with several libraries processed from the evaluated nuclear data files. Comparison has been made with the spectrum shape, the C/E values of neutron numbers integrated over 4 energy regions, and the calculated spectra unfolded by the number of collisions, especially those after a single collision. The new libraries predicted the experiment fairly well for Li, Cr, Mn, Cu and Mo. For Al, Si, Zr, Nb and W, the new data files could give fair prediction; however, C/E differed by more than 20% for several regions. For LiF, CF2, Ti and Co, no calculation could predict the experiment. A detailed discussion is given for the Cr, Mn and Cu samples. The EFF-2 calculation overestimated the Cr experiment by 24% in the 1-5 MeV neutron energy region, presumably because of overestimation of the inelastic cross section and the 52Cr(n,2n) cross section, and because of problems in the energy and angular distributions of secondary neutrons in EFF-2. For Cu, ENDF/B-VI and EFF-2 overestimated the experiment by about 20-30% in the energy range between 5 and 12 MeV, presumably from a problem in the inelastic scattering cross section. (author)

  18. Medical Education to Enhance Critical Consciousness: Facilitators' Experiences.

    Science.gov (United States)

    Zaidi, Zareen; Vyas, Rashmi; Verstegen, Danielle; Morahan, Page; Dornan, Tim

    2017-11-01

    To analyze educators' experiences of facilitating cultural discussions in two global health professions education programs and what these experiences had taught them about critical consciousness. A multicultural research team conducted in-depth interviews with 16 faculty who had extensive experience facilitating cultural discussions. They analyzed transcripts of the interviews thematically, drawing sensitizing insights from Gramsci's theory of cultural hegemony. Collaboration and conversation helped the team self-consciously examine their positions toward the data set and be critically reflexive. Participant faculty used their prior experience facilitating cultural discussions to create a "safe space" in which learners could develop critical consciousness. During multicultural interactions they recognized and explicitly addressed issues related to power differentials, racism, implicit bias, and gender bias. They noted the need to be "facile in attending to pain" as learners brought up traumatic experiences and other sensitive issues including racism and the impact of power dynamics. They built relationships with learners by juxtaposing and exploring the sometimes-conflicting norms of different cultures. Participants were reflective about their own understanding and tendency to be biased. They aimed to break free of such biases while role modeling how to have the courage to speak up. Experience had given facilitators in multicultural programs an understanding of their responsibility to promote critical consciousness and social justice. How faculty without prior experience or expertise could develop those values and skills is a topic for future research.

  19. The Benchmarking of Integrated Business Structures

    Directory of Open Access Journals (Sweden)

    Nifatova Olena M.

    2017-12-01

    Full Text Available The aim of the article is to study the role of benchmarking in the process of integration of business structures in the aspect of knowledge sharing. The results of studying the essential content of the concept “integrated business structure” and its semantic analysis made it possible to form our own understanding of this category, with an emphasis on the need to consider it in the plane of three projections: legal, economic and organizational. The economic projection of the essential content of integration associations of business units is supported by the organizational projection, which is expressed through such essential aspects as the existence of a single center that makes key decisions; understanding integration as knowledge sharing; and using benchmarking as an exchange of experience on key business processes. Understanding the process of integration of business units in the aspect of knowledge sharing involves obtaining certain information benefits. Using benchmarking as an exchange of experience on key business processes in integrated business structures will help improve the basic production processes and increase the efficiency of activity, both of the individual business unit and of the IBS as a whole.

  20. Experiments on criticality carried out from 1975 till 1980

    International Nuclear Information System (INIS)

    Heinicke, W.; Tischer, A.; Weber, W.J.

    1981-11-01

    The report on hand covers the criticality experiments published from 1975 till 1980. About 90 experiments with the most important related data are listed. They can be called up, with the database system KRITEXP, by 14 different descriptors, or printed in any arrangement or order. This is the basis for a global or purposeful verification of the calculational methods for criticality safety. The proof of reliability of the calculations for the criticality analysis is immediately relevant for the licensing procedure under atomic law for all plants of the nuclear fuel cycle where nuclear fuels are handled. Since no criticality experiments are being carried out in the Federal Republic of Germany, the data collection on hand will help to fill this gap with regard to the assessment of experiments carried out in other countries. (orig.) [de]

  1. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    Burns, Phil; Jenkins, Cloda; Riechmann, Christoph

    2005-01-01

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  2. OECD/NEA BENCHMARK FOR UNCERTAINTY ANALYSIS IN MODELING (UAM) FOR LWRS – SUMMARY AND DISCUSSION OF NEUTRONICS CASES (PHASE I)

    Directory of Open Access Journals (Sweden)

    RYAN N. BRATTON

    2014-06-01

    Full Text Available A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for Uncertainty Analysis in Modeling (UAM) is defined in order to facilitate the development and validation of available uncertainty analysis and sensitivity analysis methods for best-estimate Light Water Reactor (LWR) design and safety calculations. The benchmark has been named the OECD/NEA UAM-LWR benchmark, and has been divided into three phases, each of which focuses on a different portion of the uncertainty propagation in LWR multi-physics and multi-scale analysis. Several different reactor cases are modeled at various phases of a reactor calculation. This paper discusses Phase I, known as the “Neutronics Phase”, which is devoted mostly to the propagation of nuclear data (cross-section) uncertainty throughout steady-state stand-alone neutronics core calculations. Three reactor systems (for which design, operation and measured data are available) are rigorously studied in this benchmark: Peach Bottom Unit 2 BWR, Three Mile Island Unit 1 PWR, and VVER-1000 Kozloduy-6/Kalinin-3. Additional measured data are analyzed, such as the KRITZ LEU criticality experiments and the SNEAK-7A and 7B experiments of the Karlsruhe Fast Critical Facility. Analyzed results include the top five neutron-nuclide reactions which contribute the most to the prediction uncertainty in keff, as well as the uncertainty in key parameters of neutronics analysis such as microscopic and macroscopic cross-sections, six-group decay constants, assembly discontinuity factors, and axial and radial core power distributions. Conclusions are drawn regarding where further studies should be done to reduce uncertainties in key nuclide reaction uncertainties (i.e., 238U radiative capture and inelastic scattering (n, n′)), as well as in the average number of neutrons released per fission event of 239Pu.

  3. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impacts of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and of including "external" gardening demands are investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are made throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn for the robustness of the proposed system.
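A band-rating scheme of the kind proposed above can be sketched as a simple threshold lookup mapping per-person daily water use to a band, so that either behavioural or technological savings move a household up the scale. The thresholds below are invented for illustration, not taken from the paper:

```python
# Hypothetical band thresholds: litres per person per day -> band letter.
BANDS = [(80, "A"), (100, "B"), (120, "C"), (150, "D")]

def water_band(litres_per_person_day):
    """Return the first band whose upper limit covers the usage;
    anything above the last threshold falls into band E."""
    for limit, band in BANDS:
        if litres_per_person_day <= limit:
            return band
    return "E"

print(water_band(95))   # "B"
print(water_band(160))  # "E"
```

Because the band depends only on measured consumption, it is agnostic to how the saving is achieved (low-flow fittings, RWH/GW supplies, or behaviour change), which is the property the paper emphasizes.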

  4. The Department of Energy nuclear criticality safety program

    International Nuclear Information System (INIS)

    Felty, J.R.

    2004-01-01

    This paper broadly covers key events and activities from which the Department of Energy Nuclear Criticality Safety Program (NCSP) evolved. The NCSP maintains fundamental infrastructure that supports operational criticality safety programs. This infrastructure includes continued development and maintenance of key calculational tools, differential and integral data measurements, benchmark compilation, development of training resources, hands-on training, and web-based systems to enhance information preservation and dissemination. The NCSP was initiated in response to Defense Nuclear Facilities Safety Board Recommendation 97-2, Criticality Safety, and evolved from a predecessor program, the Nuclear Criticality Predictability Program, that was initiated in response to Defense Nuclear Facilities Safety Board Recommendation 93-2, The Need for Critical Experiment Capability. This paper also discusses the role Dr. Sol Pearlstein played in helping the Department of Energy lay the foundation for a robust and enduring criticality safety infrastructure.

  5. What’s so Critical about Critical Neuroscience? -Rethinking Experiment, Enacting Critique

    Directory of Open Access Journals (Sweden)

    Des Fitzgerald

    2014-05-01

    In the midst of on-going hype about the power and potency of the new brain sciences, scholars within ‘Critical Neuroscience’ have called for a more nuanced and sceptical neuroscientific knowledge-practice. Drawing especially on the Frankfurt School, they urge neuroscientists towards a more critical approach – one that re-inscribes the objects and practices of neuroscientific knowledge within webs of social, cultural, historical and political-economic contingency. This paper is an attempt to open up the black-box of ‘critique’ within Critical Neuroscience itself. Specifically, we argue that limiting enactments of critique to the invocation of context misses the force of what a highly-stylized and tightly-bound neuroscientific experiment can actually do. We show that, within the neuroscientific experiment itself, the world-excluding and context-denying ‘rules of the game’ may also enact critique, in novel and surprising forms, while remaining formally independent of the workings of society, and culture, and history. To demonstrate this possibility, we analyze the Optimally Interacting Minds paradigm, a neuroscientific experiment that used classical psychophysical methods to show that, in some situations, people worked better as a collective, and not as individuals – a claim that works precisely against reactionary tendencies that prioritise individual over collective agency, but that was generated and legitimized entirely within the formal, context-denying conventions of neuroscientific experimentation. At the heart of this paper is a claim that it was precisely the rigours and rules of the experimental game that allowed these scientists to enact some surprisingly critical, and even radical, gestures. We conclude by suggesting that, in the midst of large-scale neuroscientific initiatives, it may be 'experiment,' and not 'context,' that forms the meeting-ground between neuro-biological and socio-political research practices.

  6. Critical experiments analysis by ABBN-90 constant system

    Energy Technology Data Exchange (ETDEWEB)

    Tsiboulia, A.; Nikolaev, M.N.; Golubev, V. [Institute of Physics and Power Engineering, Obninsk (Russian Federation)] [and others]

    1997-06-01

    The ABBN-90 is a new version of the well-known Russian group-constant system ABBN. The included constants were calculated from files of evaluated nuclear data in the BROND-2, ENDF/B-VI, and JENDL-3 libraries. The ABBN-90 is intended for the calculation of different types of nuclear reactors and radiation shielding; calculations of criticality safety and reactivity accidents are also supported by this constant set. Validation of the ABBN-90 set was made using a computerized bank of evaluated critical experiments. This bank includes the results of experiments conducted in Russia and abroad on compact spherical assemblies with different reflectors, fast critical assemblies, and fuel/water-solution criticalities. This report presents the results of the calculational analysis of the whole collection of critical experiments. All calculations were performed with the ABBN-90 group-constant system. Discrepancies revealed between experimental and calculational results, and their possible causes, are discussed. The INDECS system of codes and archives is also described. This system includes three computerized banks: LEMEX, which consists of evaluated experiments and their calculational results; LSENS, which consists of sensitivity coefficients; and LUND, which consists of group-constant covariance matrices. The INDECS system permits us to estimate the accuracy of neutronics calculations. A discussion of the reliability of such estimations is finally presented. 16 figs.

  7. Benchmarking Multilayer-HySEA model for landslide generated tsunami. NTHMP validation process.

    Science.gov (United States)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated the NTHMP to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models where the source is seismic. To perform the above-mentioned validation process, a set of seven candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid-slide and deformable-slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list. The Multilayer-HySEA model, which includes non-hydrostatic effects, has been used to perform all the benchmark problems dealing with laboratory experiments proposed at the workshop organized at Texas A&M University - Galveston on January 9-11, 2017 by the NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government research project SIMURISK (MTM2015-70490-C02-01-R) and the University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  8. The benchmark experiment on slab beryllium with D–T neutrons for validation of evaluated nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Nie, Y., E-mail: nieyb@ciae.ac.cn [Science and Technology on Nuclear Data Laboratory, China Institute of Atomic Energy, Beijing 102413 (China); Ren, J.; Ruan, X.; Bao, J. [Science and Technology on Nuclear Data Laboratory, China Institute of Atomic Energy, Beijing 102413 (China); Han, R. [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Zhang, S. [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Inner Mongolia University for the Nationalities, Inner Mongolia, Tongliao 028000 (China); Huang, H.; Li, X. [Science and Technology on Nuclear Data Laboratory, China Institute of Atomic Energy, Beijing 102413 (China); Ding, Y. [Science and Technology on Nuclear Data Laboratory, China Institute of Atomic Energy, Beijing 102413 (China); School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Wu, H.; Liu, P.; Zhou, Z. [Science and Technology on Nuclear Data Laboratory, China Institute of Atomic Energy, Beijing 102413 (China)

    2016-04-15

    Highlights: • Evaluated data for beryllium are validated by a high-precision benchmark experiment. • Leakage neutron spectra from a pure beryllium slab are measured at 61° and 121° using the time-of-flight method. • The experimental results are compared with MCNP-4B calculations using evaluated data from different libraries. - Abstract: Beryllium is the most favored neutron multiplier candidate for solid breeder blankets of future fusion power reactors. However, beryllium nuclear data are presented differently in modern nuclear data evaluations. In order to validate the evaluated nuclear data on beryllium, in the present study a benchmark experiment has been performed at the China Institute of Atomic Energy (CIAE). Neutron leakage spectra from pure beryllium slab samples were measured at 61° and 121° using the time-of-flight method. The experimental results were compared with those calculated by MCNP-4B simulation, using the evaluated data of beryllium from the CENDL-3.1, ENDF/B-VII.1 and JENDL-4.0 libraries. From the comparison between the measured and calculated spectra, it was found that the calculations based on CENDL-3.1 overestimated the spectra in the energy range from about 3 to 12 MeV at 61°, while at 121° all the libraries led to underestimation below 3 MeV.

  9. The benchmark experiment on slab beryllium with D–T neutrons for validation of evaluated nuclear data

    International Nuclear Information System (INIS)

    Nie, Y.; Ren, J.; Ruan, X.; Bao, J.; Han, R.; Zhang, S.; Huang, H.; Li, X.; Ding, Y.; Wu, H.; Liu, P.; Zhou, Z.

    2016-01-01

    Highlights: • Evaluated data for beryllium are validated by a high-precision benchmark experiment. • Leakage neutron spectra from a pure beryllium slab are measured at 61° and 121° using the time-of-flight method. • The experimental results are compared with MCNP-4B calculations using evaluated data from different libraries. - Abstract: Beryllium is the most favored neutron multiplier candidate for solid breeder blankets of future fusion power reactors. However, beryllium nuclear data are presented differently in modern nuclear data evaluations. In order to validate the evaluated nuclear data on beryllium, in the present study a benchmark experiment has been performed at the China Institute of Atomic Energy (CIAE). Neutron leakage spectra from pure beryllium slab samples were measured at 61° and 121° using the time-of-flight method. The experimental results were compared with those calculated by MCNP-4B simulation, using the evaluated data of beryllium from the CENDL-3.1, ENDF/B-VII.1 and JENDL-4.0 libraries. From the comparison between the measured and calculated spectra, it was found that the calculations based on CENDL-3.1 overestimated the spectra in the energy range from about 3 to 12 MeV at 61°, while at 121° all the libraries led to underestimation below 3 MeV.

  10. SARNET2 benchmark on air ingress experiments QUENCH-10, -16

    International Nuclear Information System (INIS)

    Fernandez-Moguel, Leticia; Bals, Christine; Beuzet, Emilie; Bratfisch, Christian; Coindreau, Olivia; Hózer, Zoltan; Stuckert, Juri; Vasiliev, Alexander; Vryashkova, Petya

    2014-01-01

    Highlights: • Two similar QUENCH air ingress experiments were analysed with eight different codes. • Eight institutions participated in the study. • Differences between the codes were mostly small to moderate during the pre-oxidation phase. • Differences between the codes were larger during the air phase. • The study has shown that there are physical processes that should be further studied. - Abstract: The QUENCH-10 (Q-10) and QUENCH-16 (Q-16) experiments were chosen as a SARNET2 code benchmark (SARNET2-COOL-D5.4) exercise to assess the status of modelling air ingress sequences and to compare the capabilities of the various codes used for accident analyses, specifically ATHLET-CD (GRS and RUB), ICARE-CATHARE (IRSN), MAAP (EDF), MELCOR (INRNE and PSI), SOCRAT (IBRAE), and RELAP/SCDAPSim (PSI). Both experiments addressed air ingress into an overheated core following earlier partial oxidation in steam. Q-10 was performed with extensive pre-oxidation, moderate/high air flow rate and high temperatures at onset of reflood (max T_pct = 2200 K), while Q-16 was performed with limited pre-oxidation, low air flow rate and relatively low temperatures at reflood initiation (max T_pct = 1870 K). Variables relating to the major signatures (thermal response, hydrogen generation, oxide layer development, oxygen and nitrogen consumption and reflood behaviour) were compared globally and/or at selected locations. In each simulation, the same input models and assumptions are used for both experiments, differing only in respect of the boundary conditions. However, some slight idealisations were made to the assumed boundary conditions in order to avoid ambiguities in the code-to-code comparisons; in this way, it was possible to focus more easily on the key phenomena and hence make the results of the exercise more transparent. Remarks are made concerning the capability of physical modelling within the codes, description of the experiment facility and test conduct as specified in the code input

  11. Criticality experiments with fast flux test facility fuel pins

    International Nuclear Information System (INIS)

    Bierman, S.R.

    1990-11-01

    A United States Department of Energy program was initiated during the early seventies at the Hanford Critical Mass Laboratory to obtain experimental criticality data in support of the Liquid Metal Fast Breeder Reactor Program. The criticality experiments program was to provide basic physics data for clean, well-defined conditions expected to be encountered in the handling of plutonium-uranium fuel mixtures outside reactors. One task of this criticality experiments program was concerned with obtaining data on PuO₂-UO₂ fuel rods containing 20--30 wt % plutonium. To obtain these data, a series of experiments was performed over a period of about twelve years. The experimental data obtained during this time are summarized and the associated experimental assemblies are described. 8 refs., 7 figs

  12. Jezebel: Reconstructing a Critical Experiment from 60 Years Ago

    Energy Technology Data Exchange (ETDEWEB)

    Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-15

    The Jezebel experiment of 1954-1955 was a very small, nearly-spherical, nearly-bare (unreflected), nearly-homogeneous assembly of plutonium alloyed with gallium. This experiment was used to determine the critical mass of spherical, bare, homogeneous Pu-alloy. In 1956, the critical mass of Pu-alloy was determined to be 16.45 ± 0.05 kg. The experiment was reevaluated in 1969 using logbooks from the 1950s and updated nuclear cross sections. The critical mass of Pu-alloy was determined to be 16.57 ± 0.10 kg. In 2013, the ²³⁹Pu Jezebel experiment was again reevaluated, this time using detailed geometry and materials models and modern nuclear cross sections in high-fidelity Monte Carlo neutron transport calculations. Documentation from the 1950s was often inconsistent or missing altogether, and assumptions had to be made. The critical mass of Pu-alloy was determined to be 16.624 ± 0.075 kg. Historic documents were subsequently found that validated some of the 2013 assumptions and invalidated others. In 2016, the newly found information was used to once again reevaluate the ²³⁹Pu Jezebel experiment. The critical mass of Pu-alloy was determined to be 16.624 ± 0.065 kg. This talk will discuss each of these evaluations, focusing on the calculation of the uncertainty as well as the critical mass. We call attention to the ambiguity, consternation, despair, and euphoria involved in reconstructing the historic Jezebel experiment. This talk is quite accessible for undergraduate students as well as non-majors.
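Uncertainty totals like the ± 0.065 kg quoted above are typically rolled up from independent component uncertainties (mass, density, geometry, impurities, ...) combined in quadrature. A minimal sketch, with component values that are purely illustrative and not the actual Jezebel uncertainty budget:

```python
import math

# Sketch: combining independent uncertainty components in quadrature.
# The component values below are invented for illustration only.
def combined_uncertainty(components: list[float]) -> float:
    """Root-sum-square of independent 1-sigma components."""
    return math.sqrt(sum(c * c for c in components))

mass_kg = 16.624
sigma = combined_uncertainty([0.040, 0.035, 0.030])  # illustrative components
print(f"{mass_kg} +/- {sigma:.3f} kg")
```

The point the abstract makes is that the hard part is not this arithmetic but deciding, from incomplete 1950s documentation, which components belong in the list and how large each one is.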

  13. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem

  14. International benchmark tests of the FENDL-1 Nuclear Data Library

    International Nuclear Information System (INIS)

    Fischer, U.

    1997-01-01

    An international benchmark validation task has been conducted to validate the fusion evaluated nuclear data library FENDL-1 through data tests against integral 14 MeV neutron experiments. The main objective of this task was to qualify the FENDL-1 working libraries for fusion applications and to elaborate recommendations for further data improvements. Several laboratories and institutions from the European Union, Japan, the Russian Federation and US have contributed to the benchmark task. A large variety of existing integral 14 MeV benchmark experiments was analysed with the FENDL-1 working libraries for continuous energy Monte Carlo and multigroup discrete ordinate calculations. Results of the benchmark analyses have been collected, discussed and evaluated. The major findings, conclusions and recommendations are presented in this paper. With regard to the data quality, it is summarised that fusion nuclear data have reached a high confidence level with the available FENDL-1 data library. With few exceptions this holds for the materials of highest importance for fusion reactor applications. As a result of the performed benchmark analyses, some existing deficiencies and discrepancies have been identified that are recommended for removal in the forthcoming FENDL-2 data file. (orig.)

  15. Academic Productivity in Psychiatry: Benchmarks for the H-Index.

    Science.gov (United States)

    MacMaster, Frank P; Swansburg, Rose; Rittenbach, Katherine

    2017-08-01

    Bibliometrics play an increasingly critical role in the assessment of faculty for promotion and merit increases. Bibliometrics is the statistical analysis of publications, aimed at evaluating their impact. The objective of this study is to describe h-index and citation benchmarks in academic psychiatry. Faculty lists were acquired from online resources for all academic departments of psychiatry listed as having residency training programs in Canada (as of June 2016). Potential authors were then searched on Web of Science (Thomson Reuters) for their corresponding h-index and total number of citations. The sample included 1683 faculty members in academic psychiatry departments. Restricted to those with a rank of assistant, associate, or full professor resulted in 1601 faculty members (assistant = 911, associate = 387, full = 303). h-index and total citations differed significantly by academic rank. Both were highest in the full professor rank, followed by associate, then assistant. The range in each, however, was large. This study provides the initial benchmarks for the h-index and total citations in academic psychiatry. Regardless of any controversies or criticisms of bibliometrics, they are increasingly influencing promotion, merit increases, and grant support. As such, benchmarking by specialties is needed in order to provide needed context.
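The h-index reported in this study is straightforward to compute from a citation list: it is the largest h such that the author has h papers with at least h citations each. A minimal sketch:

```python
# Compute an author's h-index from per-paper citation counts.
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= `rank` citations
        else:
            break
    return h

print(h_index([25, 8, 5, 4, 3, 1]))  # -> 4
```

Benchmarking by rank, as the study does, then amounts to comparing the distribution of these values within each group of assistant, associate, and full professors.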

  16. 2008 ULTRASONIC BENCHMARK STUDIES OF INTERFACE CURVATURE--A SUMMARY

    International Nuclear Information System (INIS)

    Schmerr, L. W.; Huang, R.; Raillon, R.; Mahaut, S.; Leymarie, N.; Lonne, S.; Song, S.-J.; Kim, H.-J.; Spies, M.; Lupien, V.

    2009-01-01

    In the 2008 QNDE ultrasonic benchmark session researchers from five different institutions around the world examined the influence that the curvature of a cylindrical fluid-solid interface has on the measured NDE immersion pulse-echo response of a flat-bottom hole (FBH) reflector. This was a repeat of a study conducted in the 2007 benchmark to try to determine the sources of differences seen in 2007 between model-based predictions and experiments. Here, we will summarize the results obtained in 2008 and analyze the model-based results and the experiments.

  17. Assessing reactor physics codes capabilities to simulate fast reactors on the example of the BN-600 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Vladimir [Scientific and Engineering Centre for Nuclear and Radiation Safety (SES NRS), Moscow (Russian Federation); Bousquet, Jeremy [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Garching (Germany)

    2016-11-15

    This work aims to assess the capabilities of reactor physics codes (initially validated for thermal reactors) to simulate sodium-cooled fast reactors. The BFS-62-3A critical experiment from the BN-600 Hybrid Core Benchmark Analyses was chosen for the investigation. Monte Carlo codes (KENO from SCALE and SERPENT 2.1.23) and the deterministic diffusion code DYN3D-MG are applied to calculate the neutronic parameters. It was found that the multiplication factor and reactivity effects calculated by KENO and SERPENT using the ENDF/B-VII.0 continuous-energy library are in good agreement with each other and with the measured benchmark values. Few-group macroscopic cross sections, required for DYN3D-MG, were prepared by applying different methods implemented in SCALE and SERPENT. The DYN3D-MG results for a simplified benchmark show reasonable agreement with the Monte Carlo results and measured values, supporting the use of DYN3D-MG for coupled deterministic analysis of sodium-cooled fast reactors.

  18. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes ("Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks," R.E. Glass, Sandia National Laboratories, 1985; "Sample Problem Manual for Benchmarking of Cask Analysis Codes," R.E. Glass, Sandia National Laboratories, 1988; "Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages," R.E. Glass, et al., Sandia National Laboratories, 1988) used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in "Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks," R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  19. Status on benchmark testing of CENDL-3

    CERN Document Server

    Liu Ping

    2002-01-01

    CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has been finished and recently distributed for benchmark analysis. The processing was carried out using the NJOY nuclear data processing code system. The calculations and analysis of benchmarks were done with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A. The calculated results were compared with the experimental results and with results based on ENDF/B-VI. In most thermal and fast uranium criticality benchmarks, the k_eff values calculated with CENDL-3 were in good agreement with the experimental results. In the plutonium fast cores, the k_eff values were improved significantly with CENDL-3. This is due to the reevaluation of the fission spectrum and elastic angular distributions of ²³⁹Pu and ²⁴⁰Pu. CENDL-3 underestimated the k_eff values compared with other evaluated data libraries for most spherical or cylindrical assemblies of plutonium or uranium with beryllium.

  20. Shielding benchmark test

    International Nuclear Information System (INIS)

    Kawai, Masayoshi

    1984-01-01

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments for neutron transmission through an iron block, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made with the shielding analysis code system RADHEAT-V4 developed at JAERI. The calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The D-T neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily by using the revised JENDL data for fusion neutronics calculations. (author)
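The C/E (calculation-to-experiment) comparison used in analyses like this one is simply the ratio of calculated to measured values, point by point. A minimal sketch with illustrative placeholder values (not the KFK or ORNL data):

```python
# C/E comparison sketch: calculated vs. measured detector responses.
# The numbers are illustrative placeholders, not benchmark data.
def c_over_e(calculated: list[float], measured: list[float]) -> list[float]:
    """Per-point ratio of calculation to experiment; 1.0 means perfect agreement."""
    return [c / e for c, e in zip(calculated, measured)]

calc = [1.10, 0.95, 1.33]
meas = [1.00, 1.00, 1.00]
ratios = c_over_e(calc, meas)
max_dev = max(abs(r - 1.0) for r in ratios)  # worst relative deviation
print([f"{r:.2f}" for r in ratios], f"max deviation: {max_dev:.0%}")
```

Statements such as "C/E values are about 1.1" or "agree within 33%" summarize exactly this kind of ratio set.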

  1. Fast critical experiments in FCA and their analysis

    International Nuclear Information System (INIS)

    Hirota, Jitsuya

    1984-02-01

    The JAERI Fast Critical Facility FCA went critical for the first time in April 1967. Since then, critical experiments and their analysis were carried out on thirty-five assemblies until March 1982. This report summarizes the many achievements obtained in these fifteen years and points out disagreements observed between calculation and experiment for further study. A series of mock-up experiments for the Experimental Fast Reactor JOYO, a theoretical and numerical study of the adjustment of group constants using integral data, and the development of a proton-recoil counter system for fast neutron spectrum measurement won high praise. Studies of the Doppler effect of structural materials, the effect of fission product accumulation on sodium-void worth, axially heterogeneous cores and actinide cross sections attracted world-wide attention. Significant contributions were also made to the Prototype Fast Breeder Reactor MONJU through partial mock-up experiments. Disagreements between calculation and experiment were observed in the following items: reaction rate distribution and reactivity worth of B₄C absorbers in the radial blanket; central reactivity worth in cores with reflectors; plate/pin fuel heterogeneity effects on criticality; sodium-void effect in the central core region; Doppler effect of structural materials; core neutron spectrum near large resonances of iron and oxygen; effect of fission product accumulation on sodium-void worth; physics properties of heterogeneous cores; reactivity change resulting from fuel slumping; and so on. Further efforts should be made to resolve these disagreements by recalculating the experimental results with newly developed data and methods and by carrying out experiments intended to identify the causes of disagreement. (author)

  2. IAEA coordinated research project (CRP) on 'Analytical and experimental benchmark analyses of accelerator driven systems'

    International Nuclear Information System (INIS)

    Abanades, Alberto; Aliberti, Gerardo; Gohar, Yousry; Talamo, Alberto; Bornos, Victor; Kiyavitskaya, Anna; Carta, Mario; Janczyszyn, Jerzy; Maiorino, Jose; Pyeon, Cheolho; Stanculescu, Alexander; Titarenko, Yury; Westmeier, Wolfram

    2008-01-01

    In December 2005, the International Atomic Energy Agency (IAEA) started a Coordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems'. The overall objective of the CRP, performed within the framework of the Technical Working Group on Fast Reactors (TWGFR) of the IAEA's Nuclear Energy Department, is to increase the capability of interested Member States in developing and applying advanced reactor technologies in the area of long-lived radioactive waste utilization and transmutation. The specific objective of the CRP is to improve the present understanding of the coupling of an external neutron source (e.g. a spallation source) with a multiplicative sub-critical core. The participants are performing computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. The CRP aims at integrating some of the planned experimental demonstration projects of the coupling between a sub-critical core and an external neutron source (e.g. YALINA Booster in Belarus, and Kyoto University's Critical Assembly (KUCA)). The objective of these experimental programs is to validate computational methods, obtain high-energy nuclear data, characterize the performance of sub-critical assemblies driven by external sources, and develop and improve techniques for sub-criticality monitoring. The paper summarizes preliminary results obtained to date for some of the CRP benchmarks. (authors)

  3. Analysis of Fresh Fuel Critical Experiments Appropriate for Burnup Credit Validation

    International Nuclear Information System (INIS)

    DeHart, M.D.

    1995-01-01

    The ANSI/ANS-8.1 standard requires that calculational methods used in determining criticality safety limits for applications outside reactors be validated by comparison with appropriate critical experiments. This report provides a detailed description of 34 fresh-fuel critical experiments and their analyses using the SCALE-4.2 code system and the 27-group ENDF/B-IV cross-section library. The 34 critical experiments were selected based on geometry, material, and neutron interaction characteristics that are applicable to a transportation cask loaded with pressurized-water-reactor spent fuel. These 34 experiments are a representative subset of a much larger database of low-enriched uranium and mixed-oxide critical experiments. A statistical approach is described and used to obtain an estimate of the bias and uncertainty in the calculational methods and to predict a confidence limit for a calculated neutron multiplication factor. The SCALE-4.2 results for a superset of approximately 100 criticals are included in the uncertainty analyses, but descriptions of the individual criticals are not included
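The bias-and-uncertainty idea can be sketched simply: each benchmark is a critical configuration, so its true multiplication factor is 1.0, and the spread of calculated k_eff values around 1.0 gives the code bias and its uncertainty. The k_eff values and the crude two-sigma limit below are illustrative only; the report's formal method uses proper tolerance factors, not this simple normal estimate.

```python
import statistics

# Illustrative sketch of bias/uncertainty estimation from critical benchmarks.
# Each experiment is critical, so the expected k_eff is exactly 1.0.
# The k_eff values below are invented, not the report's SCALE-4.2 results.
k_eff = [0.9978, 1.0012, 0.9991, 1.0005, 0.9969, 0.9984, 1.0001]

mean = statistics.fmean(k_eff)
bias = mean - 1.0               # negative bias: the code underpredicts k_eff
s = statistics.stdev(k_eff)     # sample standard deviation of the results
lower_limit = mean - 2.0 * s    # crude ~95% one-sided limit (coverage factor 2)

print(f"bias = {bias:+.4f}, s = {s:.4f}, lower limit = {lower_limit:.4f}")
```

A safety analyst would then require a calculated k_eff (plus margins) to stay below a limit derived this way before declaring a configuration subcritical.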

  4. Benchmark test of evaluated nuclear data files for fast reactor neutronics application

    International Nuclear Information System (INIS)

    Chiba, Go; Hazama, Taira; Iwai, Takehiko; Numata, Kazuyuki

    2007-07-01

    A benchmark test of the latest evaluated nuclear data files, JENDL-3.3, JEFF-3.1 and ENDF/B-VII.0, has been carried out for fast reactor neutronics applications. For this benchmark test, experimental data obtained at fast critical assemblies and fast power reactors are utilized. In addition to comparing numerical solutions with the experimental data, we have identified, by virtue of sensitivity analyses, several cross sections in which differences between the three nuclear data files significantly affect the numerical solutions. This benchmark test concludes that ENDF/B-VII.0 predicts the neutronics characteristics of fast neutron systems better than the other nuclear data files. (author)
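The sensitivity coefficients used to pinpoint influential cross sections are, to first order, the relative change in k_eff per relative change in a cross section, S = (dk/k)/(dσ/σ). A sketch with invented numbers (the benchmark itself computes these with dedicated sensitivity tools, not hand perturbations):

```python
# First-order sensitivity coefficient sketch: S = (dk/k) / (dsigma/sigma).
# The perturbed k_eff and cross-section change below are invented examples.
def sensitivity(k_ref: float, k_pert: float, rel_xs_change: float) -> float:
    """Relative change in k_eff per relative change in a cross section."""
    return ((k_pert - k_ref) / k_ref) / rel_xs_change

# e.g. a +1% change in a capture cross section lowering k_eff by 0.2%:
S = sensitivity(k_ref=1.0000, k_pert=0.9980, rel_xs_change=0.01)
print(f"S = {S:.2f}")  # -> S = -0.20
```

Cross sections with large |S| are exactly the ones where library-to-library differences propagate strongly into the calculated k_eff, which is how the comparison narrows down to "several cross sections".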

  5. The art and science of using routine outcome measurement in mental health benchmarking.

    Science.gov (United States)

    McKay, Roderick; Coombs, Tim; Duerden, David

    2014-02-01

    To report and critique the application of routine outcome measurement data when benchmarking Australian mental health services. The experience of the authors as participants in and facilitators of benchmarking activities is augmented by a review of the literature regarding mental health benchmarking in Australia. Although the published literature is limited, in practice, routine outcome measures, in particular the Health of the Nation Outcome Scales (HoNOS) family of measures, are used in a variety of benchmarking activities. Their use in exploring similarities and differences in consumers between services and in the outcomes of care is illustrated. This requires the rigour of science in data management and interpretation, supplemented by the art that comes from clinical experience, a desire to reflect on clinical practice, and the flexibility to use incomplete data to explore clinical practice. Routine outcome measurement data can be used in a variety of ways to support mental health benchmarking. With the increasing sophistication of information development in mental health, the opportunity to become involved in benchmarking will continue to increase. The techniques used during benchmarking and the insights gathered may prove useful to support reflection on practice by psychiatrists and other senior mental health clinicians.

  6. Criticality: static profiling for real-time programs

    DEFF Research Database (Denmark)

    Brandner, Florian; Hepp, Stefan; Jordan, Alexander

    2014-01-01

    With the increasing performance demand in real-time systems it becomes more and more important to provide feedback to programmers and software development tools on the performance-relevant code parts of a real-time program. So far, this information was limited to an estimation of the worst-case timing. For a view covering the entire code base, tools in the spirit of program profiling are required. This work proposes an efficient approach to compute worst-case timing information for all code parts of a program using a complementary metric, called criticality. Every statement of a program is assigned a criticality value. Experiments using well-established real-time benchmark programs show an interesting distribution of the criticality values, revealing considerable amounts of highly critical as well as uncritical code. The metric thus provides ideal information to programmers and software development tools to optimize the critical code parts.

  7. Benchmark analysis of SPERT-IV reactor with Monte Carlo code MVP

    International Nuclear Information System (INIS)

    Motalab, M.A.; Mahmood, M.S.; Khan, M.J.H.; Badrun, N.H.; Lyric, Z.I.; Altaf, M.H.

    2014-01-01

    Highlights: • MVP was used for SPERT-IV core modeling. • Neutronics analysis of the SPERT-IV reactor was performed. • Calculations were performed to estimate critical rod height and excess reactivity. • Neutron flux, time-integrated neutron flux and Cd-ratio were also calculated. • Calculated values agree with experimental data. - Abstract: The benchmark experiment of the SPERT-IV D-12/25 reactor core has been analyzed with the Monte Carlo code MVP using cross-section libraries based on JENDL-3.3. The MVP simulation was performed for the clean and cold core. The estimated values of keff at the experimental critical rod height and of the core excess reactivity were within 5% of the experimental data. Thermal neutron flux profiles at different vertical and horizontal positions of the core were also estimated, as was the cadmium ratio at different points of the core. All estimated results have been compared with the experimental results, and generally good agreement has been found between the experimentally determined and the calculated values.
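Comparisons of calculated and measured keff, like the one summarized above, are conventionally reported as calculated-to-experimental (C/E) ratios and as reactivity differences in pcm. A minimal sketch of that bookkeeping; the keff values below are invented for illustration, not the SPERT-IV results:

```python
# Sketch: expressing a calculated vs. experimental k-eff comparison as a
# C/E ratio and a reactivity difference in pcm (1 pcm = 1e-5 dk/k).
# The k-eff values used here are illustrative, not benchmark data.

def reactivity_pcm(keff: float) -> float:
    """Reactivity rho = (k - 1) / k, expressed in pcm."""
    return (keff - 1.0) / keff * 1e5

def compare(k_calc: float, k_exp: float) -> tuple[float, float]:
    """Return (C/E ratio, reactivity difference in pcm)."""
    ce = k_calc / k_exp
    d_rho = reactivity_pcm(k_calc) - reactivity_pcm(k_exp)
    return ce, d_rho

ce, d_rho = compare(1.0032, 1.0000)
print(f"C/E = {ce:.4f}, delta-rho = {d_rho:+.0f} pcm")
```

Expressing the discrepancy in pcm rather than as a raw keff difference makes results comparable across cores with different excess reactivities.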

  8. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), in Indonesian termed holistic quality management, because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a systematic and continuous process of measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  9. Boiling water reactor turbine trip (TT) benchmark

    International Nuclear Information System (INIS)

    2005-01-01

    In the field of coupled neutronics/thermal-hydraulics computation there is a need to enhance scientific knowledge in order to develop advanced modelling techniques for new nuclear technologies and concepts as well as for current applications. Recently developed 'best-estimate' computer code systems for modelling 3-D coupled neutronics/thermal-hydraulics transients in nuclear cores and for coupling core phenomena and system dynamics (PWR, BWR, VVER) need to be compared against each other and validated against results from experiments. International benchmark studies have been set up for this purpose. The present report is the second in a series of four and summarises the results of the first benchmark exercise, which identifies the key parameters and important issues concerning the thermal-hydraulic system modelling of the transient, with specified core average axial power distribution and fission power time transient history. The transient addressed is a turbine trip in a boiling water reactor, involving pressurization events in which the coupling between core phenomena and system dynamics plays an important role. In addition, the data made available from experiments carried out at the Peach Bottom 2 reactor (a GE-designed BWR/4) make the present benchmark particularly valuable. (author)

  10. Choice Complexity, Benchmarks and Costly Information

    NARCIS (Netherlands)

    Harms, Job; Rosenkranz, S.; Sanders, M.W.J.L.

    In this study we investigate how two types of information interventions, providing a benchmark and providing costly information on option ranking, can improve decision-making in complex choices. In our experiment subjects made a series of incentivized choices between four hypothetical financial

  11. Validation of neutron-transport calculations in benchmark facilities for improved damage-fluence predictions

    International Nuclear Information System (INIS)

    Williams, M.L.; Stallmann, F.W.; Maerker, R.E.; Kam, F.B.K.

    1983-01-01

    An accurate determination of damage fluence accumulated by reactor pressure vessels (RPV) as a function of time is essential in order to evaluate the vessel integrity for both pressurized thermal shock (PTS) transients and end-of-life considerations. The desired accuracy for neutron exposure parameters such as displacements per atom or fluence (E > 1 MeV) is of the order of 20 to 30%. However, these types of accuracies can only be obtained realistically by validation of nuclear data and calculational methods in benchmark facilities. The purposes of this paper are to review the needs and requirements for benchmark experiments, to discuss the status of current benchmark experiments, to summarize results and conclusions obtained so far, and to suggest areas where further benchmarking is needed

  12. FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark

    International Nuclear Information System (INIS)

    Sawan, M.E.

    1994-12-01

    During the IAEA Advisory Group Meeting on ''Improved Evaluations and Integral Data Testing for FENDL'' held in Garching near Munich, Germany in the period 12-16 September 1994, the Working Group II on ''Experimental and Calculational Benchmarks on Fusion Neutronics for ITER'' recommended that a calculational benchmark representative of the ITER design should be developed. This report describes the neutronics and shielding calculational benchmark available for scientists interested in performing analysis for this benchmark. (author)

  13. A suite of benchmark and challenge problems for enhanced geothermal systems

    Energy Technology Data Exchange (ETDEWEB)

    White, Mark; Fu, Pengcheng; McClure, Mark; Danko, George; Elsworth, Derek; Sonnenthal, Eric; Kelkar, Sharad; Podgorney, Robert

    2017-11-06

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study represented U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study: benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995, which involved two phases of research covering stimulation, development, and circulation in two separate reservoirs. The challenge problems posed specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of

  14. Plant improvements through the use of benchmarking analysis

    International Nuclear Information System (INIS)

    Messmer, J.R.

    1993-01-01

    As utilities approach the turn of the century, customer and shareholder satisfaction is threatened by rising costs. Environmental compliance expenditures, coupled with low load growth and aging plant assets, are forcing utilities to operate existing resources in a more efficient and productive manner. PSI Energy set out in the spring of 1992 on a benchmarking mission to compare four major coal-fired plants against others of similar size and makeup, with the goal of finding the best operations in the country. Following extensive analysis of the 'Best in Class' operations, detailed goals and objectives were established for each plant in seven critical areas. Three critical processes requiring rework were identified and required an integrated effort from all plants. The plant improvement process has already resulted in higher operational productivity, increased emphasis on planning, and lower costs due to effective material management. While every company seeks improvement, goals are often set in an ambiguous manner. Benchmarking aids in setting realistic goals based on others' actual accomplishments. This paper describes how the utility's short-term goals will move it toward being a lower-cost producer

  15. Heavy water critical experiments on plutonium lattice

    International Nuclear Information System (INIS)

    Miyawaki, Yoshio; Shiba, Kiminori

    1975-06-01

    This report summarizes the physics studies on plutonium lattices made in the Heavy Water Critical Experiment Section of PNC. Using the Deuterium Critical Assembly, physics studies on plutonium lattices have been carried out since 1972. Experiments on the following items were performed in a core having a 22.5 cm square lattice pitch: (1) material buckling, (2) lattice parameters, (3) local power distribution factor, (4) gross flux distribution in a two-region core, and (5) control rod worth. The experimental results were compared with theoretical ones calculated by the METHUSELAH II code. It is concluded from this study that calculation by the METHUSELAH II code has acceptable accuracy in the prediction of plutonium lattices. (author)

  16. Description and exploitation of benchmarks involving 149Sm, a fission product taking part in the burnup credit in spent fuels

    International Nuclear Information System (INIS)

    Anno, J.; Poullot, G.

    1995-01-01

    Up to now, there has been no benchmark to validate fission product (FP) cross sections in criticality safety calculations. The Institute for Protection and Nuclear Safety (IPSN) has begun an experimental program on 6 FPs (103Rh, 133Cs, 143Nd, 149Sm, 152Sm, and 155Gd, daughter of 155Eu), which alone account for a decrease of reactivity equal to half that of all FPs in spent fuels (except Xe and I). Presented here are the experiments with 149Sm and the results obtained with the APOLLO I-MORET III calculation codes. 11 experiments were carried out in a 3.5 l zircaloy tank containing slightly nitric acid solutions of samarium (96.9 wt% 149Sm) at concentrations of 0.1048, 0.2148 and 0.6262 g/l. The tank was placed in the middle of arrays of UO2 rods (4.742 wt% 235U) at a square pitch of 13 mm. The underwater height of the rods is the critical parameter. In addition, 7 experiments were performed with the same apparatus with water and boron, demonstrating good experimental representativeness and good accuracy of the calculations. As the reactivity worth of the Sm tank is between 2000 and 6000 x 10-5, the benchmarks are well representative, and the cumulative absorption ratios show that 149Sm is well qualified below 1 eV. (authors). 8 refs., 7 figs., 6 tabs

  17. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  18. Near-critical density filling of the SF6 fluid cell for the ALI-R-DECLIC experiment in weightlessness

    Science.gov (United States)

    Lecoutre, C.; Marre, S.; Garrabos, Y.; Beysens, D.; Hahn, I.

    2018-05-01

    Analyses of ground-based experiments on near-critical fluids to precisely determine their density can be hampered by several effects, especially the density stratification of the sample, the liquid wetting behavior at the cell walls, and a possible singular curvature of the "rectilinear" diameter of the density coexistence curve. For the latter effect, theoretical efforts have been made to understand the amplitude and shape of the critical hook of the density diameter, which departs from the predictions of the so-called ideal lattice-gas model of the uniaxial 3D-Ising universality class. In order to optimize the observation of these subtle effects on the position and shape of the liquid-vapor meniscus in the particular case of SF6, we have designed and filled a cell that is highly symmetrized with respect to any median plane of the total fluid volume. In such a quasi-perfectly symmetrical fluid volume, the precise detection of the meniscus position and shape for different orientations of the cell with respect to the Earth's gravity acceleration field becomes a sensitive probe to estimate the mean density filling of the cell and to test the singular diameter effects. After integration of this cell in the ALI-R insert, we take advantage of the high optical and thermal performance of the DECLIC Engineering Model. Here we present the sensitive imaging method providing the precise ground-based SF6 benchmark data. From the analysis of these data it is found that the temperature dependence of the meniscus position does not reflect the expected critical hook in the rectilinear density diameter. The off-density criticality of the cell is therefore accurately estimated, ahead of near-future experiments using the same ALI-R insert in the DECLIC facility already on board the International Space Station.

  19. Benchmarking for On-Scalp MEG Sensors.

    Science.gov (United States)

    Xie, Minshu; Schneiderman, Justin F; Chukharkin, Maxim L; Kalabukhov, Alexei; Riaz, Bushra; Lundqvist, Daniel; Whitmarsh, Stephen; Hamalainen, Matti; Jousmaki, Veikko; Oostenveld, Robert; Winkler, Dag

    2017-06-01

    We present a benchmarking protocol for quantitatively comparing emerging on-scalp magnetoencephalography (MEG) sensor technologies to their counterparts in state-of-the-art MEG systems. As a means of validation, we compare a high-critical-temperature superconducting quantum interference device (high-Tc SQUID) with the low-Tc SQUIDs of an Elekta Neuromag TRIUX system in MEG recordings of auditory and somatosensory evoked fields (SEFs) on one human subject. We measure the expected signal gain for the auditory-evoked fields (deeper sources) and notice some unfamiliar features in the on-scalp sensor-based recordings of SEFs (shallower sources). The experimental results serve as a proof of principle for the benchmarking protocol. This approach is straightforward, general to various on-scalp MEG sensors, and convenient to use on human subjects. The unexpected features in the SEFs suggest on-scalp MEG sensors may reveal information about neuromagnetic sources that is otherwise difficult to extract from state-of-the-art MEG recordings. As the first systematically established on-scalp MEG benchmarking protocol, magnetic sensor developers can employ this method to prove the utility of their technology in MEG recordings. Further exploration of the SEFs with on-scalp MEG sensors may reveal unique information about their sources.

  20. Analyses and results of the OECD/NEA WPNCS EGUNF benchmark phase II. Technical report; Analysen und Ergebnisse zum OECD/NEA WPNCS EGUNF Benchmark Phase II. Technischer Bericht

    Energy Technology Data Exchange (ETDEWEB)

    Hannstein, Volker; Sommer, Fabian

    2017-05-15

    The report summarizes the studies performed and the results obtained in the frame of the Phase II benchmarks of the Expert Group on Used Nuclear Fuel (EGUNF) of the Working Party on Nuclear Criticality Safety (WPNCS) of the Nuclear Energy Agency (NEA) of the Organisation for Economic Co-operation and Development (OECD). The studies specified within the benchmarks have been realized to the full extent. The scope of the benchmarks was the comparison, for a generic BWR fuel element with gadolinium-containing fuel rods, of several computer codes and cross-section libraries from different international working groups and institutions. The computational model used allows an evaluation of the accuracy of fuel rod inventory calculations and of their influence on BWR burnup credit calculations.

  1. Gas cooled fast reactor benchmarks for JNC and Cea neutronic tools assessment

    International Nuclear Information System (INIS)

    Rimpault, G.; Sugino, K.; Hayashi, H.

    2005-01-01

    In order to verify the adequacy of JNC and CEA computational tools for the definition of GCFR (gas-cooled fast reactor) core characteristics, GCFR neutronic benchmarks have been performed. The benchmarks have been carried out on two different cores: 1) a conventional gas-cooled fast reactor (EGCR) core with pin-type fuel, and 2) an innovative He-cooled coated-particle fuel (CPF) core. The core characteristics studied include: -) criticality (effective multiplication factor, or k-effective), -) instantaneous breeding gain (BG), -) core Doppler effect, and -) coolant depressurization reactivity. K-effective and coolant depressurization reactivity at the EOEC (End Of Equilibrium Cycle) state were calculated, since these values are the most critical characteristics in the core design. In order to check the influence of differences between the depletion calculation systems, a simple depletion calculation benchmark was performed. Values such as: -) burnup reactivity loss, -) mass balance of heavy metals and fission products (FP) were calculated. The results for the core design characteristics calculated by the JNC and CEA sides agree quite satisfactorily in terms of core conceptual design study. Potential features for improving the GCFR computational tools, such as the way to calculate the breeding gain accurately, were discovered during the course of this benchmark. Different ways to improve the accuracy of the calculations have also been identified; in particular, investigation of nuclear data for steel is important for the EGCR core, as is that of lumped fission products in both cores. The outcome of this benchmark is already satisfactory and will help to design GCFR cores more precisely. (authors)

  2. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

    In two articles an overview is given of the activities in the Dutch industry and energy sector with respect to benchmarking. In benchmarking, the operational processes of competing businesses are compared in order to improve one's own performance. Benchmark covenants on energy efficiency between the Dutch government and industrial sectors have contributed to growth in the number of benchmark surveys in the energy-intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies

  3. Compilation of MCNP data library based on JENDL-3T and test through analysis of benchmark experiment

    International Nuclear Information System (INIS)

    Sakurai, K.; Sasamoto, N.; Kosako, K.; Ishikawa, T.; Sato, O.; Oyama, Y.; Narita, H.; Maekawa, H.; Ueki, K.

    1989-01-01

    Based on the evaluated nuclear data library JENDL-3T, a temporary version of JENDL-3, a pointwise neutron cross-section library for the MCNP code has been compiled, covering 39 nuclides from H-1 to Am-241 that are important for shielding calculations. Compilation is performed with a code system consisting of the nuclear data processing code NJOY-83 and the library compilation code MACROS. The validity of the code system and the reliability of the library are verified by analysing benchmark experiments. (author)

  4. Benchmark validation by means of pulsed sphere experiment at OKTAVIAN

    Energy Technology Data Exchange (ETDEWEB)

    Ichihara, Chihiro [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.; Hayashi, S.A.; Kimura, Itsuro; Yamamoto, Junji; Takahashi, Akito

    1997-03-01

    The new version of the Japanese nuclear data library, JENDL-3.2, has recently been released. The JENDL Fusion File, which adopts DDX representations for secondary neutrons, was also improved with the new evaluation method. In parallel, the FENDL nuclear data project, which compiles a nuclear data library for fusion-related research, has been conducted partly under the auspices of the International Atomic Energy Agency (IAEA). The first version, FENDL-1, consists of JENDL-3.1, ENDF/B-VI, BROND-2 and EFF-1 and was released in 1995. Work on the second version, FENDL-2, is now ongoing. Benchmark validation of the nuclear data libraries has been performed to help select the candidates for FENDL-2. The benchmark experiments were conducted at OKTAVIAN of Osaka University. The sample spheres were constructed by filling spherical shells with the sample materials. The leakage neutron spectra from the sphere piles were measured with a time-of-flight method. The measured spectra were compared with theoretical calculations using MCNP 4A and the libraries processed from JENDL-3.1, JENDL-3.2, the JENDL Fusion File, and FENDL-1. The JENDL Fusion File and JENDL-3.2 gave almost the same predictions for the experiment, and both are satisfactory for Li, Cr, Mn, Cu, Zr, Nb and Mo, whereas for Al, LiF, CF2, Si, Ti, Co and W there is some discrepancy. However, they gave better predictions than the calculations using the library from FENDL-1, except for W. (author)

  5. Critical Experiments With Aqueous Solutions of 233UO2(NO3)2

    International Nuclear Information System (INIS)

    Thomas, J.T.

    2001-01-01

    This report provides the critical experimenter's interpretations and descriptions of informal critical experiment logbook notes and associated information (e.g., experimental equipment designs/sketches, chemical and isotopic analyses, etc.) for the purpose of formally documenting the results of critical experiments performed in the late 1960s at the Oak Ridge Critical Experiments Facility. The experiments were conducted with aqueous solutions of 97.6 wt% 233U uranyl nitrate having uranium densities varying between about 346 g U/l and 45 g U/l. Criticality was achieved with single simple units (e.g., cylinders and spheres) and with spaced subcritical simple cylindrical units arranged in unreflected, water-reflected, and polyethylene-reflected critical arrays

  6. Validation of KENO V.a: Comparison with critical experiments

    International Nuclear Information System (INIS)

    Jordan, W.C.; Landers, N.F.; Petrie, L.M.

    1986-12-01

    Section 1 of this report documents the validation of KENO V.a against 258 critical experiments. The experiments considered were primarily high- or low-enriched uranium systems. The results indicate that the KENO V.a Monte Carlo criticality program accurately calculates a broad range of critical experiments. A substantial number of the calculations showed a positive or negative bias in excess of 1.5% in k-effective (keff). Classes of criticals which show a bias include 3% enriched green blocks, highly enriched uranyl fluoride slab arrays, and highly enriched uranyl nitrate arrays. If these biases are properly taken into account, the KENO V.a code can be used with confidence for the design and criticality safety analysis of uranium-containing systems. Section 2 of this report documents the results of an investigation into the causes of the bias observed in Sect. 1. The results of this study indicate that the bias seen in Sect. 1 is caused by code bias, cross-section bias, reporting bias, and modeling bias. There is evidence that many of the experiments used in this validation and in previous validations are not adequately documented. The uncertainty in the experimental parameters overshadows bias caused by the code and cross sections and prohibits code validation to better than about 1% in keff. 48 refs., 19 figs., 19 tabs

  7. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage; the final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for analysing and plotting results is described, with some examples. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  8. Benchmark physics experiment of metallic-fueled LMFBR at FCA. 2

    International Nuclear Information System (INIS)

    Iijima, Susumu; Oigawa, Hiroyuki; Ohno, Akio; Sakurai, Takeshi; Nemoto, Tatsuo; Osugi, Toshitaka; Satoh, Kunio; Hayasaka, Katsuhisa; Bando, Masaru.

    1993-10-01

    The availability of data and methods for the design of a metallic-fueled LMFBR is examined using the experimental results of FCA assembly XVI-1. The experiments included criticality and reactivity coefficients such as Doppler, sodium void, fuel shifting and fuel expansion. Reaction rate ratios, sample worths and control rod worths were also measured. The analysis used three-dimensional diffusion calculations and JENDL-2 cross sections. Predictions of the assembly XVI-1 reactor physics parameters agree reasonably well with the measured values, but for some reactivity coefficients, such as Doppler, large-zone sodium void and fuel shifting, further improvement of the calculation method is needed. (author)

  9. Benchmarking in Czech Higher Education: The Case of Schools of Economics

    Science.gov (United States)

    Placek, Michal; Ochrana, František; Pucek, Milan

    2015-01-01

    This article describes the use of benchmarking in universities in the Czech Republic and academics' experiences with it. It is based on research conducted among academics from economics schools in Czech public and private universities. The results identified several issues regarding the utilisation and understanding of benchmarking in the Czech…

  10. Core Benchmarks Descriptions

    International Nuclear Information System (INIS)

    Pavlovichev, A.M.

    2001-01-01

    Current regulations require that the design of new fuel cycles for nuclear power installations include a calculational justification performed with certified computer codes. This guarantees that the calculational results obtained will be within the limits of the declared uncertainties indicated in the certificate issued for the corresponding computer code by Gosatomnadzor of the Russian Federation (GAN). A formal justification of the declared uncertainties is the comparison of calculational results obtained with a commercial code against the results of experiments, or of calculational tests computed, with a defined uncertainty, by certified precision codes of the MCU type or others. The present level of international cooperation is enlarging the bank of experimental and calculational benchmarks acceptable for the certification of commercial codes used for the design of fuel loadings with MOX fuel. In particular, work on forming the list of calculational benchmarks for certification of the TVS-M code as applied to MOX fuel assembly calculations is practically finished. The results of these activities are presented

  11. Validation of MCNP6.1 for Criticality Safety of Pu-Metal, -Solution, and -Oxide Systems

    Energy Technology Data Exchange (ETDEWEB)

    Kiedrowski, Brian C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Conlin, Jeremy Lloyd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kahler, III, Albert C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kersting, Alyssa R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, Donald K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Walker, Jessie L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-05-13

    Guidance is offered to the Los Alamos National Laboratory Nuclear Criticality Safety division towards developing an Upper Subcritical Limit (USL) for MCNP6.1 calculations with ENDF/B-VII.1 nuclear data for three classes of problems: Pu-metal, -solution, and -oxide systems. A benchmark suite containing 1,086 benchmarks is prepared, and a sensitivity/uncertainty (S/U) method with a generalized linear least squares (GLLS) data adjustment is used to reject outliers, bringing the total to 959 usable benchmarks. For each class of problem, S/U methods are used to select relevant experimental benchmarks, and the calculational margin is computed using extreme value theory. A portion of the margin of subcriticality is defined considering both a detection limit for errors in codes and data and uncertainty/variability in the nuclear data library. The latter employs S/U methods with a GLLS data adjustment to find representative nuclear data covariances constrained by integral experiments, which are then used to compute uncertainties in keff from nuclear data. The USLs for the classes of problems are as follows: Pu metal, 0.980; Pu solutions, 0.973; dry Pu oxides, 0.978; dilute Pu oxide-water mixes, 0.970; and intermediate-spectrum Pu oxide-water mixes, 0.953.
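The record above derives its USLs with sensitivity/uncertainty methods and extreme value theory; a much simpler, traditional construction subtracts a benchmark-suite bias, its uncertainty, and an administrative margin from unity. The sketch below illustrates only that simplified formulation, with invented keff values and a two-sigma uncertainty treatment chosen for illustration, not the method of the report:

```python
import statistics

# Sketch of a traditional bias-based Upper Subcritical Limit (USL):
#   USL = 1 + bias - bias_uncertainty - administrative_margin
# This simplified formulation is NOT the S/U + extreme-value method of
# the report above, and the benchmark k-eff values are invented.

def usl(keff_calc: list[float], margin: float = 0.05) -> float:
    bias = statistics.mean(keff_calc) - 1.0   # mean deviation from critical
    if bias > 0.0:
        bias = 0.0                            # a positive bias is not credited
    sigma = statistics.stdev(keff_calc)       # spread across the suite
    return 1.0 + bias - 2.0 * sigma - margin  # two-sigma bias uncertainty

suite = [0.998, 1.001, 0.996, 0.999, 1.002, 0.997]
print(f"USL = {usl(suite):.4f}")
```

A calculated keff for an application is then judged acceptable only if it falls below the USL, so the code bias and its scatter are charged against the analyst rather than the safety margin.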

  12. Review of microscopic integral cross section data in fundamental reactor dosimetry benchmark neutron fields

    International Nuclear Information System (INIS)

    Fabry, A.; McElroy, W.N.; Kellogg, L.S.; Lippincott, E.P.; Grundl, J.A.; Gilliam, D.M.; Hansen, G.E.

    1976-01-01

    This paper is intended to review and critically discuss microscopic integral cross-section measurement and calculation data for fundamental reactor dosimetry benchmark neutron fields. Specifically, the review covers the following fundamental benchmarks: the spontaneous californium-252 fission neutron spectrum standard field; the thermal-neutron-induced uranium-235 fission neutron spectrum standard field; the (secondary) intermediate-energy standard neutron field at the center of the Mol-ΣΣ, NISUS, and ITN-ΣΣ facilities; the reference neutron field at the center of the Coupled Fast Reactor Measurement Facility; the reference neutron field at the center of the 10% enriched uranium metal, cylindrical, fast critical assembly; and the (primary) Intermediate-Energy Standard Neutron Field.
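The integral quantities compared in such benchmark fields are spectrum-averaged cross sections, sigma_avg = integral(sigma(E) phi(E) dE) / integral(phi(E) dE). A minimal numerical sketch, assuming a Watt shape with illustrative 252Cf-like parameters for the spectrum and an invented threshold cross section (neither is evaluated data):

```python
import math

# Sketch: spectrum-averaged cross section
#   sigma_avg = int sigma(E) phi(E) dE / int phi(E) dE
# phi(E): Watt spectrum with illustrative 252Cf-like parameters (MeV).
# sigma(E): an invented smooth threshold reaction, NOT evaluated data.

def watt(e_mev: float, a: float = 1.025, b: float = 2.926) -> float:
    """Unnormalized Watt fission spectrum shape."""
    return math.exp(-e_mev / a) * math.sinh(math.sqrt(b * e_mev))

def sigma(e_mev: float, threshold: float = 1.5) -> float:
    """Hypothetical threshold cross section in barns."""
    if e_mev < threshold:
        return 0.0
    return 0.5 * (1.0 - math.exp(-(e_mev - threshold)))

def spectrum_average(n: int = 20000, e_max: float = 20.0) -> float:
    """Midpoint-rule quadrature of the two integrals on [0, e_max] MeV."""
    de = e_max / n
    num = den = 0.0
    for i in range(n):
        e = (i + 0.5) * de
        phi = watt(e)
        num += sigma(e) * phi * de
        den += phi * de
    return num / den

print(f"spectrum-averaged sigma ~ {spectrum_average():.4f} b")
```

Because the spectrum shape appears in both integrals, the normalization of phi(E) cancels, which is why only the shape parameters of the field need to be well characterized.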

  13. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)]

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  14. Development and benchmark of high energy continuous-energy neutron cross Section library HENDL-ADS/MC

    International Nuclear Information System (INIS)

    Chen Chong; Wang Minghuang; Zou Jun; Xu Dezheng; Zeng Qin

    2012-01-01

    The ADS (accelerator-driven sub-critical system) spans a wide energy range and features complex energy spectrum structures and strong physical effects. Hence, the existing nuclear data libraries cannot fully meet the needs of nuclear analysis for ADS. In order to perform nuclear analysis for ADS systems, a point-wise data library HENDL-ADS/MC (hybrid evaluated nuclear data library) was produced by the FDS team. Meanwhile, to test the availability and reliability of the HENDL-ADS/MC data library, a series of shielding and criticality safety benchmarks were performed. To further validate and qualify the reliability of the high-energy cross sections of the HENDL-ADS/MC library, a series of high-energy neutronics integral experiments have been performed. The testing results confirm the accuracy and reliability of HENDL-ADS/MC. (authors)

  15. KUCA critical experiments using MEU fuel (II)

    Energy Technology Data Exchange (ETDEWEB)

    Kanda, Keiji; Hayashi, Masatoshi; Shiroya, Seiji; Kobayashi, Keiji; Fukui, Hiroshi; Mishima, Kaichiro; Shibata, Toshikazu [Research Reactor Institute, Kyoto University, Kumatori-cho, Sennan-gun, Osaka (Japan)]

    1983-09-01

    Due to mutual concerns in the USA and Japan about the proliferation potential of highly-enriched uranium (HEU), a joint study program was initiated between Argonne National Laboratory (ANL) and the Kyoto University Research Reactor Institute (KURRI) in 1978. In accordance with the Reduced Enrichment for Research and Test Reactors (RERTR) program, alternatives were studied for reducing the enrichment of the fuel to be used in the Kyoto University High Flux Reactor (KUHFR). The KUHFR has a distinct feature in its core configuration: it is a coupled core. Each annular-shaped core is light-water-moderated and placed within a heavy-water reflector at a certain distance from the other. The Phase A reports of the joint ANL-KURRI program, independently prepared by the two laboratories in February 1979, concluded that the use of medium-enrichment uranium (MEU, 45%) in the KUHFR is feasible, pending the results of the critical experiments in the Kyoto University Critical Assembly (KUCA) and of the burnup test in the Oak Ridge Research Reactor (ORR). An application for safety review (Reactor Installation License) of MEU fuel to be used in the KUCA was submitted to the Japanese Government in March 1980, and a license was issued in August 1980. Subsequently, the application for 'Authorization before Construction' was submitted and was authorized in September 1980. Fabrication of MEU fuel elements for the KUCA experiments by CERCA in France was started in September 1980 and completed in March 1981. The critical experiments in the KUCA with MEU fuel were started on a single core in May 1981 as a first step. The first critical state of the core using MEU fuel was achieved at 3:12 p.m. on May 12, 1981. After that, the reactivity effects of the outer side-plates containing boron burnable poison were measured. At the Munich meeting in September 1981, we presented a paper on the critical mass and the reactivity of burnable poison in the MEU core. Since then, we have carried out the following experiments

  16. KUCA critical experiments using MEU fuel (II)

    International Nuclear Information System (INIS)

    Kanda, Keiji; Hayashi, Masatoshi; Shiroya, Seiji; Kobayashi, Keiji; Fukui, Hiroshi; Mishima, Kaichiro; Shibata, Toshikazu

    1983-01-01

    Due to mutual concerns in the USA and Japan about the proliferation potential of highly-enriched uranium (HEU), a joint study program was initiated between Argonne National Laboratory (ANL) and the Kyoto University Research Reactor Institute (KURRI) in 1978. In accordance with the Reduced Enrichment for Research and Test Reactors (RERTR) program, alternatives were studied for reducing the enrichment of the fuel to be used in the Kyoto University High Flux Reactor (KUHFR). The KUHFR has a distinct feature in its core configuration: it is a coupled core. Each annular-shaped core is light-water-moderated and placed within a heavy-water reflector at a certain distance from the other. The Phase A reports of the joint ANL-KURRI program, independently prepared by the two laboratories in February 1979, concluded that the use of medium-enrichment uranium (MEU, 45%) in the KUHFR is feasible, pending the results of the critical experiments in the Kyoto University Critical Assembly (KUCA) and of the burnup test in the Oak Ridge Research Reactor (ORR). An application for safety review (Reactor Installation License) of MEU fuel to be used in the KUCA was submitted to the Japanese Government in March 1980, and a license was issued in August 1980. Subsequently, the application for 'Authorization before Construction' was submitted and was authorized in September 1980. Fabrication of MEU fuel elements for the KUCA experiments by CERCA in France was started in September 1980 and completed in March 1981. The critical experiments in the KUCA with MEU fuel were started on a single core in May 1981 as a first step. The first critical state of the core using MEU fuel was achieved at 3:12 p.m. on May 12, 1981. After that, the reactivity effects of the outer side-plates containing boron burnable poison were measured. At the Munich meeting in September 1981, we presented a paper on the critical mass and the reactivity of burnable poison in the MEU core. Since then, we have carried out the following experiments

  17. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  18. Qinshan CANDU NPP outage performance improvement through benchmarking

    International Nuclear Information System (INIS)

    Jiang Fuming

    2005-01-01

    With the increasingly fierce competition in the deregulated energy market, the optimization of outage duration has become one of the focal points for nuclear power plant owners around the world, and various ways are being sought to shorten NPP outage durations. Great efforts have been made in the Light Water Reactor (LWR) family with the concepts of benchmarking and evaluation, which greatly reduced outage durations and improved outage performance. The average capacity factor of LWRs has improved greatly over the last three decades and is now close to 90%. CANDU (pressurized heavy water reactor) stations, with their unique features of on-power refuelling and of nuclear fuel remaining in the reactor throughout a planned outage, face more stringent safety requirements during planned outages. In addition, these features introduce more variation in the critical path of planned outages at different stations. In order to benchmark against the best practices in CANDU stations, the Third Qinshan Nuclear Power Company (TQNPC) has initiated a benchmarking program among the CANDU stations aiming to standardize the outage maintenance windows and optimize the outage duration. The initial benchmarking has resulted in an optimized outage duration at the Qinshan CANDU NPP and the formulation of its first long-term outage plan. This paper describes the benchmarking work that has proven useful for optimizing outage duration at the Qinshan CANDU NPP, and the vision of further optimizing the duration with joint effort from the CANDU community. (authors)

  19. Validation of the code ETOBOX/BOXER for UO2 LWR lattices based on the experiments TRX, BAPL-UO2 and other critical experiments

    International Nuclear Information System (INIS)

    Paratte, J.M.

    1985-07-01

    The EIR code system for LWR arrays is based on cross sections extracted from ENDF/B-4 and ENDF/B-5 by the code ETOBOX. The calculation method for the arrays (code BOXER), as well as the cross sections, was applied to the CSEWG benchmark experiments TRX-1 to 4 and BAPL-UO 2 -1 to 3. The results are compared to the measured values and to calculations by other institutions, demonstrating that the deviations of the parameters calculated by BOXER are typical for the cross sections used. A large number of critical experiments were calculated using the measured material bucklings in order to bring to light possible trends in the calculation of the multiplication factor k eff. First, it emerged that the error bounds of B m 2 evaluated in the measurements are often optimistic. Two-dimensional calculations improved the results of the cell calculations. With a mean scatter of 4 to 5 mk in the normal arrays, the multiplication factors calculated by BOXER are satisfactory. However, one has to take into account a slight trend of k eff to grow with the moderator-to-fuel ratio and the enrichment. (author)

  20. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  1. Criticality experiments of the years 1981 and 1982

    International Nuclear Information System (INIS)

    Heinicke, W.; Tischer, A.

    1983-01-01

    This report presents a collection of published criticality experiments performed in 1981 and 1982, and thus continues the collection of experimental data of this type begun with GRS report A-644 of November 1981, which covers criticality experiments of the years 1975 to 1980. The report gives the main data of about 30 publications which, just as those cited in the GRS report, can be retrieved from the improved KRITEXP data base using 14 index terms and printed out in arbitrary sequence. The collection of experimental data is of particular value with regard to the licensing of all installations forming part of the nuclear fuel cycle, which is subject to the atomic energy law and requires the verification of computed criticality analyses by experimental data. (orig.) [de]

  2. Scale-4 analysis of pressurized water reactor critical configurations: Volume 5, North Anna Unit 1 Cycle 5

    International Nuclear Information System (INIS)

    Bowman, S.M.; Suto, T.

    1996-10-01

    ANSI/ANS 8.1 requires that calculational methods for away-from-reactor (AFR) criticality safety analyses be validated against experiment. This report summarizes part of the ongoing effort to benchmark AFR criticality analysis methods using selected critical configurations from commercial PWRs. Codes and data in the SCALE-4 code system were used. This volume documents the SCALE system analysis of one reactor critical configuration for North Anna Unit 1 Cycle 5. The KENO V.a criticality calculations for the North Anna 1 Cycle 5 beginning-of-cycle model yielded a value for k eff of 1.0040 ± 0.0005
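The validation logic behind such reactor-critical benchmarks can be sketched briefly: since the configurations are known to be critical (true keff = 1 exactly), the mean deviation of the calculated keff values from unity estimates the calculational bias. The values below are invented for illustration, not the North Anna results:

```python
from statistics import mean, stdev

# Hypothetical KENO-style keff results for configurations known to be
# critical (true keff = 1); not the North Anna numbers.
keff_calc = [1.0040, 0.9985, 1.0012, 0.9998, 1.0023]

bias = mean(keff_calc) - 1.0        # mean deviation from criticality
spread = stdev(keff_calc)           # scatter of the calculated values
print(f"bias = {bias:+.5f}, spread = {spread:.5f}")
```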

  3. Critical and sub-critical experiments on U-BeO lattices; Experiences critiques et sous-critiques sur reseaux U-BeO

    Energy Technology Data Exchange (ETDEWEB)

    Benoist, P.; Gourdon, Ch.; Martelly, J.; Sagot, M.; Wanner, G. [Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires]; Deniz, V.; Joshi, B.V.; Sahai, K. [Atomic Energy Establishment Trombay (India)]

    1958-07-01

    Sub-critical experiments have allowed us to measure the material buckling of natural uranium-beryllium oxide lattices with a square pitch of 15 cm, made up of uranium bars of 2.60, 2.92, 3.56 and 4.40 cm diameter. A critical experiment was then conducted with hollow 1.35 per cent enriched uranium bars. A study of U-BeO lattices with an 18.03 cm pitch is currently in progress. (author)

  4. Critical and sub-critical experiments on U-BeO lattices; Experiences critiques et sous-critiques sur reseaux U-BeO

    Energy Technology Data Exchange (ETDEWEB)

    Benoist, P.; Gourdon, Ch.; Martelly, J.; Sagot, M.; Wanner, G. [Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires]; Deniz, V.; Joshi, B.V.; Sahai, K. [Atomic Energy Establishment Trombay (India)]

    1958-07-01

    Sub-critical experiments have allowed us to measure the material buckling of natural uranium-beryllium oxide lattices with a square pitch of 15 cm, made up of uranium bars of 2.60, 2.92, 3.56 and 4.40 cm diameter. A critical experiment was then conducted with hollow 1.35 per cent enriched uranium bars. A study of U-BeO lattices with an 18.03 cm pitch is currently in progress. (author)

  5. International Reactor Physics Experiment Evaluation (IRPhE) Project

    International Nuclear Information System (INIS)

    2013-01-01

    The International Reactor Physics Experiment Evaluation (IRPhE) Project aims to provide the nuclear community with qualified benchmark data sets by collecting reactor physics experimental data from nuclear facilities worldwide. More specifically, the objectives of the expert group are as follows: - maintaining an inventory of the experiments that have been carried out and documented; - archiving the primary documents and data released in computer-readable form; - promoting the use of the formats and methods developed and seeking to have them adopted as a standard. For those experiments where interest and priority are expressed by member countries or working parties and executive groups within the NEA, the project provides guidance or co-ordination in: - compiling experiments into a standard, internationally agreed format; - verifying the data, to the extent possible, by reviewing original and subsequently revised documentation, and by consulting with the experimenters or individuals who are familiar with the experiments or the experimental facility; - analysing and interpreting the experiments with current state-of-the-art methods; - publishing the benchmark evaluations electronically. The expert group will: - identify gaps in data and provide guidance on priorities for future experiments; - involve the young generation (Masters and PhD students and young researchers) to find an effective way of transferring know-how in experimental techniques and analysis methods; - provide a tool for improved exploitation of completed experiments for Generation IV reactors; - coordinate its work closely with other NSC experimental work groups, in particular the International Criticality Safety Benchmark Evaluation Project (ICSBEP), the Shielding Integral Benchmark Experiment Data Base (SINBAD) and others, e.g. knowledge preservation in fast reactors of the IAEA, the ANS Joint Benchmark Activities; - keep a close link with the working parties on scientific issues of reactor systems (WPRS), the expert

  6. Issues in benchmarking human reliability analysis methods: A literature review

    International Nuclear Information System (INIS)

    Boring, Ronald L.; Hendrickson, Stacey M.L.; Forester, John A.; Tran, Tuan Q.; Lois, Erasmia

    2010-01-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences among the methods, including their scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted of past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  7. Benchmarking of the FENDL-3 Neutron Cross-Section Data Library for Fusion Applications

    International Nuclear Information System (INIS)

    Fischer, U.; Kondo, K.; Angelone, M.; Batistoni, P.; Villari, R.; Bohm, T.; Sawan, M.; Walker, B.; Konno, C.

    2014-03-01

    This report summarizes the benchmark analyses performed in a joint effort of ENEA (Italy), JAEA (Japan), KIT (Germany), and the University of Wisconsin (USA) with the objective to test and qualify the neutron induced general purpose FENDL-3.0 data library for fusion applications. The benchmark approach consisted of two major steps including the analysis of a simple ITER-like computational benchmark, and a series of analyses of benchmark experiments conducted previously at the 14 MeV neutron generator facilities at ENEA Frascati, Italy (FNG) and JAEA, Tokai-mura, Japan (FNS). The computational benchmark revealed a modest increase of the neutron flux levels in the deep penetration regions and a substantial increase of the gas production in steel components. The comparison to experimental results showed good agreement with no substantial differences between FENDL-3.0 and FENDL-2.1 for most of the responses analysed. There is a slight trend, however, for an increase of the fast neutron flux in the shielding experiment and a decrease in the breeder mock-up experiments. The photon flux spectra measured in the bulk shield and the tungsten experiments are significantly better reproduced with FENDL-3.0 data. In general, FENDL-3, as compared to FENDL-2.1, shows an improved performance for fusion neutronics applications. It is thus recommended to ITER to replace FENDL-2.1 as reference data library for neutronics calculation by FENDL-3.0. (author)

  8. Bibliography for nuclear criticality accident experience, alarm systems, and emergency management

    International Nuclear Information System (INIS)

    Putman, V.L.

    1995-09-01

    The characteristics, detection, and emergency management of nuclear criticality accidents outside reactors have been an important component of criticality safety for as long as the need for this specialized safety discipline has been recognized. The general interest in and importance of such topics receive special emphasis because of the potentially lethal, albeit highly localized, effects of criticality accidents and because of heightened public and regulatory concerns about any undesirable event in the nuclear and radiological fields. This bibliography lists references which are potentially applicable to or interesting for criticality alarm, detection, and warning systems; criticality accident emergency management; and their associated programs. The lists are annotated to assist bibliography users in identifying applicable: industry and regulatory guidance and requirements, with historical development information and comments; criticality accident characteristics, consequences, experiences, and responses; hazard-, risk-, or safety-analysis criteria; CAS design and qualification criteria; CAS calibration, maintenance, repair, and testing criteria; experiences of CAS designers and maintainers; criticality accident emergency management (planning, preparedness, response, and recovery) requirements and guidance; criticality accident emergency management experience, plans, and techniques; methods and tools for analysis; and additional bibliographies

  9. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 5C. Experience data. Working material

    International Nuclear Information System (INIS)

    1999-01-01

    In August 1991, following the SMiRT-11 Conference in Tokyo, a Technical Committee Meeting (TCM) was held on 'Seismic safety issues relating to existing NPPs'. The proceedings of this TCM were subsequently compiled in an IAEA Working Material. One of the main recommendations of this TCM called for the harmonization of criteria and methods used in Member States in the seismic reassessment and upgrading of existing NPPs. Twenty-four institutions from thirteen countries participated in the CRP named 'Benchmark study for the seismic analysis and testing of WWER type NPPs'. Two types of WWER reactors (WWER-1000 and WWER-440/213) were selected for benchmarking, represented by Kozloduy NPP Units 5/6 and Paks NPP, respectively, as prototypes. Consistent with the recommendations of the TCM and the working paper prepared by the subsequent Consultants' Meeting, the focal activity of the CRP was the benchmarking exercises. A similar methodology was followed both for Paks NPP and for Kozloduy NPP Unit 5. Firstly, the NPP (mainly the reactor building) was tested using blast loading generated by a series of explosions from buried TNT charges. Records from this test were obtained at several free-field locations (both downhole and surface), the foundation mat, various elevations of the structures, as well as some tanks and the stack. Then the benchmark participants were provided with structural drawings, soil data and the free-field records of the blast experiment. Their task was to make a blind prediction of the response at preselected locations. The analytical results from these participants were then compared with the results from the test. Although the benchmarking exercises constituted the focus of the CRP, many other interesting problems related to the seismic safety of WWER type NPPs were addressed by the participants. These involved generic studies, i.e. codes and standards used in original WWER designs and their comparison with current international practice; seismic analysis

  10. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection.

  11. Characterization of the CALIBAN Critical Assembly Neutron Spectra using Several Adjustment Methods Based on Activation Foils Measurement

    OpenAIRE

    Casoli Pierre; Grégoire Gilles; Rousseau Guillaume; Jacquet Xavier; Authier Nicolas

    2016-01-01

    CALIBAN is a metallic critical assembly managed by the Criticality, Neutron Science and Measurement Department located on the French CEA Center of Valduc. The reactor is extensively used for benchmark experiments dedicated to the evaluation of nuclear data, for electronic hardening or to study the effect of the neutrons on various materials. Therefore CALIBAN irradiation characteristics and especially its central cavity neutron spectrum have to be very accurately evaluated. In order to streng...
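A minimal sketch of the least-squares flavour of the spectrum-adjustment methods mentioned in this abstract, assuming an invented 3-group response matrix and invented measured reaction rates (a real GLLS treatment would additionally weight by the measurement and prior covariances):

```python
import numpy as np

# Invented 3-group response matrix: rows are activation foils, columns are
# energy groups; entry (i, j) is foil i's response per unit group-j flux.
R = np.array([[1.00, 0.20, 0.05],
              [0.10, 0.80, 0.30],
              [0.02, 0.30, 0.90]])

phi_prior = np.array([1.0, 1.0, 1.0])      # prior group spectrum (guess)
rates_meas = np.array([1.30, 1.15, 1.40])  # "measured" foil reaction rates

# Adjust the prior spectrum so the computed rates reproduce the measured
# ones (unweighted least squares on the rate residuals).
residual = rates_meas - R @ phi_prior
delta, *_ = np.linalg.lstsq(R, residual, rcond=None)
phi_adj = phi_prior + delta
print(np.round(phi_adj, 4))
```

With three foils and three groups the adjustment here is exact; with more foils than groups the same call returns the least-squares compromise.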

  12. Attachment Theory in Supervision: A Critical Incident Experience

    Science.gov (United States)

    Pistole, M. Carole; Fitch, Jenelle C.

    2008-01-01

    Critical incident experiences are a powerful source of counselor development (T. M. Skovholt & P. R. McCarthy, 1988a, 1988b) and are relevant to attachment issues. An attachment theory perspective of supervision is presented and applied to a critical incident case scenario. By focusing on the behavioral systems (i.e., attachment, caregiving, and…

  13. The Pajarito Site operating procedures for the Los Alamos Critical Experiments Facility

    International Nuclear Information System (INIS)

    Malenfant, R.E.

    1991-12-01

    Operating procedures consistent with DOE Order 5480.6 and the American National Standard Safety Guide for the Performance of Critical Experiments are defined for the Los Alamos Critical Experiments Facility (LACEF) of the Los Alamos National Laboratory. These operating procedures supersede and update those previously published in 1983 and apply to any criticality experiment performed at the facility. 11 refs

  14. Validation study of the reactor physics lattice transport code WIMSD-5B by TRX and BAPL critical experiments of light water reactors

    International Nuclear Information System (INIS)

    Khan, M.J.H.; Alam, A.B.M.K.; Ahsan, M.H.; Mamun, K.A.A.; Islam, S.M.A.

    2015-01-01

    Highlights: • To validate the reactor physics lattice code WIMSD-5B by this analysis. • To model TRX and BAPL critical experiments using WIMSD-5B. • To compare the calculated results with experiment and MCNP results. • To rely on the WIMSD-5B code for TRIGA calculations. - Abstract: The aim of this analysis is to validate the reactor physics lattice transport code WIMSD-5B by TRX (thermal reactor-one region lattice) and BAPL (Bettis Atomic Power Laboratory-one region lattice) critical experiments of light water reactors for the neutronics analysis of the 3 MW TRIGA Mark-II research reactor at AERE, Dhaka, Bangladesh. This analysis is achieved through the analysis of the integral parameters of five light water reactor critical experiments TRX-1, TRX-2, BAPL-UO 2 -1, BAPL-UO 2 -2 and BAPL-UO 2 -3 based on the evaluated nuclear data libraries JEFF-3.1 and ENDF/B-VII.1. In integral measurements, these experiments are considered standard benchmark lattices for validating the reactor physics lattice transport code WIMSD-5B as well as evaluated nuclear data libraries. The integral parameters of the said critical experiments are calculated using the reactor physics lattice transport code WIMSD-5B. The calculated integral parameters are compared to the measured values as well as to the earlier published MCNP results based on the Chinese evaluated nuclear data library CENDL-3.0 for assessment of the deterministic calculation. It was found that the calculated integral parameters give mostly reasonable and globally consistent results with the experiment and the MCNP results. Besides, the group constants in WIMS format for the isotopes U-235 and U-238 between the two data files have been compared using the WIMS library utility code WILLIE, and it was found that the group constants are well consistent with each other. Therefore, this analysis constitutes a validation study of the reactor physics lattice transport code WIMSD-5B based on the JEFF-3.1 and ENDF/B-VII.1 libraries and can also be essential to
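Validation studies of this kind typically report the comparison of integral parameters as calculated-to-experiment (C/E) ratios. A minimal sketch, with parameter names and values invented for illustration rather than taken from the TRX/BAPL results:

```python
# C/E (calculated-to-experiment) comparison sketch: a ratio near 1.0
# indicates good agreement for that integral parameter. All names and
# values below are invented for illustration.
measured   = {"k_eff": 1.0000, "rho28": 1.320, "delta25": 0.0987}
calculated = {"k_eff": 0.9973, "rho28": 1.345, "delta25": 0.0979}

ce = {p: calculated[p] / measured[p] for p in measured}
for name, ratio in sorted(ce.items()):
    print(f"{name:8s} C/E = {ratio:.4f}")
```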

  15. Static benchmarking of the NESTLE advanced nodal code

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1997-01-01

    Results from the NESTLE advanced nodal code are presented for multidimensional numerical benchmarks representing four different types of reactors, and predictions from NESTLE are compared with measured data from pressurized water reactors (PWRs). The numerical benchmarks include cases representative of PWRs, boiling water reactors (BWRs), CANDU heavy water reactors (HWRs), and high-temperature gas-cooled reactors (HTGRs). The measured PWR data include critical soluble boron concentrations and isothermal temperature coefficients of reactivity. The results demonstrate that NESTLE correctly solves the multigroup diffusion equations for both Cartesian and hexagonal geometries, that it reliably calculates k eff and reactivity coefficients for PWRs, and that, subsequent to the incorporation of additional thermal-hydraulic models, it will be able to perform accurate calculations for the corresponding parameters in BWRs, HWRs, and HTGRs as well

  16. Benchmarking school nursing practice: the North West Regional Benchmarking Group

    OpenAIRE

    Littler, Nadine; Mullen, Margaret; Beckett, Helen; Freshney, Alice; Pinder, Lynn

    2016-01-01

    It is essential that the quality of care is reviewed regularly through robust processes such as benchmarking to ensure all outcomes and resources are evidence-based so that children and young people’s needs are met effectively. This article provides an example of the use of benchmarking in school nursing practice. Benchmarking has been defined as a process for finding, adapting and applying best practices (Camp, 1994). This concept was first adopted in the 1970s ‘from industry where it was us...

  17. Criticality safety analysis for mockup facility

    International Nuclear Information System (INIS)

    Shin, Young Joon; Shin, Hee Sung; Kim, Ik Soo; Oh, Seung Chul; Ro, Seung Gy; Bae, Kang Mok

    2000-03-01

    Benchmark calculations for the SCALE 4.4 CSAS6 module have been performed for 31 UO 2 fuel, 15 MOX fuel and 10 metal-material criticality experiments, and the calculation biases of the SCALE 4.4 CSAS6 module were found to be 0.00982, 0.00579 and 0.02347, respectively. When CSAS6 is applied to the criticality safety analysis of the mockup facility, in which several kinds of nuclear material components are included, the calculation bias of CSAS6 is conservatively taken to be 0.02347. With the aid of this benchmarked code system, criticality safety analyses for the mockup facility at normal and hypothetical accident conditions have been carried out. The maximum K eff at normal conditions is 0.28356, well below the subcritical limit, K eff = 0.95. In one hypothetical accident condition, the maximum K eff is found to be 0.73527, much lower than the subcritical limit. For another hypothetical accident condition, in which the nuclear material leaks out of its container and spreads or lumps on the floor, it was assumed that the nuclear material forms a slab and that water fills the empty space within it. K eff has been calculated as a function of slab thickness and of the volume ratio of water to nuclear material. The results show that K eff increases as the water volume ratio increases and reaches its maximum when the empty space of the nuclear material is filled with water. The maximum K eff value is 0.93960, lower than the subcritical limit
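The acceptance test implied by such an analysis can be sketched briefly. Whether the reported K eff values already include the benchmark bias is not stated in the abstract, so the function below, which adds the conservative bias on top of the calculated value, is an assumption; the numbers are those quoted above:

```python
# Subcritical-limit check sketch: the calculated keff, conservatively
# increased by the benchmark bias, must stay below the limit. Treating the
# quoted keff values as bias-free is an assumption made for illustration.
SUBCRITICAL_LIMIT = 0.95
BIAS = 0.02347  # most conservative of the three benchmark biases

def biased_keff(keff_calc: float, bias: float = BIAS) -> float:
    return keff_calc + bias

k_normal = biased_keff(0.28356)  # normal-condition maximum keff
print(k_normal, k_normal < SUBCRITICAL_LIMIT)
```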

  18. Review of microscopic integral cross section data in fundamental reactor dosimetry benchmark neutron fields

    International Nuclear Information System (INIS)

    Fabry, A.; McElroy, W.N.; Kellogg, L.S.; Lippincott, E.P.; Grundl, J.A.; Gilliam, D.M.; Hansen, G.E.

    1976-10-01

    The paper reviews and critically discusses microscopic integral cross-section measurement and calculation data for fundamental reactor dosimetry benchmark neutron fields. Specifically, the review covers the following fundamental benchmarks: (1) the spontaneous californium-252 fission neutron spectrum standard field; (2) the thermal-neutron-induced uranium-235 fission neutron spectrum standard field; (3) the (secondary) intermediate-energy standard neutron field at the center of the Mol-ΣΣ, NISUS, and ITN-ΣΣ facilities; (4) the reference neutron field at the center of the Coupled Fast Reactor Measurement Facility (CFRMF); (5) the reference neutron field at the center of the 10 percent enriched uranium metal, cylindrical, fast critical assembly; and (6) the (primary) intermediate-energy standard neutron field

  19. Utilizing benchmark data from the ANL-ZPR diagnostic cores program

    International Nuclear Information System (INIS)

    Schaefer, R. W.; McKnight, R. D.

    2000-01-01

    The support of the criticality safety community is allowing the production of benchmark descriptions of several assemblies from the ZPR Diagnostic Cores Program. The assemblies have high sensitivities to nuclear data for a few isotopes, which can highlight limitations in nuclear data for selected nuclides or in the standard methods used to treat these data. The present work extends the use of the simplified model of the U9 benchmark assembly beyond the validation of k_eff. Further simplifications have been made to produce a data-testing benchmark in the style of the standard CSEWG benchmark specifications. Calculations for this data-testing benchmark are compared to results obtained with more detailed models and methods to determine their biases. These biases, or correction factors, can then be applied when using the less refined methods and models. Data-testing results using Versions IV, V, and VI of the ENDF/B nuclear data are presented for k_eff, f28/f25, c28/f25, and β_eff. These limited results demonstrate the importance of studying integral parameters other than k_eff when trying to improve nuclear data and methods, and the importance of accounting for methods and/or modeling biases when using data-testing results to infer the quality of the nuclear data files

  20. Design and Implementation of a Web-Based Reporting and Benchmarking Center for Inpatient Glucometrics

    Science.gov (United States)

    Schnipper, Jeffrey Lawrence; Messler, Jordan; Ramos, Pedro; Kulasa, Kristen; Nolan, Ann; Rogers, Kendall

    2014-01-01

    Background: Insulin is a top source of adverse drug events in the hospital, and glycemic control is a focus of improvement efforts across the country. Yet, the majority of hospitals have no data to gauge their performance on glycemic control, hypoglycemia rates, or hypoglycemic management. Current tools to outsource glucometrics reports are limited in availability or function. Methods: Society of Hospital Medicine (SHM) faculty designed and implemented a web-based data and reporting center that calculates glucometrics on blood glucose data files securely uploaded by users. Unit labels, care type (critical care, non–critical care), and unit type (eg, medical, surgical, mixed, pediatrics) are defined on upload allowing for robust, flexible reporting. Reports for any date range, care type, unit type, or any combination of units are available on demand for review or downloading into a variety of file formats. Four reports with supporting graphics depict glycemic control, hypoglycemia, and hypoglycemia management by patient day or patient stay. Benchmarking and performance ranking reports are generated periodically for all hospitals in the database. Results: In all, 76 hospitals have uploaded at least 12 months of data for non–critical care areas and 67 sites have uploaded critical care data. Critical care benchmarking reveals wide variability in performance. Some hospitals achieve top quartile performance in both glycemic control and hypoglycemia parameters. Conclusions: This new web-based glucometrics data and reporting tool allows hospitals to track their performance with a flexible reporting system, and provides them with external benchmarking. Tools like this help to establish standardized glucometrics and performance standards. PMID:24876426
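As an illustration of the kind of glucometrics such a reporting center computes, here is a minimal sketch (the metric definitions and names are assumptions, not the SHM tool's actual implementation):

```python
# Illustrative sketch: two common glucometrics computed per patient-day
# from point-of-care blood glucose readings.
from collections import defaultdict

def glucometrics(readings, hypo_mg_dl=70):
    """readings: iterable of (patient_id, day, glucose_mg_dl) tuples."""
    days = defaultdict(list)
    for patient, day, value in readings:
        days[(patient, day)].append(value)
    n_days = len(days)
    # a patient-day counts as hypoglycemic if any reading falls below threshold
    hypo_days = sum(any(v < hypo_mg_dl for v in vals) for vals in days.values())
    mean_glucose = sum(sum(v) / len(v) for v in days.values()) / n_days
    return {"patient_days": n_days,
            "hypo_day_rate": hypo_days / n_days,
            "mean_glucose": mean_glucose}

sample = [("A", 1, 110), ("A", 1, 65), ("A", 2, 150), ("B", 1, 190)]
print(glucometrics(sample))
```

Grouping by patient-day before averaging, as here, prevents heavily sampled patients from dominating the unit-level summary, which is one reason standardized glucometrics definitions matter for benchmarking.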

  1. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
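A greedy forward-selection loop of the kind benchmarked above can be sketched as follows (illustrative only; the paper's exact procedure, scoring function, and stopping rule may differ):

```python
# Minimal greedy forward selection: repeatedly add the variable that most
# improves a user-supplied score (e.g. cross-validated performance of a
# random forest fitted on the chosen columns), stopping when the gain is small.

def forward_selection(variables, score, min_gain=1e-6):
    selected = []
    remaining = list(variables)
    best = float("-inf")
    while remaining:
        gains = [(score(selected + [v]), v) for v in remaining]
        new_best, candidate = max(gains, key=lambda t: t[0])
        if new_best - best < min_gain:
            break  # no candidate improves the score enough
        selected.append(candidate)
        remaining.remove(candidate)
        best = new_best
    return selected

# Toy score: reward picking "x1" and "x2", with a small penalty per column.
toy = lambda cols: len({"x1", "x2"} & set(cols)) - 0.1 * len(cols)
print(forward_selection(["x1", "x2", "x3"], toy))  # -> ['x1', 'x2']
```

In practice `score` would wrap model fitting and validation, which is why forward selection is far more expensive than univariate filtering, but, per the abstract, considerably more effective.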

  2. Concrete reflected cylinders of highly enriched solutions of uranyl nitrate ICSBEP Benchmark: A re-evaluation by means of MCNPX using ENDF/B-VI cross section library

    International Nuclear Information System (INIS)

    Cruzate, J.A.; Carelli, J.L.

    2011-01-01

    This work presents a theoretical re-evaluation of a set of original experiments included in the 2009 issue of the International Handbook of Evaluated Criticality Safety Benchmark Experiments as “Concrete Reflected Cylinders of Highly Enriched Solutions of Uranyl Nitrate” (identification number: HEU-SOL-THERM-002) [4]. The present evaluation has been made according to the benchmark specifications [4], with added data taken from the original published report [3], but applying a different approach, resulting in a more realistic calculation model. In addition, calculations have been made using the latest version of the MCNPX Monte Carlo code combined with an updated set of cross-section data, the continuous-energy ENDF/B-VI library. This has resulted in a comprehensive model of the given experimental situation. The uncertainty analysis has been based on the evaluation of experimental data presented in the HEU-SOL-THERM-002 report. Calculations with the present improved physical model reproduce the criticality of the configurations within 0.5%, in good agreement with the experimental data. Results obtained in the analysis of uncertainties are in general agreement with those in the HEU-SOL-THERM-002 benchmark document. Qualitative results from the analyses made in the present work can be extended to similar fissile systems: well-moderated units of 235U solutions reflected with concrete from all directions. The results confirm that neutron absorbers, even as impurities, must be taken into account in calculations if at least their approximate proportions are known. (authors)
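Agreement "within 0.5%" in such re-evaluations is typically expressed as a calculated-to-experimental (C/E) ratio; a minimal sketch with made-up values:

```python
# Illustrative sketch: C/E comparison of a calculated k_eff against the
# benchmark value, with a tolerance matching the ~0.5% agreement quoted above.

def c_over_e(k_calc: float, k_benchmark: float) -> float:
    return k_calc / k_benchmark

def agrees(k_calc: float, k_benchmark: float, tol: float = 0.005) -> bool:
    """True if |C/E - 1| is within tol (0.5% by default)."""
    return abs(c_over_e(k_calc, k_benchmark) - 1.0) <= tol

print(agrees(0.9990, 1.0000))  # 0.10% deviation -> True
print(agrees(1.0080, 1.0000))  # 0.80% deviation -> False
```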

  3. Dose Rate Experiment at JET for Benchmarking the Calculation Direct One Step Method

    International Nuclear Information System (INIS)

    Angelone, M.; Petrizzi, L.; Pillon, M.; Villari, R.; Popovichev, S.

    2006-01-01

    Neutrons produced by D-D and D-T plasmas induce activation of tokamak materials and components. The development of reliable methods to assess dose rates is a key issue for maintaining and operating nuclear machines, in normal and off-normal conditions. In the frame of the EFDA Fusion Technology work programme, a computational tool based on the MCNP Monte Carlo code has been developed to predict the dose rate after shutdown: the Direct One Step Method (D1S). The D1S is an innovative approach in which the decay gammas are coupled to the neutrons, as in the prompt case, and are transported in a single step in the same run. Benchmarking this new tool against experimental data taken in a complex geometry like that of a tokamak is a fundamental step in testing the reliability of the D1S method. A dedicated benchmark experiment was proposed for the 2005-2006 experimental campaign of JET. Two irradiation positions were selected for the benchmark: an inner position inside the vessel, not far from the plasma, called the 2 Upper irradiation end (IE2), where the neutron fluence is relatively high; and a second position just outside a vertical port in an external position (EX), where the neutron flux is lower and the dose rate to be measured is not far above the residual background. Passive detectors are used for the in-vessel measurements: high-sensitivity thermoluminescent dosimeters (TLDs), GR-200A (natural LiF), which allow measurements down to environmental dose levels. An active Geiger-Muller (GM) detector is used for the out-of-vessel dose rate measurement. Before use, the detectors were calibrated in a secondary gamma-ray standard (Cs-137 and Co-60) facility in terms of air kerma. The background measurement was carried out from July to September 2005 in the outside position EX using the GM tube, and in September 2005 inside the vacuum vessel using TLD detectors located in the 2 Upper irradiation end IE2. In the present work

  4. Investigation of the Buckling-Reactivity Conversion Coefficient using SRAC and MVP codes for UO2 Lattices in TCA experiments

    International Nuclear Information System (INIS)

    Le Dai Dien

    2008-01-01

    In benchmark experiments for the International Reactor Physics Benchmark Experiments (IRPhE) Project carried out at TCA, the temperature effects on reactivity were studied for light-water-moderated and -reflected UO2 cores with and without soluble poisons. The buckling coefficient method, using the measured critical water levels, was proposed by Suzaki et al. The temperature dependence of the buckling coefficient of reactivity, and its variation with the core configurations of the benchmark experiments, was investigated using SRAC and MVP calculations. The SRAC and MVP calculations show that the K-value can be taken as an average value only within each core, with temperature changes treated as a perturbation parameter. The difference between our calculations and the benchmark results, which use a constant K-value for all cores, shows that the results depend on the K-value and that it plays an important role in defining the reactivity effect using the water-level-worth method. (author)
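The relation underlying the buckling coefficient method can be sketched as follows (notation assumed for illustration, not taken from the benchmark report): a reactivity change is converted from the change in geometric buckling inferred from the measured critical water level,

```latex
% Assumed notation: K is the buckling-reactivity conversion coefficient,
% B^2 the geometric buckling, H_c the measured critical water level, and
% \lambda_z the axial extrapolation length of the core.
\Delta\rho \approx K\,\Delta B^{2},
\qquad
B_z^{2} = \left( \frac{\pi}{H_c + 2\lambda_z} \right)^{2}
```

Since the axial buckling depends only on the critical water level, a temperature-dependent K translates measured level changes directly into reactivity effects, which is why the constancy of K across cores matters in the abstract's comparison.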

  5. Critical experiments on low enriched uranyl nitrate solution with STACY

    International Nuclear Information System (INIS)

    Miyoshi, Yoshinori

    1996-01-01

    Since STACY started steady operation, systematic criticality data on low-enriched uranyl nitrate solution systems have been accumulated. The main experimental parameters for the cylindrical tank of 60 cm in diameter were the uranium concentration and the reflector condition. Basic data on a simple geometry will be helpful for validating the standard criticality safety codes and for evaluating the safety margin included in criticality designs. Experiments on the reactivity effects of structural materials such as borated concrete and polyethylene are scheduled for next year as the second series of experiments using 10 wt% enriched uranyl solution. Furthermore, neutron interaction experiments with two slab tanks will be performed to investigate the fundamental properties of neutron interaction effects between core tanks. These data will be useful for constructing more reasonable calculation models and for evaluating the safety margin in criticality designs for multiple-unit systems. (J.P.N.)

  6. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-01-01

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community

  7. Critical experiments for large scale enriched uranium solution handling

    International Nuclear Information System (INIS)

    Tanner, J.E.; Forehand, H.M.

    1985-01-01

    The authors have performed 17 critical experiments with a concentrated aqueous uranyl nitrate solution contained in an annular cylindrical tank, with annular cylindrical absorbers of stainless steel and/or polyethylene inside. The k_eff values calculated by KENO IV, employing 16-group Hansen-Roach cross sections, average 0.977. There is a variation of the calculational bias among the separate experiments, but it is too small to be assigned to specific components of the equipment. The authors are now performing critical experiments with a more concentrated uranyl nitrate solution in pairs of very squat cylindrical tanks with disc-shaped absorbers and reflectors of carbon steel, stainless steel, Nitronic-50, and plain and borated polyethylene. These experiments are in support of upgrading fuel reprocessing at the Idaho Chemical Processing Plant

  8. Report on the benchmark of products & processes and ranking of cruciality and criticity

    DEFF Research Database (Denmark)

    Islam, Aminul

    The objective of this deliverable is to present the results of benchmarking activities for each COTECH demonstrator and its planned production process. Each section is dedicated to a demonstrator mentioned below: Section 1 Innovative accommodable intra-ocular lens (BI) Section 2 Cheap substrat...... Micro socket for signal carriage of hearing aid instruments (SONION) Section 8 Micro-cooling of electronic components (ATHERM)...

  9. Test One to Test Many: A Unified Approach to Quantum Benchmarks

    Science.gov (United States)

    Bai, Ge; Chiribella, Giulio

    2018-04-01

    Quantum benchmarks are routinely used to validate the experimental demonstration of quantum information protocols. Many relevant protocols, however, involve an infinite set of input states, of which only a finite subset can be used to test the quality of the implementation. This is a problem, because the benchmark for the finitely many states used in the test can be higher than the original benchmark calculated for infinitely many states. This situation arises in the teleportation and storage of coherent states, for which the benchmark of 50% fidelity is commonly used in experiments, although finite sets of coherent states normally lead to higher benchmarks. Here, we show that the average fidelity over all coherent states can be indirectly probed with a single setup, requiring only two-mode squeezing, a 50-50 beam splitter, and homodyne detection. Our setup enables a rigorous experimental validation of quantum teleportation, storage, amplification, attenuation, and purification of noisy coherent states. More generally, we prove that every quantum benchmark can be tested by preparing a single entangled state and measuring a single observable.

  10. International integral experiments databases in support of nuclear data and code validation

    International Nuclear Information System (INIS)

    Briggs, J. Blair; Gado, Janos; Hunter, Hamilton; Kodeli, Ivan; Salvatores, Massimo; Sartori, Enrico

    2002-01-01

    The OECD/NEA Nuclear Science Committee (NSC) has identified the need to establish international databases containing all the important experiments that are available for sharing among the specialists. The NSC has set up or sponsored specific activities to achieve this. The aim is to preserve them in an agreed standard format in computer accessible form, to use them for international activities involving validation of current and new calculational schemes including computer codes and nuclear data libraries, for assessing uncertainties, confidence bounds and safety margins, and to record measurement methods and techniques. The databases so far established or in preparation related to nuclear data validation cover the following areas: SINBAD - A Radiation Shielding Experiments database encompassing reactor shielding, fusion blanket neutronics, and accelerator shielding. ICSBEP - International Criticality Safety Benchmark Experiments Project Handbook, with more than 2500 critical configurations with different combination of materials and spectral indices. IRPhEP - International Reactor Physics Experimental Benchmarks Evaluation Project. The different projects are described in the following including results achieved, work in progress and planned. (author)

  11. Critical heat flux experiments in tight lattice core

    Energy Technology Data Exchange (ETDEWEB)

    Kureta, Masatoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2002-12-01

    Fuel rods of the Reduced-Moderation Water Reactor (RMWR) are designed in tight lattices so as to reduce moderation and achieve a higher conversion ratio. For the BWR-type reactor, the coolant flow rate is reduced compared with existing BWRs, so the average void fraction becomes larger. In order to evaluate the thermal-hydraulic characteristics of the designed cores, critical heat flux experiments in a tight-lattice core have been conducted using simulated high-pressure coolant loops with seven-fuel-rod bundles for both the PWR and the BWR. Experimental data on the critical heat flux for full bundles have been accumulated and applied to assess the critical power of the designed cores using existing codes. The evaluated results are conservative enough to satisfy the limiting condition. Further experiments on axial power distribution effects and 37-fuel-rod bundle tests will be performed to validate the thermal-hydraulic characteristics of the designed cores. (T. Tanaka)

  12. Critical heat flux experiments in tight lattice core

    International Nuclear Information System (INIS)

    Kureta, Masatoshi

    2002-01-01

    Fuel rods of the Reduced-Moderation Water Reactor (RMWR) are designed in tight lattices so as to reduce moderation and achieve a higher conversion ratio. For the BWR-type reactor, the coolant flow rate is reduced compared with existing BWRs, so the average void fraction becomes larger. In order to evaluate the thermal-hydraulic characteristics of the designed cores, critical heat flux experiments in a tight-lattice core have been conducted using simulated high-pressure coolant loops with seven-fuel-rod bundles for both the PWR and the BWR. Experimental data on the critical heat flux for full bundles have been accumulated and applied to assess the critical power of the designed cores using existing codes. The evaluated results are conservative enough to satisfy the limiting condition. Further experiments on axial power distribution effects and 37-fuel-rod bundle tests will be performed to validate the thermal-hydraulic characteristics of the designed cores. (T. Tanaka)

  13. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence their technical efficiency.

  14. Basic experiments of reactor physics using the critical assembly TCA

    International Nuclear Information System (INIS)

    Obara, Toru; Igashira, Masayuki; Sekimoto, Hiroshi; Nakajima, Ken; Suzaki, Takenori.

    1994-02-01

    This report is based on lectures given to graduate students of Tokyo Institute of Technology. It covers educational experiments conducted with the Tank-Type Critical Assembly (TCA) at Japan Atomic Energy Research Institute in July, 1993. During this period, the following basic experiments on reactor physics were performed: (1) Critical approach experiment, (2) Measurement of neutron flux distribution, (3) Measurement of power distribution, (4) Measurement of fuel rod worth distribution, (5) Measurement of safety sheet worth by the rod drop method. The principle of experiments, experimental procedure, and analysis of results are described in this report. (author)

  15. Comparison of results from the MCNP criticality validation suite using ENDF/B-VI and preliminary ENDF/B-VII nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Mosteller, R. D. (Russell D.)

    2004-01-01

    The MCNP Criticality Validation Suite is a collection of 31 benchmarks taken from the International Handbook of Evaluated Criticality Safety Benchmark Experiments. MCNP5 calculations clearly demonstrate that, overall, nuclear data from a preliminary version of ENDF/B-VII produce better agreement with the benchmarks in the suite than do the corresponding data from ENDF/B-VI. Additional calculations identify areas where improvements in the data are still needed. Based on results for the MCNP Criticality Validation Suite, the pre-ENDF/B-VII nuclear data produce substantially better overall results than their ENDF/B-VI counterparts. The calculated values of k_eff for bare metal spheres and for an IEU cylinder reflected by normal uranium are in much better agreement with the benchmark values. In addition, the values of k_eff for the bare metal spheres are much more consistent with those for the corresponding metal spheres reflected by normal uranium or water. Furthermore, a long-standing controversy about the need for an ad hoc adjustment to the 238U resonance integral for thermal systems may finally be resolved. On the other hand, improvements are still needed in a number of areas, including intermediate-energy cross sections for 235U, angular distributions for elastic scattering in deuterium, and fast cross sections for 237Np.
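Suite-wide comparisons of this kind are often summarized by the mean and spread of the per-benchmark k_eff deviations; a sketch with hypothetical numbers (the actual 31-benchmark results are in the report):

```python
# Illustrative sketch: summarize a validation suite by the mean and standard
# deviation of the per-benchmark (C - E) deviations in k_eff, in pcm.
from statistics import mean, stdev

def suite_bias(deviations_pcm):
    """deviations_pcm: per-benchmark (k_calc - k_benchmark) * 1e5 values."""
    return mean(deviations_pcm), stdev(deviations_pcm)

endf_b6 = [-450, -120, 300, -600, 150]  # hypothetical ENDF/B-VI deltas
endf_b7 = [-80, 40, 120, -60, 30]       # hypothetical pre-ENDF/B-VII deltas
print(suite_bias(endf_b6))
print(suite_bias(endf_b7))
```

A data library that is "better overall" in the abstract's sense shows both a mean deviation closer to zero and a smaller spread across the suite.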

  16. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  17. Continuous Automated Model EvaluatiOn (CAMEO) complementing the critical assessment of structure prediction in CASP12.

    Science.gov (United States)

    Haas, Jürgen; Barbato, Alessandro; Behringer, Dario; Studer, Gabriel; Roth, Steven; Bertoni, Martino; Mostaguir, Khaled; Gumienny, Rafal; Schwede, Torsten

    2018-03-01

    Every second year, the community experiment "Critical Assessment of Techniques for Structure Prediction" (CASP) conducts an independent blind assessment of structure prediction methods, providing a framework for comparing the performance of different approaches and discussing the latest developments in the field. Yet developers of automated computational modeling methods clearly benefit from more frequent evaluations based on larger sets of data. The "Continuous Automated Model EvaluatiOn (CAMEO)" platform complements the CASP experiment by conducting fully automated blind prediction assessments based on the weekly pre-release of the sequences of those structures that are going to be published in the next release of the Protein Data Bank (PDB). CAMEO publishes weekly benchmarking results based on models collected during a 4-day prediction window, on average assessing ca. 100 targets during a time frame of 5 weeks. CAMEO benchmarking data are generated consistently for all participating methods at the same point in time, enabling developers to benchmark and cross-validate their method's performance and to refer directly to the benchmarking results in publications. In order to facilitate server development and promote shorter release cycles, CAMEO sends a weekly email with submission statistics and low-performance warnings. Many participants of CASP have successfully employed CAMEO when preparing their methods for upcoming community experiments. CAMEO offers a variety of scores to allow benchmarking of diverse aspects of structure prediction methods. By introducing new scoring schemes, CAMEO facilitates new development in areas of active research, for example, modeling quaternary structure, complexes, or ligand binding sites. © 2017 Wiley Periodicals, Inc.

  18. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  19. Description and exploitation of benchmarks involving 149Sm, a fission product taking part in the burnup credit in spent fuels

    Energy Technology Data Exchange (ETDEWEB)

    Anno, J.; Poullot, G. [CEA Centre d'Etudes de Fontenay-aux-Roses, 92 (France). Inst. de Protection et de Surete Nucleaire; Fouillaud, P.; Grivot, P. [CEA Centre d'Etudes de Valduc, 21 - Is-sur-Tille (France)

    1995-12-31

    Up to now, there has been no benchmark to validate the fission product (FP) cross sections in criticality safety calculations. The Institute for Protection and Nuclear Safety (IPSN) has begun an experimental program on 6 FPs (103Rh, 133Cs, 143Nd, 149Sm, 152Sm, and 155Gd, daughter of 155Eu) which alone give a decrease in reactivity equal to half that of all FPs in spent fuels (except Xe and I). Presented here are the experiments with 149Sm and the results obtained with the APOLLO I-MORET III calculation codes. Eleven experiments were carried out in a 3.5 l zircaloy tank containing slightly nitric acid solutions of samarium (96.9 wt% 149Sm) at concentrations of 0.1048, 0.2148 and 0.6262 g/l. The tank was placed in the middle of arrays of UO2 rods (4.742 wt% 235U) at a square pitch of 13 mm. The underwater height of the rods is the critical parameter. In addition, 7 experiments were performed with the same apparatus with water and boron, proving good experimental representativeness and good accuracy of the calculations. As the reactivity worth of the Sm tank is between 2000 and 6000 pcm (10^-5), the benchmarks are well representative, and the cumulative absorption ratios show that 149Sm is well qualified below 1 eV. (authors). 8 refs., 7 figs., 6 tabs.

  20. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  1. IAEA coordinated research project (CRP) on 'Analytical and experimental benchmark analyses of accelerator driven systems'

    Energy Technology Data Exchange (ETDEWEB)

    Abanades, Alberto [Universidad Politecnica de Madrid (Spain); Aliberti, Gerardo; Gohar, Yousry; Talamo, Alberto [ANL, Argonne (United States); Bornos, Victor; Kiyavitskaya, Anna [Joint Institute of Power Eng. and Nucl. Research 'Sosny', Minsk (Belarus); Carta, Mario [ENEA, Casaccia (Italy); Janczyszyn, Jerzy [AGH-University of Science and Technology, Krakow (Poland); Maiorino, Jose [IPEN, Sao Paulo (Brazil); Pyeon, Cheolho [Kyoto University (Japan); Stanculescu, Alexander [IAEA, Vienna (Austria); Titarenko, Yury [ITEP, Moscow (Russian Federation); Westmeier, Wolfram [Wolfram Westmeier GmbH, Ebsdorfergrund (Germany)

    2008-07-01

    In December 2005, the International Atomic Energy Agency (IAEA) started a Coordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems'. The overall objective of the CRP, performed within the framework of the Technical Working Group on Fast Reactors (TWGFR) of the IAEA's Nuclear Energy Department, is to increase the capability of interested Member States in developing and applying advanced reactor technologies in the area of long-lived radioactive waste utilization and transmutation. The specific objective of the CRP is to improve the present understanding of the coupling of an external neutron source (e.g. a spallation source) with a multiplicative subcritical core. The participants are performing computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. The CRP aims at integrating some of the planned experimental demonstration projects of the coupling between a subcritical core and an external neutron source (e.g. YALINA Booster in Belarus, and Kyoto University's Critical Assembly (KUCA)). The objective of these experimental programs is to validate computational methods, obtain high-energy nuclear data, characterize the performance of subcritical assemblies driven by external sources, and develop and improve techniques for subcriticality monitoring. The paper summarizes preliminary results obtained to date for some of the CRP benchmarks. (authors)

  2. Summary Report of Consultants' Meeting on Accuracy of Experimental and Theoretical Nuclear Cross-Section Data for Ion Beam Analysis and Benchmarking

    International Nuclear Information System (INIS)

    Abriola, Daniel; Dimitriou, Paraskevi; Gurbich, Alexander F.

    2013-11-01

    A summary is given of a Consultants' Meeting assembled to assess the accuracy of experimental and theoretical nuclear cross-section data for Ion Beam Analysis and the role of benchmarking experiments. The participants discussed the different approaches to assigning uncertainties to evaluated data, and presented results of benchmark experiments performed in their laboratories. They concluded that priority should be given to the validation of cross-section data by benchmark experiments, and recommended that an experts meeting be held to prepare the guidelines, methodology and work program of a future coordinated project on benchmarking.

  3. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and to illustrate how the project's systematic implementation led to success.

  4. Criticality calculations in reactor accelerator coupling experiment (RACE)

    International Nuclear Information System (INIS)

    Reda, M.A.; Spaulding, R.; Hunt, A.; Harmon, J.F.; Beller, D.E.

    2005-01-01

    A Reactor Accelerator Coupling Experiment (RACE) is to be performed at the Idaho State University Idaho Accelerator Center (IAC). The electron accelerator is used to generate neutrons by inducing Bremsstrahlung photoneutron reactions in a tungsten-copper target. This accelerator/target system produces a source of ∼10¹² n/s, which can initiate fission reactions in the subcritical system. This coupling experiment between a 40-MeV electron accelerator and a subcritical system will allow us to predict and measure coupling efficiency, reactivity, and multiplication. In this paper, the results of the criticality and multiplication calculations, carried out using the Monte Carlo radiation transport code MCNPX for different coupling design options, are presented. The fuel plate arrangements and the surrounding tank dimensions have been optimized. Criticality using graphite instead of water as reflector/moderator outside of the core region has been studied. The RACE configuration at the IAC will have a criticality (k-effective) of about 0.92 and a multiplication of about 10. (authors)
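    The point-model relation between k-effective and neutron multiplication can be sketched as follows. Note this is an illustration, not the RACE methodology: the abstract's figure of about 10 may use a different multiplication convention (e.g. fission-source rather than net multiplication), so the simple formula below gives only a comparable order of magnitude.

    ```python
    def net_multiplication(k_eff):
        """Net neutron multiplication M = 1 / (1 - k_eff) for a
        source-driven subcritical system (point-model approximation)."""
        if not 0.0 <= k_eff < 1.0:
            raise ValueError("k_eff must be in [0, 1) for a subcritical system")
        return 1.0 / (1.0 - k_eff)

    # for the RACE design value k_eff ~ 0.92:
    print(round(net_multiplication(0.92), 1))  # 12.5
    ```

    The divergence of M as k_eff approaches 1 is why even modest uncertainties in k-effective matter for source-driven experiments.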

  5. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks are an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  6. MCNP™ criticality primer and training experiences

    International Nuclear Information System (INIS)

    Briesmeister, J.; Forster, R.A.; Busch, R.

    1995-01-01

    With the closure of many experimental facilities, the nuclear criticality safety analyst is increasingly required to rely on computer calculations to identify safe limits for the handling and storage of fissile materials. However, the analyst may have little experience with the specific codes available at his or her facility. Usually, the codes are quite complex 'black boxes' capable of analyzing numerous problems with a myriad of input options. Documentation for these codes is designed to cover all the possible configurations and types of analyses but does not give much detail on any particular type of analysis. For criticality calculations, the user of a code is primarily interested in the value of the effective multiplication factor for a system (k_eff). Most codes will provide this, along with a great deal of other information that may be less pertinent to criticality calculations. Based on discussions with code users in the nuclear criticality safety community, it was decided that a simple document discussing the ins and outs of criticality calculations with specific codes would be quite useful. The Transport Methods Group, XTM, at Los Alamos National Laboratory (LANL) decided to develop a primer for criticality calculations with their Monte Carlo code, MCNP. This was a joint task between LANL, with a knowledge and understanding of the nuances and capabilities of MCNP, and the University of New Mexico, with a knowledge and understanding of nuclear criticality safety calculations and of educating first-time users of neutronics calculations. The initial problem was that the MCNP manual simply contained too much information. Almost everything one needs to know about MCNP can be found in the manual; the problem is that there is more information than a user requires to do a simple k_eff calculation. The basic concept of the primer was to distill the manual to create a document whose only focus was criticality calculations using MCNP.
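    The generation-based iteration behind a Monte Carlo k_eff calculation can be illustrated with a toy model. This is a sketch only: all transport physics is stripped out, and the parameters and function name are invented for illustration; it merely mimics the inactive/active cycle structure used by production codes such as MCNP.

    ```python
    import random

    def toy_keff(p_fission, nu, n_start=5000, inactive=10, active=40, seed=1):
        """Toy generation-based Monte Carlo estimate of k_eff.

        Each neutron causes fission with probability p_fission and emits
        nu neutrons per fission on average, so the true k_eff is
        p_fission * nu.  k_eff is estimated as the mean, over active
        cycles, of the offspring-to-parent population ratio.
        """
        rng = random.Random(seed)
        k_estimates = []
        for cycle in range(inactive + active):
            offspring = 0
            for _ in range(n_start):
                if rng.random() < p_fission:
                    # sample an integer number of fission neutrons with mean nu
                    offspring += int(nu) + (1 if rng.random() < nu - int(nu) else 0)
            if cycle >= inactive:  # discard inactive (source-convergence) cycles
                k_estimates.append(offspring / n_start)
        return sum(k_estimates) / len(k_estimates)

    # true k_eff = 0.38 * 2.43 = 0.9234; the estimate converges toward it
    k = toy_keff(p_fission=0.38, nu=2.43)
    print(round(k, 2))
    ```

    Real codes additionally resample fission sites in space each cycle; the statistical structure of the k_eff estimate, however, is the same cycle-averaged ratio shown here.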

  7. Educational reactor-physics experiments with the critical assembly TCA

    Energy Technology Data Exchange (ETDEWEB)

    Tsutsui, Hiroaki; Okubo, Masaaki; Igashira, Masayuki [Tokyo Inst. of Tech. (Japan); Horiki, Oichiro; Suzaki, Takenori

    1997-10-01

    The Tank-Type Critical Assembly (TCA) of the Japan Atomic Energy Research Institute is a research facility for light water reactor physics. In the present report, the lectures given to the graduate students of the Tokyo Institute of Technology who participated in the educational experiment course held on 26-30 August at TCA are rearranged to provide useful information for those who will implement educational basic experiments with TCA in the future. This report describes the principles, procedures, and data analyses for (1) critical approach and exponential experiment, (2) measurement of neutron flux distribution, (3) measurement of power distribution, (4) measurement of fuel rod worth distribution, and (5) measurement of safety plate worth by the rod drop method. (author)
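    The critical-approach exercise listed first is conventionally an inverse-multiplication (1/M) extrapolation: detector count rates are recorded at successive fuel loadings, normalized to the first loading, and 1/M is extrapolated linearly to zero to predict the critical loading. A minimal sketch with invented count-rate data (not TCA measurements) is:

    ```python
    # hypothetical data: (number of fuel rods loaded, detector counts per second)
    loadings = [100, 140, 180, 220]
    counts = [250.0, 420.0, 850.0, 2600.0]

    # inverse multiplication, normalised to the first (reference) loading
    inv_m = [counts[0] / c for c in counts]

    # linear extrapolation of 1/M -> 0 using the last two points,
    # the usual conservative estimate during an approach to critical
    (x1, y1), (x2, y2) = (loadings[-2], inv_m[-2]), (loadings[-1], inv_m[-1])
    slope = (y2 - y1) / (x2 - x1)
    critical_loading = x2 - y2 / slope
    print(f"predicted critical loading: {critical_loading:.0f} rods")
    ```

    In practice the extrapolation is repeated after every loading step, and the next fuel addition is kept well below the predicted remaining margin to critical.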

  8. Educational reactor-physics experiments with the critical assembly TCA

    International Nuclear Information System (INIS)

    Tsutsui, Hiroaki; Okubo, Masaaki; Igashira, Masayuki; Horiki, Oichiro; Suzaki, Takenori.

    1997-10-01

    The Tank-Type Critical Assembly (TCA) of the Japan Atomic Energy Research Institute is a research facility for light water reactor physics. In the present report, the lectures given to the graduate students of the Tokyo Institute of Technology who participated in the educational experiment course held on 26-30 August at TCA are rearranged to provide useful information for those who will implement educational basic experiments with TCA in the future. This report describes the principles, procedures, and data analyses for 1) critical approach and exponential experiment, 2) measurement of neutron flux distribution, 3) measurement of power distribution, 4) measurement of fuel rod worth distribution, and 5) measurement of safety plate worth by the rod drop method. (author)

  9. Reactor Dynamics Experiments with a Sub-Critical Assembly

    International Nuclear Information System (INIS)

    Miley, G.H.; Yang, Y.; Wu, L.; Momota, H.

    2004-01-01

    A resurgence in the use of nuclear power is now underway worldwide. However, due to the shutdown of many university research reactors, student laboratories must rely more heavily on the use of sub-critical assemblies. Here a driven sub-critical assembly is described that uses a cylindrical Inertial Electrostatic Confinement (IEC) device to provide a fusion neutron source. The small IEC neutron source would be inserted in a fuel element position, with its power input controlled externally at a control panel. This feature opens the way to use of the sub-critical assembly for a number of transient experiments, such as sub-critical pulsing and neutron wave propagation. That in turn adds important new insights and excitement for the student teaching laboratory.

  10. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  11. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  12. Assessment of data-assisted prediction by inclusion of crosslinking/mass-spectrometry and small angle X-ray scattering data in the 12th Critical Assessment of protein Structure Prediction experiment.

    Science.gov (United States)

    Tamò, Giorgio E; Abriata, Luciano A; Fonti, Giulia; Dal Peraro, Matteo

    2018-03-01

    Integrative modeling approaches attempt to combine experiments and computation to derive structure-function relationships in complex molecular assemblies. Despite their importance for the advancement of the life sciences, benchmarking of existing methodologies is rather poor. The 12th round of the Critical Assessment of protein Structure Prediction (CASP) offered a unique niche to benchmark data and methods from two kinds of experiments often used in integrative modeling, namely residue-residue contacts obtained through crosslinking/mass-spectrometry (CLMS) and small-angle X-ray scattering (SAXS) experiments. Upon assessment of the models submitted by predictors for 3 targets assisted by CLMS data and 11 targets assisted by SAXS data, we observed no significant improvement compared to the best data-blind models, although most predictors did improve relative to their own data-blind predictions. Only for target Tx892 of the CLMS-assisted category and target Ts947 of the SAXS-assisted category was there a net, albeit mild, improvement relative to the best data-blind predictions. We discuss here possible reasons for the relatively poor success, which point to inconsistencies in the data sources rather than in the methods, to which a few groups were less sensitive. We conclude with suggestions that could improve the potential of data integration in future CASP rounds in terms of experimental data production, methods development, data management and prediction assessment. © 2017 Wiley Periodicals, Inc.

  13. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mausolff, Zander [Univ. of Florida, Gainesville, FL (United States); Weems, Zach [Univ. of Florida, Gainesville, FL (United States); Popp, Dustin [Univ. of Florida, Gainesville, FL (United States); Smith, Kristin [Univ. of Florida, Gainesville, FL (United States); Shriver, Forrest [Univ. of Florida, Gainesville, FL (United States); Goluoglu, Sedat [Univ. of Florida, Gainesville, FL (United States); Prince, Zachary [Texas A & M Univ., College Station, TX (United States); Ragusa, Jean [Texas A & M Univ., College Station, TX (United States)

    2016-08-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validation of the transient solution of Rattlesnake using other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organisation for Economic Co-operation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considers both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.
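    The qualitative behaviour of such a transient can be sketched with one-delayed-group point kinetics. This is an illustration only: the parameters, time step, and reactivity history below are invented and are not taken from the NEA benchmark specification.

    ```python
    # one-delayed-group point-kinetics sketch of a negative reactivity
    # step that is later withdrawn (all parameters are illustrative)
    beta = 0.0065      # delayed neutron fraction
    lam = 0.08         # precursor decay constant (1/s)
    Lambda = 1.0e-5    # neutron generation time (s)
    dt, t_end = 1.0e-5, 2.0

    def rho(t):
        # -50 cents inserted at t = 0, removed at t = 1 s
        return -0.5 * beta if t < 1.0 else 0.0

    p = 1.0                        # normalized power
    c = beta * p / (lam * Lambda)  # precursors at equilibrium
    t = 0.0
    while t < t_end:               # explicit Euler integration
        dp = ((rho(t) - beta) / Lambda) * p + lam * c
        dc = (beta / Lambda) * p - lam * c
        p += dt * dp
        c += dt * dc
        t += dt

    # power drops promptly, partially recovers when the reactivity is
    # removed, and settles below its initial value
    print(round(p, 3))
    ```

    Even this crude model reproduces the prompt drop, delayed-neutron holdback, and recovery toward critical that the benchmark transients are designed to exercise in full transport solvers.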

  14. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Mausolff, Zander; Weems, Zach; Popp, Dustin; Smith, Kristin; Shriver, Forrest; Goluoglu, Sedat; Prince, Zachary; Ragusa, Jean

    2016-01-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validation of the transient solution of Rattlesnake using other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organisation for Economic Co-operation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considers both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.

  15. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been