Reactor critical benchmark calculations for burnup credit applications
International Nuclear Information System (INIS)
Renier, J.P.; Parks, C.V.
1990-01-01
In the criticality safety analyses for the development and certification of spent fuel casks, the current approach requires the assumption of "fresh fuel" isotopics. It has been shown that removing the "fresh fuel" assumption and using spent fuel isotopics ("burnup credit") greatly increases the payload of spent fuel casks by reducing the reactivity of the fuel. Regulatory approval of burnup credit and the requirements of ANSI/ANS-8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. Criticality analyses of low-enriched lattices of fuel pins using the "fresh fuel" isotopics assumption have been widely benchmarked against applicable critical experiments. However, the same computational methods have not been benchmarked against criticals containing spent fuel, because no spent fuel critical experiments exist. Commercial reactors offer an excellent and inexhaustible source of critical configurations against which criticality analyses of spent fuel configurations can be benchmarked. This document provides brief descriptions of the benchmarks and of the computational methods used for the criticality analyses. 8 refs., 1 fig., 1 tab.
Benchmark calculations of the solution-fuel criticality experiments by SRAC code system
International Nuclear Information System (INIS)
Senuma, Ichiro; Miyoshi, Yoshinori; Suzaki, Takenori; Kobayashi, Iwao
1984-06-01
Benchmark calculations were performed using the newly developed SRAC (Standard Reactor Analysis Code) system and a nuclear data library based upon JENDL-2. The 34 benchmarks cover a variety of compositions, concentrations and configurations of homogeneous Pu and homogeneous U/Pu systems (mainly nitrates), and also include UO2/PuO2 rods in fissile solution: a simplified model of the dissolver process of a fuel reprocessing plant. The calculated results show good agreement with the Monte Carlo method. This code-evaluation work was done as part of the detailed design of CSEF (Critical Safety Experimental Facility), which is now in progress. (author)
Validation of VHTRC calculation benchmark of critical experiment using the MCB code
Directory of Open Access Journals (Sweden)
Stanisz Przemysław
2016-01-01
The calculation benchmark problem Very High Temperature Reactor Critical (VHTRC), a pin-in-block type core critical assembly, has been investigated with the Monte Carlo Burnup (MCB) code in order to validate the latest versions of nuclear data libraries in ENDF format. The benchmark was executed on the basis of the VHTRC benchmark available from the International Handbook of Evaluated Reactor Physics Benchmark Experiments. This benchmark is useful for quantifying the discrepancies in keff values between various libraries and experimental values, which helps improve the accuracy of the neutron transport calculations needed to design high-performance commercial VHTRs. Almost all safety parameters depend on the accuracy of neutron transport calculation results, which in turn depend on the accuracy of the nuclear data libraries; evaluating the applicability of the libraries to VHTR modelling is therefore an important subject. We compared the numerical results with experimental measurements using two versions of available nuclear data (ENDF/B-VII.1 and JEFF-3.2) prepared for the required temperatures. Calculations were performed with the MCB code, which provides a very precise representation of the complex VHTR geometry, including the double heterogeneity of the fuel element. In this paper, together with the impact of nuclear data, we also discuss the impact of different lattice modelling inside the fuel pins. The keff results agree with each other and with the experimental data within the 1 σ range of the experimental uncertainty. Because some propagated discrepancies were observed, we propose corrections to the experimental constants which can improve the reactivity coefficient dependency. The obtained results confirm the accuracy of the new nuclear data libraries.
OECD/NEA burnup credit calculational criticality benchmark Phase I-B results
International Nuclear Information System (INIS)
DeHart, M.D.; Parks, C.V.; Brady, M.C.
1996-06-01
In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is assumed for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Most deviations are less than 10%, and many are less than 5%. The exceptions are 149Sm, 151Sm, and 155Gd.
OECD/NEA burnup credit calculational criticality benchmark Phase I-B results
Energy Technology Data Exchange (ETDEWEB)
DeHart, M.D.; Parks, C.V. [Oak Ridge National Lab., TN (United States); Brady, M.C. [Sandia National Labs., Las Vegas, NV (United States)
1996-06-01
In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is assumed for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Most deviations are less than 10%, and many are less than 5%. The exceptions are 149Sm, 151Sm, and 155Gd.
OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results
Energy Technology Data Exchange (ETDEWEB)
DeHart, M.D.
1993-01-01
Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are {sup 149}Sm, {sup 151}Sm, and {sup 155}Gd.
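The kind of inter-participant comparison described in this abstract can be sketched as follows. This is an illustrative reconstruction only, not the Phase I-B evaluation scripts; the participant concentrations below are invented placeholders, not benchmark data.

```python
def relative_deviations(results):
    """Return each result's percent deviation from the sample average."""
    avg = sum(results) / len(results)
    return [100.0 * (r - avg) / avg for r in results]

# Hypothetical Pu-239 concentrations (g per tonne of initial U) reported
# by four depletion-code submissions for the same irradiation history:
pu239 = [5.61e3, 5.48e3, 5.72e3, 5.55e3]
devs = relative_deviations(pu239)
print([f"{d:+.1f}%" for d in devs])
```

A spread in which all deviations stay under 10% of the average corresponds to the level of actinide agreement the report describes.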
Handbook of critical experiments benchmarks
International Nuclear Information System (INIS)
Durst, B.M.; Bierman, S.R.; Clayton, E.D.
1978-03-01
Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exist. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input.
Benchmark On Sensitivity Calculation (Phase III)
Energy Technology Data Exchange (ETDEWEB)
Ivanova, T.; Laville, C. [Institut de Radioprotection et de Surete Nucleaire IRSN, BP 17, 92262 Fontenay aux Roses (France); Dyrda, J. [Atomic Weapons Establishment AWE, Aldermaston, Reading, RG7 4PR (United Kingdom); Mennerdahl, D. [E Mennerdahl Systems EMS, Starvaegen 12, 18357 Taeby (Sweden); Golovko, Y.; Raskach, K.; Tsiboulia, A. [Inst. for Physics and Power Engineering IPPE, 1, Bondarenko sq., 249033 Obninsk (Russian Federation); Lee, G. S.; Woo, S. W. [Korea Inst. of Nuclear Safety KINS, 62 Gwahak-ro, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Bidaud, A.; Sabouri, P. [Laboratoire de Physique Subatomique et de Cosmologie LPSC, CNRS-IN2P3/UJF/INPG, Grenoble (France); Patel, A. [U.S. Nuclear Regulatory Commission (NRC), Washington, DC 20555-0001 (United States); Bledsoe, K.; Rearden, B. [Oak Ridge National Laboratory ORNL, M.S. 6170, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Gulliford, J.; Michel-Sendis, F. [OECD/NEA, 12, Bd des Iles, 92130 Issy-les-Moulineaux (France)
2012-07-01
The sensitivities of the k{sub eff} eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
Energy Technology Data Exchange (ETDEWEB)
Leal, L.C.; Wright, R.Q.
1996-10-01
In this report we investigate the adequacy of the available {sup 233}U cross-section data for calculation of experimental critical systems. The {sup 233}U evaluations provided in two evaluated nuclear data libraries, the U.S. Data Bank [ENDF/B (Evaluated Nuclear Data Files)] and the Japanese Data Bank [JENDL (Japanese Evaluated Nuclear Data Library)], are examined. Calculations were performed for six thermal and ten fast experimental critical systems using the S{sub n} transport code XSDRNPM. To verify the performance of the {sup 233}U cross-section data for nuclear criticality safety applications in which the neutron energy spectrum is predominantly in the epithermal energy range, calculations of four numerical benchmark systems with energy spectra in the intermediate energy range were performed. These calculations serve only as an indication of the difference in calculated results that may be expected when the two {sup 233}U cross-section evaluations are used for problems with neutron spectra in the intermediate energy range. Additionally, comparisons of experimental and calculated central fission rate ratios were also made. The study has suggested that an ad hoc {sup 233}U evaluation based on the JENDL library provides better overall results for both fast and thermal experimental critical systems.
Energy Technology Data Exchange (ETDEWEB)
Leal, L.C.
1993-01-01
In this report we investigate the adequacy of the available {sup 233}U cross-section data for calculation of experimental critical systems. The {sup 233}U evaluations provided in two evaluated nuclear data libraries, the U.S. Data Bank [ENDF/B (Evaluated Nuclear Data Files)] and the Japanese Data Bank [JENDL (Japanese Evaluated Nuclear Data Library)], are examined. Calculations were performed for six thermal and ten fast experimental critical systems using the S{sub n} transport code XSDRNPM. To verify the performance of the {sup 233}U cross-section data for nuclear criticality safety applications in which the neutron energy spectrum is predominantly in the epithermal energy range, calculations of four numerical benchmark systems with energy spectra in the intermediate energy range were performed. These calculations serve only as an indication of the difference in calculated results that may be expected when the two {sup 233}U cross-section evaluations are used for problems with neutron spectra in the intermediate energy range. Additionally, comparisons of experimental and calculated central fission rate ratios were also made. The study has suggested that an ad hoc {sup 233}U evaluation based on the JENDL library provides better overall results for both fast and thermal experimental critical systems.
Benchmark calculation of SCALE-PC 4.3 CSAS6 module and burnup credit criticality analysis
Energy Technology Data Exchange (ETDEWEB)
Shin, Hee Sung; Ro, Seong Gy; Shin, Young Joon; Kim, Ik Soo [Korea Atomic Energy Research Institute, Taejon (Korea)
1998-12-01
Calculation biases of the SCALE-PC CSAS6 module for PWR spent fuel, metallized spent fuel and solutions of nuclear materials have been determined on the basis of the benchmark to be 0.01100, 0.02650 and 0.00997, respectively. With the aid of the code system, a nuclear criticality safety analysis for the spent fuel storage pool has been carried out to determine the minimum burnup of spent fuel required for safe storage. The criticality safety analysis is performed using three types of isotopic composition of spent fuel: ORIGEN2-calculated isotopic compositions; the conservative inventory obtained by multiplying the ORIGEN2-calculated isotopic compositions by isotopic correction factors; and the conservative inventory of only U, Pu and {sup 241}Am. The results show that the minimum burnups for the three cases are 990, 6190 and 7270 MWd/tU, respectively, for spent fuel with 5.0 wt% initial enrichment. (author). 74 refs., 68 figs., 35 tabs.
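The minimum-burnup logic described in this abstract can be sketched as follows. The bias value reuses the 0.01100 quoted above, but the keff-versus-burnup table, the 0.95 acceptance limit, and the search procedure are invented illustration, not the report's actual data or method.

```python
def k_adjusted(k_calc, bias):
    """Conservatively increase the calculated keff by the validation bias."""
    return k_calc + bias

def minimum_burnup(curve, bias, k_limit=0.95):
    """curve: (burnup in MWd/tU, calculated keff) pairs, keff decreasing
    with burnup. Return the first tabulated burnup whose bias-adjusted
    keff meets the limit, or None if no entry qualifies."""
    for burnup, k in curve:
        if k_adjusted(k, bias) <= k_limit:
            return burnup
    return None

# Hypothetical storage-pool reactivity curve for one enrichment:
curve = [(0, 0.970), (2000, 0.955), (4000, 0.942), (6000, 0.930)]
print(minimum_burnup(curve, bias=0.011))
```

Real analyses interpolate a finer keff(burnup) curve and add administrative margins; this sketch only shows why a larger bias pushes the minimum acceptable burnup upward.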
Energy Technology Data Exchange (ETDEWEB)
Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2002-02-01
The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated PWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of {+-}10% relative to the average, although some results, especially {sup 155}Eu and the gadolinium isotopes, exceeded the band, which will require further investigation. Pin-wise burnup results agreed well among the participants. The results for the infinite neutron multiplication factor k{sub {infinity}} also accorded well with each other for void fractions of 0 and 40%; however, some results deviated noticeably from the averaged value for the void fraction of 70%. (author)
International Nuclear Information System (INIS)
Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya
2002-02-01
The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated PWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of ±10% relative to the average, although some results, especially 155Eu and the gadolinium isotopes, exceeded the band, which will require further investigation. Pin-wise burnup results agreed well among the participants. The results for the infinite neutron multiplication factor k∞ also accorded well with each other for void fractions of 0 and 40%; however, some results deviated noticeably from the averaged value for the void fraction of 70%. (author)
Pool critical assembly pressure vessel facility benchmark
International Nuclear Information System (INIS)
Remec, I.; Kam, F.B.K.
1997-07-01
This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. Excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio obtained with the BUGLE-93 library for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used.
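The C/M averaging with exclusions described in this abstract can be sketched as follows. The dosimeter names and values below are invented, and the exclusion only schematically mirrors the neptunium case; this is not the PCA data set.

```python
def cm_average(pairs, exclude=()):
    """Arithmetic mean and standard deviation of C/M ratios,
    skipping any dosimeters listed in `exclude`."""
    ratios = [c / m for name, c, m in pairs if name not in exclude]
    mean = sum(ratios) / len(ratios)
    sd = (sum((r - mean) ** 2 for r in ratios) / len(ratios)) ** 0.5
    return mean, sd

# Hypothetical (dosimeter, calculated flux, measured flux) triples,
# normalized so the measured value is 1.0:
pairs = [("Fe-54", 0.91, 1.0), ("Ni-58", 0.94, 1.0),
         ("Np-237", 1.18, 1.0), ("Rh-103", 0.93, 1.0)]
mean_all, _ = cm_average(pairs)
mean_sel, _ = cm_average(pairs, exclude={"Np-237"})
print(f"{mean_sel:.3f}")
```

Excluding an overpredicted dosimeter lowers the average C/M, which is the direction of the adjustment the report describes for the BUGLE-93 case.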
Benchmark On Sensitivity Calculation (Phase III)
Energy Technology Data Exchange (ETDEWEB)
Ivanova, Tatiana [IRSN; Laville, Cedric [IRSN; Dyrda, James [Atomic Weapons Establishment; Mennerdahl, Dennis [E. Mennerdahl Systems; Golovko, Yury [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Raskach, Kirill [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Tsiboulia, Anatoly [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Lee, Gil Soo [Korea Institute of Nuclear Safety (KINS); Woo, Sweng-Woong [Korea Institute of Nuclear Safety (KINS); Bidaud, Adrien [Labratoire de Physique Subatomique et de Cosmolo-gie (LPSC); Patel, Amrit [NRC; Bledsoe, Keith C [ORNL; Rearden, Bradley T [ORNL; Gulliford, J. [OECD Nuclear Energy Agency
2012-01-01
The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
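The quantity being benchmarked above, a keff sensitivity coefficient, can be illustrated with a toy one-group bare-reactor model rather than any of the transport codes named in the abstract. Everything below (the model k = νΣf/(Σa + DB²) and the numbers) is an invented sketch, not Phase III data.

```python
def keff(nu_sigf, sig_a, leakage):
    """One-group keff: production over absorption plus leakage (DB^2)."""
    return nu_sigf / (sig_a + leakage)

def sensitivity(f, x0, rel_step=1e-4):
    """S = (x/k) dk/dx via central difference: the fractional change in
    keff per fractional change in the cross section x."""
    h = x0 * rel_step
    return x0 * (f(x0 + h) - f(x0 - h)) / (2 * h * f(x0))

nu_sigf, sig_a, leak = 0.0070, 0.0060, 0.0010   # macroscopic, cm^-1
s_fission = sensitivity(lambda x: keff(x, sig_a, leak), nu_sigf)
s_absorption = sensitivity(lambda x: keff(nu_sigf, x, leak), sig_a)
print(round(s_fission, 3), round(s_absorption, 3))
```

In this model the fission-production sensitivity is exactly +1 (keff is linear in νΣf) and the absorption sensitivity is -Σa/(Σa + DB²); codes such as TSUNAMI compute the analogous quantities with adjoint-weighted perturbation theory rather than finite differences.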
Present status of International Criticality Safety Benchmark Evaluation Project (ICSBEP)
International Nuclear Information System (INIS)
Miyoshi, Yoshinori
2000-01-01
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was designed to identify and evaluate a comprehensive set of critical experiment benchmark data. The data are compiled into a standardized format by reviewing the original and subsequently revised documentation and by calculating each experiment with standard criticality safety codes. Five handbooks of evaluated criticality safety benchmark experiments have been published since 1995. (author)
Benchmark calculation of subchannel analysis codes
International Nuclear Information System (INIS)
1996-02-01
In order to evaluate the analysis capabilities of various subchannel codes used in the thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and the major findings were as follows: (1) For single-phase flow mixing experiments between two channels, the calculated water temperature distributions along the flow direction agreed with the experimental results when the turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) For two-phase flow mixing experiments between two channels, at high water flow rates the calculated distributions of air and water flows in each channel agreed well with the experimental results. At low water flow rates, on the other hand, the air mixing rates were underestimated. (3) For two-phase flow mixing experiments among multiple channels, the calculated mass velocities at the channel exit under steady-state conditions agreed with experimental values to within about 10%. However, the predictive errors of the exit qualities were as high as 30%. (4) For critical heat flux (CHF) experiments, two different results were obtained. One code indicated that the CHFs calculated using the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that the CHFs were well predicted by the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) For droplet entrainment and deposition experiments, the predictive capability was significantly increased by improving the correlations. On the other hand, a remarkable discrepancy between the codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high-quality cases, while another overestimated the droplet flow rate and underestimated the liquid film flow rate in low-quality cases. (J.P.N.)
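The single-phase mixing of finding (1) can be sketched with a toy two-channel march along the flow direction. The mixing-rate form and every number below are invented for illustration; real subchannel codes solve coupled mass, momentum and energy equations.

```python
def mix_temperatures(t1, t2, w_mix, m1, m2, cp, dz, steps):
    """March two channel coolant temperatures (K) along the flow.
    Turbulent mixing w_mix (kg/m/s) exchanges heat over each axial
    step dz (m), driving t1 and t2 together; m1, m2 are mass flow
    rates (kg/s) and cp the specific heat (J/kg/K)."""
    for _ in range(steps):
        q = w_mix * cp * (t2 - t1) * dz   # heat exchanged over dz (W)
        t1 += q / (m1 * cp)
        t2 -= q / (m2 * cp)
    return t1, t2

t1, t2 = mix_temperatures(300.0, 320.0, w_mix=0.05, m1=0.3, m2=0.3,
                          cp=4180.0, dz=0.01, steps=400)
print(round(t1, 2), round(t2, 2))
```

Tuning w_mix, as the benchmark participants did with their turbulent mixing coefficients, controls how quickly the two temperatures converge along the channel; with equal flow rates the energy balance keeps t1 + t2 constant.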
IAEA sodium void reactivity benchmark calculations
International Nuclear Information System (INIS)
Hill, R.N.; Finck, P.J.
1992-01-01
In this paper, the IAEA 1992 "Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core" problem is evaluated. The proposed design is a large, axially heterogeneous, oxide-fueled fast reactor, as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculational methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented, and in Section 5 the influence of steel and interstitial sodium heterogeneity effects is estimated.
BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS
Energy Technology Data Exchange (ETDEWEB)
Dewberry, R.; Sigg, R.; Casella, V.; Bhatt, N.
2008-09-29
This report describes compiled benchmark tests conducted to probe and to demonstrate the extensive utility of the Ortec ISOTOPIC {gamma}-ray analysis computer program. The ISOTOPIC program analyzes {gamma}-ray spectra for specific acquisition configurations in order to apply finite-geometry correction factors and sample-matrix-container photon absorption correction factors. The analysis program provides an extensive set of preset acquisition configurations to which the user can add relevant parameters in order to build the geometry and absorption correction factors, which the program determines by calculation and from nuclear {gamma}-ray absorption and scatter data. The Analytical Development Section field nuclear measurement group of the Savannah River National Laboratory uses the Ortec ISOTOPIC analysis program extensively for analyses of solid waste and process holdup applied to passive {gamma}-ray acquisitions. Frequently the results of these {gamma}-ray acquisitions and analyses are used to determine compliance with facility criticality safety guidelines. Another use of the results is to designate 55-gallon drums of solid waste as qualified TRU waste or as low-level waste. Other examples of the application of the ISOTOPIC analysis technique to passive {gamma}-ray acquisitions include analyses of standard waste box items and unique solid waste configurations. In many passive {gamma}-ray acquisition circumstances the container and sample are sufficiently dense that the calculated energy-dependent transmission correction factors have intrinsic uncertainties in the range of 15%-100%. This is frequently the case when assaying 55-gallon drums of solid waste with masses of up to 400 kg and when assaying solid waste in extensive unique containers. Often an accurate assay of the transuranic content of these containers is not required, but rather a good, defensible designation as >100 nCi/g (TRU waste) or <100 nCi/g (low-level solid waste) is required.
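The 100 nCi/g designation mentioned in this abstract can be sketched as follows. This is a hypothetical illustration only: the attenuation and geometry handling inside ISOTOPIC is far more detailed, and the activity, correction factor, and drum mass below are invented.

```python
BQ_PER_NCI = 37.0  # 1 nCi = 37 Bq

def specific_activity_nci_per_g(measured_bq, transmission_corr, net_mass_g):
    """Apply an (assumed) transmission correction factor to the raw
    measured activity, then normalize by the waste mass."""
    return measured_bq * transmission_corr / BQ_PER_NCI / net_mass_g

def designate(nci_per_g, threshold=100.0):
    """Classify against the 100 nCi/g transuranic threshold."""
    return "TRU waste" if nci_per_g > threshold else "low-level waste"

# Hypothetical 250 kg drum with 2 MBq of measured transuranic activity:
act = specific_activity_nci_per_g(measured_bq=2.0e6, transmission_corr=1.8,
                                  net_mass_g=250e3)
print(designate(act))
```

Even a large relative uncertainty in the transmission correction often leaves the designation unchanged when the specific activity sits far from the 100 nCi/g threshold, which is the point the abstract makes.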
International Criticality Safety Benchmark Evaluation Project (ICSBEP) - ICSBEP 2015 Handbook
International Nuclear Information System (INIS)
Bess, John D.
2015-01-01
The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy (DOE). The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross-section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span approximately 69000 pages and contain 567 evaluations with benchmark specifications for 4874 critical, near-critical or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points for each, and 207 configurations that have been categorised as fundamental physics measurements that are relevant to criticality safety applications. New to the handbook are benchmark specifications for neutron activation foil and thermoluminescent dosimeter measurements performed at the SILENE critical assembly in Valduc, France as part of a joint venture in 2010 between the US DOE and the French Alternative Energies and Atomic Energy Commission (CEA). A photograph of this experiment is shown on the front cover. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these volumes.
The International Criticality Safety Benchmark Evaluation Project (ICSBEP)
International Nuclear Information System (INIS)
Briggs, J.B.
2003-01-01
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organisation for Economic Cooperation and Development (OECD) - Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Israel, Spain, and Brazil are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments.' The 2003 Edition of the Handbook contains benchmark model specifications for 3070 critical or subcritical configurations that are intended for validating computer codes that calculate effective neutron multiplication and for testing basic nuclear data. (author)
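One common way handbook benchmarks feed a validation, computing a bias from calculated keff values for critical experiments and deriving an upper subcritical limit (USL), can be sketched as below. This simple "bias minus margins" form and all the numbers are illustrative assumptions; real validations (e.g. per national guidance) use a more elaborate statistical treatment.

```python
def upper_subcritical_limit(keff_calc, admin_margin=0.05):
    """Derive a USL from calculated keff values for critical benchmarks
    (experimental keff = 1 by definition)."""
    n = len(keff_calc)
    mean = sum(keff_calc) / n
    bias = mean - 1.0                      # average (calculated - expected)
    sd = (sum((k - mean) ** 2 for k in keff_calc) / (n - 1)) ** 0.5
    # Take no credit for a positive bias; subtract a 2-sigma spread
    # and an administrative margin.
    return 1.0 + min(bias, 0.0) - 2 * sd - admin_margin

# Hypothetical calculated keff values for five handbook criticals:
usl = upper_subcritical_limit([0.9965, 0.9990, 1.0004, 0.9978, 0.9951])
print(round(usl, 4))
```

An application is then judged acceptable only if its calculated keff, with uncertainties, stays below the USL; this is the sense in which the benchmark specifications "establish minimum subcritical margins."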
COVE 2A Benchmarking calculations using NORIA
International Nuclear Information System (INIS)
Carrigan, C.R.; Bixler, N.E.; Hopkins, P.L.; Eaton, R.R.
1991-10-01
Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs
Compilation report of VHTRC temperature coefficient benchmark calculations
International Nuclear Information System (INIS)
Yasuda, Hideshi; Yamane, Tsuyoshi
1995-11-01
A calculational benchmark problem has been proposed by JAERI to an IAEA Coordinated Research Program, 'Verification of Safety Related Neutronic Calculation for Low-enriched Gas-cooled Reactors', to investigate the accuracy of calculation results obtained by using codes of the participating countries. The benchmark is based on assembly heating experiments at VHTRC, a pin-in-block type critical assembly. Requested calculation items are the cell parameters, effective multiplication factor, temperature coefficient of reactivity, reaction rates, fission rate distribution, etc. Seven institutions from five countries have participated in the benchmark. Calculation results are summarized in this report with some remarks by the authors. Each institute analyzed the problem with the calculation code system prepared for the HTGR development of its own country. The values of the most important parameter, k eff , from all institutes showed good agreement with each other and with the experimental ones within 1%. The temperature coefficient agreed within 13%. The values of several cell parameters calculated by some institutes did not agree with those of the others. It will be necessary to recheck the calculation conditions to obtain better agreement. (J.P.N.)
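The temperature coefficient compared in this benchmark can be illustrated with a short sketch. The keff values and temperatures below are hypothetical (not from the VHTRC report); only the reactivity algebra, rho = (k - 1)/k, is standard:

```python
def temperature_coefficient(k1, t1, k2, t2):
    """Isothermal temperature coefficient of reactivity (delta-rho / delta-T).

    Reactivity is rho = (k - 1)/k, so the coefficient between two
    temperature states is (rho2 - rho1)/(t2 - t1).
    """
    rho1 = (k1 - 1.0) / k1
    rho2 = (k2 - 1.0) / k2
    return (rho2 - rho1) / (t2 - t1)

# Hypothetical keff values at 25 C and 200 C (illustrative only):
alpha = temperature_coefficient(1.0100, 25.0, 0.9900, 200.0)
print(f"{alpha * 1e5:.1f} pcm/K")  # negative, as expected for this core type
```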
Benchmarking Ortec ISOTOPIC measurements and calculations
International Nuclear Information System (INIS)
This paper describes eight compiled benchmark tests conducted to probe and demonstrate the extensive utility of the Ortec ISOTOPIC gamma-ray analysis software program. The paper describes tests of the program's capability to compute finite-geometry correction factors and sample-matrix-container photon absorption correction factors. Favorable results were obtained in all benchmark tests. (author)
Standard Guide for Benchmark Testing of Light Water Reactor Calculations
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...
Energy Technology Data Exchange (ETDEWEB)
J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori
2009-09-01
High-quality integral benchmark experiments have always been a priority for criticality safety. However, interest in integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of future criticality safety needs to support next generation reactor and advanced fuel cycle concepts. The importance of drawing upon existing benchmark data is becoming more apparent because of dwindling availability of critical facilities worldwide and the high cost of performing new experiments. Integral benchmark data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the International Handbook of Reactor Physics Benchmark Experiments are widely used. Benchmark data have been added to these two handbooks since the last Nuclear Criticality Safety Division Topical Meeting in Knoxville, Tennessee (September 2005). This paper highlights these additions.
Danish calculations of the NEACRP pin-power benchmark
International Nuclear Information System (INIS)
Hoejerup, C.F.
1994-01-01
This report describes calculations performed for the NEACRP pin-power benchmark. The calculations are made with the code NEM2D, a diffusion theory code based on the nodal expansion method. (au) (15 tabs., 15 ills., 5 refs.)
International Nuclear Information System (INIS)
Dos Santos, Adimir; Siqueira, Paulo de Tarso D.; Andrade e Silva, Graciete Simões; Grant, Carlos; Tarazaga, Ariel E.; Barberis, Claudia
2013-01-01
In 2008 the Atomic Energy National Commission (CNEA) of Argentina and the Brazilian Institute of Energy and Nuclear Research (IPEN), under the frame of the Argentine-Brazilian Nuclear Energy Agreement (COBEN), included, among many others, the project “Validation and Verification of Calculation Methods used for Research and Experimental Reactors”. At that time, it was established that the validation was to be performed with models implemented in the deterministic codes HUEMUL and PUMA (cell and reactor codes) developed by CNEA and those implemented in MCNP by CNEA and IPEN. The necessary data for these validations would correspond to theoretical-experimental reference cases in the research reactor IPEN/MB-01 located in São Paulo, Brazil. The staff of the Reactor and Nuclear Power Studies group (SERC) of CNEA, on the Argentine side, performed calculations with deterministic models (HUEMUL-PUMA) and probabilistic methods (MCNP), modeling a great number of physical situations of the reactor that had previously been studied and modeled by members of the Center of Nuclear Engineering of IPEN, whose results were extensively provided to CNEA. In this paper results for critical configurations are shown. (author)
Benchmark assemblies of the Los Alamos Critical Assemblies Facility
International Nuclear Information System (INIS)
Dowdy, E.J.
1985-01-01
Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described
Benchmark assemblies of the Los Alamos critical assemblies facility
International Nuclear Information System (INIS)
Dowdy, E.J.
1986-01-01
Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described. (author)
International Nuclear Information System (INIS)
Lara, Rafael G.; Maiorino, Jose R.
2013-01-01
This work aimed at the implementation and qualification of the MCNP code on a supercomputer of the Universidade Federal do ABC, so that a next-generation simulation tool may be made available for precise calculations of nuclear reactors and systems subject to radiation. The implementation of this tool will have multidisciplinary applications, covering various areas of engineering (nuclear, aerospace, biomedical), radiation physics and others
WIPP Benchmark calculations with the large strain SPECTROM codes
Energy Technology Data Exchange (ETDEWEB)
Callahan, G.D.; DeVries, K.L. [RE/SPEC, Inc., Rapid City, SD (United States)
1995-08-01
This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-33 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) Problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers including ten clay seams, of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. The simplified heated room problem does, however, provide a calculational check case where the small-strain formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large-strain codes compare favorably with results from other codes used to solve the problems.
MCNP and OMEGA criticality calculations
International Nuclear Information System (INIS)
Seifert, E.
1998-04-01
The reliability of OMEGA criticality calculations is shown by a comparison with calculations by the validated and widely used Monte Carlo code MCNP. The criticality of 16 assemblies with uranium as the fissile material is calculated with the codes MCNP (Version 4A, ENDF/B-V cross sections), MCNP (Version 4B, ENDF/B-VI cross sections), and OMEGA. Identical calculation models are used for the three codes. The results are compared mutually and with the experimental criticality of the assemblies. (orig.)
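Code-to-code comparisons of this kind are usually quoted as reactivity differences. A minimal sketch, with hypothetical keff values (not from the report), of converting a pair of multiplication factors into a difference in pcm:

```python
def diff_pcm(k_a, k_b):
    """Reactivity difference between two multiplication factors,
    rho_a - rho_b with rho = (k - 1)/k, expressed in pcm (1e-5)."""
    return ((k_a - 1.0) / k_a - (k_b - 1.0) / k_b) * 1e5

# Hypothetical keff values for one assembly from two codes (illustrative only):
print(f"{diff_pcm(1.0012, 0.9995):+.0f} pcm")
```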
Criticality analysis of ASTRA critical facility using CITALDI diffusion calculation
International Nuclear Information System (INIS)
Zuhair; Aziz, Ferhat; Suwoto
2002-01-01
The ASTRA critical facility at the Russian Research Center - Kurchatov Institute is being used to investigate the reactor physics characteristics of the High Temperature Gas Cooled Reactor (HTGCR) PBMR, South Africa. ASTRA was built with the main purpose of providing various benchmark experiments to obtain integral data for the PBMR core design using various simulation assemblies. In this paper, the criticality of the ASTRA critical facility was analysed based on the calculation results of the CITALDI diffusion code in 2-D R-Z reactor geometry. The cell calculation code WIMS/D4, using a spherical model and four neutron energy groups, was employed to generate group constants for the fuel zone, mixing zone and reflector. The first criticality of ASTRA is expected at a core height of 259 cm. Compared to MCNP-4A and SRAC95-CITATION calculations, the relative differences are about 4.43% and 12.61%, respectively
Benchmark Calculations of Noncovalent Interactions of Halogenated Molecules
Czech Academy of Sciences Publication Activity Database
Řezáč, Jan; Riley, Kevin Eugene; Hobza, Pavel
2012-01-01
Vol. 8, No. 11 (2012), p. 4285-4292 ISSN 1549-9618 R&D Projects: GA ČR GBP208/12/G016 Institutional support: RVO:61388963 Keywords: halogenated molecules * noncovalent interactions * benchmark calculations Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 5.389, year: 2012
The ORSphere Benchmark Evaluation and Its Potential Impact on Nuclear Criticality Safety
Energy Technology Data Exchange (ETDEWEB)
John D. Bess; Margaret A. Marshall; J. Blair Briggs
2013-10-01
In the early 1970s, critical experiments using an unreflected metal sphere of highly enriched uranium (HEU) were performed with the goal of providing a “very accurate description…as an ideal benchmark for calculational methods and cross-section data files.” Two near-critical configurations of the Oak Ridge Sphere (ORSphere) were evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook). The results from those benchmark experiments were then compared with additional unmoderated and unreflected HEU metal benchmark experiment configurations currently found in the ICSBEP Handbook. For basic geometries (spheres, cylinders, and slabs) the eigenvalues calculated using MCNP5 and ENDF/B-VII.0 were within 3σ of their respective benchmark values. There appears to be generally good agreement between calculated and benchmark values for spherical and slab geometry systems. Cylindrical geometry configurations tended to calculate low, including more complex bare HEU metal systems containing cylinders. The ORSphere experiments do not calculate within their 1σ uncertainty, and there is a possibility that the effect of the measured uncertainties for the GODIVA I benchmark may need to be reevaluated. There is significant scatter in the calculations for the highly-correlated ORCEF cylinder experiments, which are constructed from close-fitting HEU discs and annuli. Selection of a nuclear data library can have a larger impact on calculated eigenvalue results than the variation found within calculations of a given experimental series, such as the ORCEF cylinders, using a single nuclear data set.
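The "within 3σ" style of agreement check used in such comparisons can be sketched as follows; the keff and uncertainty values here are hypothetical, not taken from the ICSBEP Handbook:

```python
def sigma_deviation(k_calc, k_bench, u_bench, u_calc=0.0):
    """Deviation of a calculated eigenvalue from its benchmark value,
    in units of the combined (quadrature) standard uncertainty."""
    u = (u_bench**2 + u_calc**2) ** 0.5
    return (k_calc - k_bench) / u

# Hypothetical example (illustrative values only):
z = sigma_deviation(k_calc=0.9985, k_bench=1.0000, u_bench=0.0011, u_calc=0.0001)
print(f"{z:+.2f} sigma")  # |z| < 3 would count as agreement at the 3-sigma level
```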
Benchmark calculations of thermal reaction rates. I - Quantal scattering theory
Chatfield, David C.; Truhlar, Donald G.; Schwenke, David W.
1991-01-01
The thermal rate coefficient for the prototype reaction H + H2 yields H2 + H with zero total angular momentum is calculated by summing, averaging, and numerically integrating state-to-state reaction probabilities calculated by time-independent quantum-mechanical scattering theory. The results are very carefully converged with respect to all numerical parameters in order to provide high-precision benchmark results for confirming the accuracy of new methods and testing their efficiency.
Benchmarking of HEU Metal Annuli Critical Assemblies with Internally Reflected Graphite Cylinder
Energy Technology Data Exchange (ETDEWEB)
Xiaobo, Liu; Bess, John D.; Marshall, Margaret A.
2016-09-01
Three experimental critical-assembly configurations, performed in 1963 at the Oak Ridge Critical Experiment Facility, assembled from HEU metal annuli of three different diameter combinations (15-9 inches, 15-7 inches and 13-7 inches) with an internally reflected graphite cylinder, are evaluated and benchmarked. The experimental uncertainties, which are 0.00055, 0.00055 and 0.00055 respectively, and the biases to the detailed benchmark models, which are -0.00179, -0.00189 and -0.00114 respectively, were determined, and experimental benchmark keff results were obtained for both detailed and simplified models. The calculation results for both detailed and simplified models using MCNP6-1.0 and ENDF/B-VII.1 agree well with the benchmark experimental results, with differences of less than 0.2%. These are acceptable benchmark experiments for inclusion in the ICSBEP Handbook.
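Assuming the usual ICSBEP convention that the benchmark-model keff is the experimental keff (1.0 for a critical configuration) plus the evaluated model bias, the benchmark values implied by the biases quoted above can be sketched as:

```python
# Biases and experimental uncertainty taken from the abstract; the
# keff_exp = 1.0 starting point is an assumption (critical configurations).
biases = {"15-9 in.": -0.00179, "15-7 in.": -0.00189, "13-7 in.": -0.00114}
u_exp = 0.00055  # experimental uncertainty, identical for the three cases

for config, bias in biases.items():
    k_bench = 1.0 + bias
    print(f"{config}: k_bench = {k_bench:.5f} +/- {u_exp:.5f}")
```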
Los Alamos benchmarks: calculations based on ENDF/B-V data
International Nuclear Information System (INIS)
Kidman, R.B.
1981-11-01
The new and revised benchmark specifications for nine Los Alamos National Laboratory critical assemblies are used to compute the entire set of parameters that were measured in the experiments. A comparison between the computed and experimental values provides a measure of the adequacy of the specifications, cross sections, and physics codes used in the calculations
HTGR dissolver criticality scoping calculation
International Nuclear Information System (INIS)
Shaffer, C.J.
1977-01-01
A criticality scoping calculation was performed for a dissolver designed to dissolve HTGR fuels. The calculation shows that the dissolver goes critical at an H/x (hydrogen-to-fuel atom ratio) of about 34, with k-effective peaking at 1.18 at an H/x of about 180
Energy Technology Data Exchange (ETDEWEB)
J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama
2008-09-01
Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted, and the future of the two projects is discussed.
Benchmark density functional theory calculations for nanoscale conductance
DEFF Research Database (Denmark)
Strange, Mikkel; Bækgaard, Iben Sig Buur; Thygesen, Kristian Sommer
2008-01-01
We present a set of benchmark calculations for the Kohn-Sham elastic transmission function of five representative single-molecule junctions. The transmission functions are calculated using two different density functional theory methods, namely an ultrasoft pseudopotential plane-wave code in combination with maximally localized Wannier functions and the norm-conserving pseudopotential code SIESTA which applies an atomic orbital basis set. All calculations have been converged with respect to the supercell size and the number of k(parallel to) points in the surface plane. For all systems we find ...
JNC results of BN-600 benchmark calculation (phase 4)
International Nuclear Information System (INIS)
Ishikawa, Makoto
2003-01-01
The present work gives the results of JNC, Japan, for Phase 4 of the BN-600 core benchmark problem (Hex-Z fully MOX-fuelled core model) organized by IAEA. The benchmark specification is based on the RCM report of the IAEA CRP on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of LMFR Reactivity Effects, Action 3.12' (calculations for the BN-600 fully MOX-fuelled core for subsequent transient analyses). The JENDL-3.2 nuclear data library was used to calculate 70-group ABBN-type group constants. Two cell models were applied for fuel assembly and control rod calculations: a homogeneous and a heterogeneous (cylindrical supercell) model. The basic diffusion calculation was a three-dimensional Hex-Z, 18-group model (CITATION code). Transport calculations were 18-group, three-dimensional (NSHEX code), based on the Sn-transport nodal method developed at JNC. The generated thermal power per fission was based on Sher's data corrected on the basis of the ENDF/B-IV data library. Calculation results are presented in tables for intercomparison
Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation
Energy Technology Data Exchange (ETDEWEB)
John D. Bess; J. Blair Briggs; David W. Nigg
2009-11-01
One of the challenges that today's new workforce of nuclear criticality safety engineers face is the opportunity to provide assessment of nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.
Monte Carlo benchmark calculations for 400 MWth PBMR core
International Nuclear Information System (INIS)
Kim, H. C.; Kim, J. K.; Kim, S. Y.; Noh, J. M.
2007-01-01
A large interest in high-temperature gas-cooled reactors (HTGR) has arisen in connection with hydrogen production in recent years. In this study, as part of the work for establishing a Monte Carlo computation system for HTGR core analysis, some benchmark calculations for a pebble-type HTGR were carried out using the MCNP5 code. The core of the 400 MWth Pebble-bed Modular Reactor (PBMR) was selected as the benchmark model. Recently, the IAEA CRP5 neutronics and thermal-hydraulics benchmark problem was proposed for testing existing methods for HTGRs to analyze the neutronics and thermal-hydraulic behavior for the design and safety evaluations of the PBMR. This study deals with the neutronic benchmark problems for fresh fuel and cold conditions (Case F-1) and for the first core loading with given number densities (Case F-2) proposed for the PBMR. After detailed MCNP modeling of the whole facility, benchmark calculations were performed. The spherical fuel region of a fuel pebble is divided into cubic lattice elements in order to model a fuel pebble which contains, on average, 15000 CFPs (Coated Fuel Particles). Each element contains one CFP at its center. In this study, the side length of each cubic lattice element needed to hold the same amount of fuel was calculated to be 0.1635 cm. The remaining volume of each lattice element was filled with graphite. All 5 different concentric shells of the CFP were modeled. The PBMR annular core consists of approximately 452000 pebbles in the benchmark problems. In Case F-1, where the core was filled with only fresh fuel pebbles, a BCC (body-centered cubic) lattice model was employed in order to achieve the random packing of the core with a packing fraction of 0.61. The BCC lattice was also employed with the size of the moderator pebble increased in a manner that reproduces the specified F/M ratio of 1:2 while preserving the packing fraction of 0.61 in Case F-2. The calculations were pursued with the ENDF/B-VI cross-section library and used the sab2002 S(α,
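The quoted side length of the cubic lattice element can be checked with a short calculation. The fuelled-zone radius of 2.5 cm used here is an assumption (it is the standard PBMR pebble value, not stated in the abstract):

```python
import math

# Side length of the cubic lattice element holding one CFP, chosen so that
# 15000 elements exactly fill the fuelled zone of a pebble.
r_fuel_zone = 2.5          # cm, assumed fuelled-zone radius of a PBMR pebble
n_cfp = 15000              # coated fuel particles per pebble (from the abstract)
v_zone = (4.0 / 3.0) * math.pi * r_fuel_zone**3
side = (v_zone / n_cfp) ** (1.0 / 3.0)
print(f"side = {side:.4f} cm")  # ~0.163 cm, consistent with the quoted 0.1635 cm
```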
Use of benchmark criticals in fast reactor code validation
International Nuclear Information System (INIS)
Curtis, R.; Kelber, C.; Luck, L.; Smith, L.R.
1980-01-01
The problem discussed is how to check the accuracy of the SIMMER code used for the analysis of hypothetical core disruptive accidents. A three-step process is used for code checking: benchmark criticals in ZPR-9; Monte Carlo analog calculations to isolate errors arising from cross-section data and to establish a secondary standard; and comparison between the secondary standard, SIMMER neutronics, and other transport approximations, for configurations of interest. The VIM Monte Carlo code is used as such a secondary standard. The analysis with VIM of the experiments in ZPR-9 using ENDF/B-IV cross-section data yields the following conclusions: (1) A systematic change in bias exists in the analysis going from a reference configuration to a slumped configuration. This change is larger than β and must be attributed to errors in cross-section data, since the Monte Carlo simulation reproduces every significant detail of the experiment. (2) Transport (SN) calculations show the same trends in the bias as the Monte Carlo studies. Thus, the processes used in the construction of group cross-sections appear adequate. Further, the SN-VIM agreement appears to argue against gross errors in code or input. (3) Comparison with diffusion theory (using the same cross-section set) indicates that conventional diffusion theory has an opposite change in bias. (4) The change in bias in calculating the reactivity worth of slumped fuel is dramatic: transport theory overpredicts positive worths while diffusion theory underpredicts. Thus, reactivity ramp rates at prompt critical may be substantially underpredicted if there has been substantial fuel or coolant movement and diffusion theory has been used
239Pu Resonance Evaluation for Thermal Benchmark System Calculations
Leal, L. C.; Noguere, G.; de Saint Jean, C.; Kahler, A. C.
2014-04-01
Analyses of thermal plutonium solution critical benchmark systems have indicated a deficiency in the 239Pu resonance evaluation. To investigate possible solutions to this issue, the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) Working Party for Evaluation Cooperation (WPEC) established Subgroup 34 to focus on the reevaluation of the 239Pu resolved resonance parameters. In addition, the impacts of the prompt neutron multiplicity (νbar) and the prompt neutron fission spectrum (PFNS) have been investigated. The objective of this paper is to present the results of the 239Pu resolved resonance evaluation effort.
The International Criticality Safety Benchmark Evaluation Project on the Internet
International Nuclear Information System (INIS)
Briggs, J.B.; Brennan, S.A.; Scott, L.
2000-01-01
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in October 1992 by the US Department of Energy's (DOE's) defense programs and is documented in the Transactions of numerous American Nuclear Society and International Criticality Safety Conferences. The work of the ICSBEP is documented as an Organization for Economic Cooperation and Development (OECD) handbook, International Handbook of Evaluated Criticality Safety Benchmark Experiments. The ICSBEP Internet site was established in 1996 and its address is http://icsbep.inel.gov/icsbep. A copy of the ICSBEP home page is shown in Fig. 1. The ICSBEP Internet site contains five primary links. Internal sublinks to other relevant sites are also provided within the ICSBEP Internet site. A brief description of each of the five primary ICSBEP Internet site links is given
AGING FACILITY CRITICALITY SAFETY CALCULATIONS
International Nuclear Information System (INIS)
C.E. Sanders
2004-01-01
The purpose of this design calculation is to revise and update the previous criticality calculation for the Aging Facility (documented in BSC 2004a). This design calculation will also demonstrate and ensure that the storage and aging operations to be performed in the Aging Facility meet the criticality safety design criteria in the ''Project Design Criteria Document'' (Doraswamy 2004, Section 4.9.2.2), and the functional nuclear criticality safety requirement described in the ''SNF Aging System Description Document'' (BSC [Bechtel SAIC Company] 2004f, p. 3-12). The scope of this design calculation covers the systems and processes for aging commercial spent nuclear fuel (SNF) and staging Department of Energy (DOE) SNF/High-Level Waste (HLW) prior to its placement in the final waste package (WP) (BSC 2004f, p. 1-1). Aging commercial SNF is a thermal management strategy, while staging DOE SNF/HLW will make loading of WPs more efficient (note that aging DOE SNF/HLW is not needed since these wastes are not expected to exceed the thermal limits for emplacement) (BSC 2004f, p. 1-2). The description of the changes in this revised document is as follows: (1) Include DOE SNF/HLW in addition to commercial SNF per the current ''SNF Aging System Description Document'' (BSC 2004f). (2) Update the evaluation of Category 1 and 2 event sequences for the Aging Facility as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2004c, Section 7). (3) Further evaluate the design and criticality controls required for a storage/aging cask, referred to as MGR Site-specific Cask (MSC), to accommodate commercial fuel outside the content specification in the Certificate of Compliance for the existing NRC-certified storage casks. In addition, evaluate the design required for the MSC that will accommodate DOE SNF/HLW. This design calculation will achieve the objective of providing the criticality safety results to support the preliminary design of the Aging Facility.
Benchmark Kantorovich calculations for three particles on a line
Energy Technology Data Exchange (ETDEWEB)
Chuluunbaatar, O [Joint Institute for Nuclear Research, Dubna, Moscow region 141980 (Russian Federation); Gusev, A A [Joint Institute for Nuclear Research, Dubna, Moscow region 141980 (Russian Federation); Kaschiev, M S [Institute of Mathematics and Informatics, Sofia (Bulgaria); Kaschieva, V A [Department of Mathematics, Technical University, Sofia (Bulgaria); Amaya-Tapia, A [Centro de Ciencias Fisicas, UNAM, Cuernavaca, Morelos (Mexico); Larsen, S Y [Temple University, Philadelphia (United States); Vinitsky, S I [Joint Institute for Nuclear Research, Dubna, Moscow region 141980 (Russian Federation)
2006-01-28
A Kantorovich approach is used to solve for the eigenvalue and the scattering properties associated with a multi-dimensional Schroedinger equation. It is developed within the framework of a conventional finite element representation of solutions over a hyperspherical coordinate space. Convergence and efficiency of the proposed schemes are demonstrated in the case of an exactly solvable 'benchmark' model of three identical particles on a line, with zero-range attractive pair potentials and below the three-body threshold. In this model all the 'effective' potentials, and 'coupling matrix elements', of the set of resulting close-coupling radial equations, are calculated using analytical formulae. Variational formulations are developed for both the bound-state energy and the elastic scattering problem. The corresponding numerical schemes are devised using a finite element method of high order accuracy.
AGING FACILITY CRITICALITY SAFETY CALCULATIONS
Energy Technology Data Exchange (ETDEWEB)
C.E. Sanders
2004-09-10
The purpose of this design calculation is to revise and update the previous criticality calculation for the Aging Facility (documented in BSC 2004a). This design calculation will also demonstrate and ensure that the storage and aging operations to be performed in the Aging Facility meet the criticality safety design criteria in the ''Project Design Criteria Document'' (Doraswamy 2004, Section 4.9.2.2), and the functional nuclear criticality safety requirement described in the ''SNF Aging System Description Document'' (BSC [Bechtel SAIC Company] 2004f, p. 3-12). The scope of this design calculation covers the systems and processes for aging commercial spent nuclear fuel (SNF) and staging Department of Energy (DOE) SNF/High-Level Waste (HLW) prior to its placement in the final waste package (WP) (BSC 2004f, p. 1-1). Aging commercial SNF is a thermal management strategy, while staging DOE SNF/HLW will make loading of WPs more efficient (note that aging DOE SNF/HLW is not needed since these wastes are not expected to exceed the thermal limits for emplacement) (BSC 2004f, p. 1-2). The description of the changes in this revised document is as follows: (1) Include DOE SNF/HLW in addition to commercial SNF per the current ''SNF Aging System Description Document'' (BSC 2004f). (2) Update the evaluation of Category 1 and 2 event sequences for the Aging Facility as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2004c, Section 7). (3) Further evaluate the design and criticality controls required for a storage/aging cask, referred to as MGR Site-specific Cask (MSC), to accommodate commercial fuel outside the content specification in the Certificate of Compliance for the existing NRC-certified storage casks. In addition, evaluate the design required for the MSC that will accommodate DOE SNF/HLW. This design calculation will achieve the objective of providing the criticality safety results to support the preliminary design of the Aging Facility.
TMI criticality studies: lower vessel rubble and analytical benchmarking
International Nuclear Information System (INIS)
Westfall, R.M.; Knight, J.R.; Fox, P.B.; Hermann, O.W.; Turner, J.C.
1986-05-01
A bounding strategy has been adopted for assuring subcriticality during all TMI-2 defueling operations. The strategy is based upon establishing a safe soluble boron level for the entire reactor core in an optimum reactivity configuration. This paper presents the determination of a fuel rubble model which yields a maximum infinite lattice multiplication factor and the subsequent application of cell-averaged constants in finite system analyses. Included in the analyses are the effects of fuel burnup determined from a simplified power history of the reactor. A discussion of the analytical methods employed and the determination of an analytical bias with benchmark critical experiments completes the presentation. 14 refs., 17 tabs
MCNP benchmark analyses of critical experiments for the Space Nuclear Thermal Propulsion program
Selcow, Elizabeth C.; Cerbone, Ralph J.; Ludewig, Hans; Mughabghab, Said F.; Schmidt, Eldon; Todosow, Michael; Parma, Edward J.; Ball, Russell M.; Hoovler, Gary S.
1993-01-01
Benchmark analyses have been performed of Particle Bed Reactor (PBR) critical experiments (CX) using the MCNP radiation transport code. The experiments have been conducted at the Sandia National Laboratory reactor facility in support of the Space Nuclear Thermal Propulsion (SNTP) program. The test reactor is a nineteen element water moderated and reflected thermal system. A series of integral experiments have been carried out to test the capabilities of the radiation transport codes to predict the performance of PBR systems. MCNP was selected as the preferred radiation analysis tool for the benchmark experiments. Comparison between experimental and calculational results indicate close agreement. This paper describes the analyses of benchmark experiments designed to quantify the accuracy of the MCNP radiation transport code for predicting the performance characteristics of PBR reactors.
Calculation of WWER-440 nuclide benchmark (CB2)
International Nuclear Information System (INIS)
Prodanova, R
2005-01-01
The present paper shows the results obtained at the INRNE, Sofia, Bulgaria, for the benchmark task announced by L. Markova at the sixth Symposium of AER (Atomic Energy Research), Kirkkonummi, Finland, 1996 (Authors)
Benchmark Evaluation of the Medium-Power Reactor Experiment Program Critical Configurations
Energy Technology Data Exchange (ETDEWEB)
Margaret A. Marshall; John D. Bess
2013-02-01
A series of small, compact critical assembly (SCCA) experiments was performed in 1962-1965 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for the Medium-Power Reactor Experiment (MPRE) program. The MPRE was a stainless-steel-clad, highly enriched uranium (HEU)-O2 fuelled, BeO-reflected reactor designed to provide electrical power to space vehicles. Cooling and heat transfer were to be achieved by boiling potassium in the reactor core and passing the vapor directly through a turbine. Graphite- and beryllium-reflected assemblies were constructed at ORCEF to verify the critical mass, power distribution, and other reactor physics measurements needed to validate reactor calculations and reactor physics methods. The experimental series was broken into three parts, with the third portion representing the beryllium-reflected measurements. The latter experiments are of interest for validating current reactor design efforts for a fission surface power reactor. The entire series has been evaluated as acceptable benchmark experiments and submitted for publication in the International Handbook of Evaluated Criticality Safety Benchmark Experiments and in the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
An improved benchmark model for the Big Ten critical assembly - 021
International Nuclear Information System (INIS)
Mosteller, R.D.
2010-01-01
A new benchmark specification is developed for the BIG TEN uranium critical assembly. The assembly has a fast spectrum, and its core contains approximately 10 wt.% enriched uranium. Detailed specifications for the benchmark are provided, and results from the MCNP5 Monte Carlo code using a variety of nuclear-data libraries are given for this benchmark and two others. (authors)
Criticality experiments to provide benchmark data on neutron flux traps
International Nuclear Information System (INIS)
Bierman, S.R.
1988-06-01
The experimental measurements covered by this report were designed to provide benchmark-type data on water-moderated LWR-type fuel arrays containing neutron flux traps. The experiments were performed at the US Department of Energy Hanford Critical Mass Laboratory, operated by Pacific Northwest Laboratory. The experimental assemblies consisted of 2 × 2 arrays of 4.31 wt% 235U-enriched UO2 fuel rods, uniformly arranged in water on a 1.891 cm square center-to-center spacing. Neutron flux traps were created between the fuel units using metal plates containing varying amounts of boron. Measurements were made to determine the effect that boron loading and distance between the fuel and flux trap had on the amount of fuel required for criticality. Also, measurements were made, using the pulse neutron source technique, to determine the effect of boron loading on the effective neutron multiplication constant. On two assemblies, reaction rate measurements were made using solid state track recorders to determine absolute fission rates in 235U and 238U. 14 refs., 12 figs., 7 tabs
Stationary PWR-calculations by means of LWRSIM at the NEACRP 3D-LWRCT benchmark
International Nuclear Information System (INIS)
Van de Wetering, T.F.H.
1993-01-01
Within the framework of participation in an international benchmark, calculations were executed by means of an adjusted version of the computer code Light Water Reactor SIMulation (LWRSIM) for three-dimensional reactor core calculations of pressurized water reactors. The 3-D LWR Core Transient Benchmark was set up to compare 3-D computer codes for transient calculations in LWRs. Participation in the benchmark provided more insight into the accuracy of the code when applied to pressurized water reactors other than the Borssele nuclear power plant in the Netherlands, for which the code was originally developed and used
Analysis and evaluation of critical experiments for validation of neutron transport calculations
International Nuclear Information System (INIS)
Bazzana, S.; Blaumann, H.; Marquez Damian, J.I.
2009-01-01
The calculation schemes, computational codes and nuclear data used in neutronic design require validation to obtain reliable results. In the nuclear criticality safety field this reliability also translates into a higher level of safety in procedures involving fissile material. The International Criticality Safety Benchmark Evaluation Project is an OECD/NEA activity led by the United States, in which participants from over 20 countries evaluate and publish criticality safety benchmarks. The product of this project is a set of benchmark experiment evaluations that are published annually in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. With the recent participation of Argentina, this information is now available for use by the neutron calculation and criticality safety groups in Argentina.
Preparation of a criticality benchmark based on experiments performed at the RA-6 reactor
International Nuclear Information System (INIS)
Bazzana, S.; Blaumann, H.; Marquez Damian, J.I.
2009-01-01
The operation and fuel management of a reactor relies on neutronic modeling to predict its behavior in operational and accidental conditions. This modeling uses computational tools and nuclear data that must be contrasted against benchmark experiments to ensure their accuracy. These benchmarks have to be simple enough to model with the desired computer code and must have quantified and bounded uncertainties. The start-up of the RA-6 reactor, the final stage of the conversion and renewal project, allowed us to obtain experimental results with fresh fuel. In this condition the material composition of the fuel elements is precisely known, which contributes to a more precise modeling of the critical condition. These experimental results are useful to evaluate the precision of the models used to design the core, based on U3Si2 fuel and cadmium wires as burnable poisons, for which no data were previously available. The analysis of this information can be used to validate models for the analysis of similar configurations, which is necessary to follow the operational history of the reactor and perform fuel management. The analysis of the results and the generation of the model were done following the methodology established by the International Criticality Safety Benchmark Evaluation Project, which gathers and analyzes experimental data for critical systems. The results were very satisfactory, resulting in a value for the multiplication factor of the model of 1.0000 ± 0.0044 and a calculated value of 0.9980 ± 0.0001 using MCNP 5 and ENDF/B-VI. The use of as-built dimensions and compositions, and the sensitivity analysis, allowed us to review the design calculations and analyze their precision, accuracy and error compensation.
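The comparison quoted in the abstract above can be sketched numerically: the calculated multiplication factor is checked against the benchmark-model value by combining the two quoted uncertainties in quadrature. This is a minimal illustration using the numbers from the abstract; the quadrature combination is a standard convention assumed here, not a procedure stated in the source.

```python
# Hypothetical sketch: bias of a calculated k-eff against a benchmark value.
# Numbers are taken from the RA-6 abstract above; the quadrature rule for
# combining uncertainties is an assumption for illustration.
import math

k_benchmark, u_benchmark = 1.0000, 0.0044  # benchmark-model k-eff ± uncertainty
k_calc, u_calc = 0.9980, 0.0001            # MCNP 5 / ENDF/B-VI k-eff ± stat. uncertainty

bias = k_calc - k_benchmark                          # calculational bias in k-eff
u_combined = math.sqrt(u_benchmark**2 + u_calc**2)   # combined (quadrature) uncertainty

print(f"bias = {bias:+.4f} ± {u_combined:.4f}")
# The bias is well within one combined standard deviation, so the
# calculation is consistent with the benchmark model.
```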
Standardizing Benchmark Dose Calculations to Improve Science-Based Decisions in Human Health Assessments
Wignall, Jessica A.; Shapiro, Andrew J.; Wright, Fred A.; Woodruff, Tracey J.; Chiu, Weihsueh A.; Guyton, Kathryn Z.; Rusyn, Ivan
2014-01-01
Background: Benchmark dose (BMD) modeling computes the dose associated with a prespecified response level. While offering advantages over traditional points of departure (PODs), such as no-observed-adverse-effect-levels (NOAELs), BMD methods have lacked consistency and transparency in application, interpretation, and reporting in human health assessments of chemicals. Objectives: We aimed to apply a standardized process for conducting BMD modeling to reduce inconsistencies in model fitting and selection. Methods: We evaluated 880 dose–response data sets for 352 environmental chemicals with existing human health assessments. We calculated benchmark doses and their lower limits [10% extra risk, or change in the mean equal to 1 SD (BMD/L10/1SD)] for each chemical in a standardized way with prespecified criteria for model fit acceptance. We identified study design features associated with acceptable model fits. Results: We derived values for 255 (72%) of the chemicals. Batch-calculated BMD/L10/1SD values were significantly and highly correlated (R2 of 0.95 and 0.83, respectively, n = 42) with PODs previously used in human health assessments, with values similar to reported NOAELs. Specifically, the median ratio of BMDs10/1SD:NOAELs was 1.96, and the median ratio of BMDLs10/1SD:NOAELs was 0.89. We also observed a significant trend of increasing model viability with increasing number of dose groups. Conclusions: BMD/L10/1SD values can be calculated in a standardized way for use in health assessments on a large number of chemicals and critical effects. This facilitates the exploration of health effects across multiple studies of a given chemical or, when chemicals need to be compared, providing greater transparency and efficiency than current approaches. Citation: Wignall JA, Shapiro AJ, Wright FA, Woodruff TJ, Chiu WA, Guyton KZ, Rusyn I. 2014. Standardizing benchmark dose calculations to improve science-based decisions in human health assessments. Environ Health
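The benchmark-dose idea summarized above can be illustrated with the simplest quantal model: for a quantal linear model the extra risk is ER(d) = 1 − exp(−b·d), and the BMD is the dose at which ER reaches the prespecified benchmark response (10% here). The slope value below is invented for illustration; real assessments fit the model to dose-response data (e.g. with EPA's BMDS software) and also report the lower confidence limit, the BMDL.

```python
# Illustrative sketch of a benchmark dose (BMD) for a quantal linear model.
# The slope b is a made-up example value, not from the study above.
import math

def bmd_quantal_linear(b, bmr=0.10):
    """Dose at which extra risk ER(d) = 1 - exp(-b*d) reaches bmr."""
    return -math.log(1.0 - bmr) / b

b = 0.02  # hypothetical fitted slope, 1/(mg/kg-day)
print(f"BMD10 = {bmd_quantal_linear(b):.2f} mg/kg-day")
```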
Benchmark criticality experiments for fast fission configuration with high enriched nuclear fuel
International Nuclear Information System (INIS)
Sikorin, S.N.; Mandzik, S.G.; Polazau, S.A.; Hryharovich, T.K.; Damarad, Y.V.; Palahina, Y.A.
2014-01-01
Benchmark criticality experiments on a fast heterogeneous configuration with high enriched uranium (HEU) nuclear fuel were performed using the 'Giacint' critical assembly of the Joint Institute for Power and Nuclear Research - Sosny (JIPNR-Sosny) of the National Academy of Sciences of Belarus. The critical assembly core comprised fuel assemblies without a casing for the 34.8 mm wrench. Fuel assemblies contain 19 fuel rods of two types. The first type is metal uranium fuel rods with 90% enrichment in U-235; the second is uranium dioxide fuel rods with 36% enrichment in U-235. The total fuel rod length is 620 mm, and the active fuel length is 500 mm. The outer fuel rod diameter is 7 mm, the wall is 0.2 mm thick, and the fuel material diameter is 6.4 mm. The clad material is stainless steel. The side radial reflector consists of an inner layer of beryllium and an outer layer of stainless steel. The top and bottom axial reflectors are of stainless steel. An analysis of the experimental results, carried out by developing detailed calculation models and performing simulations of the different experiments, is presented. The sensitivity of the obtained results to the material specifications and the modeling details was examined. The analyses used the MCNP and MCU computer programs. This paper presents the experimental and analytical results. (authors)
OECD/NEA burnup credit criticality benchmark. Result of phase IIA
Energy Technology Data Exchange (ETDEWEB)
Takano, Makoto; Okuno, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1996-02-01
The report describes the final result of Phase IIA of the Burnup Credit Criticality Benchmark conducted by the OECD/NEA. In the Phase IIA benchmark problems, the effect of the axial burnup profile of PWR spent fuel on criticality (the end effect) has been studied. Axial profiles at 10, 30 and 50 GWd/t burnup have been considered. In total, 22 results from 18 institutes in 10 countries have been submitted. The calculated multiplication factors from the participants lay within a band of ±1% Δk. For irradiation up to 30 GWd/t, the end effect has been found to be less than 1.0% Δk. For the 50 GWd/t case, however, the effect is more than 4.0% Δk when both actinides and fission products (FPs) are taken into account, whereas it remains less than 1.0% Δk when only actinides are considered. The fission density data have indicated the importance of the end regions in the criticality safety analysis of spent fuel systems. (author)
Benchmark models and experimental data for a U(20) polyethylene-moderated critical system
Energy Technology Data Exchange (ETDEWEB)
Wetzel, Larry [Babcock & Wilcox Nuclear Operations Group Inc.; Busch, Robert D. [University of New Mexico, Albuquerque; Bowen, Douglas G [ORNL
2015-01-01
This work involves the analysis of recent experiments performed on the Aerojet General Nucleonics (AGN)-201M (AGN) polyethylene-moderated research reactor at the University of New Mexico (UNM). The experiments include 36 delayed critical (DC) configurations and 11 positive-period and rod-drop measurements (transient sequences). The Even Parity Neutron Transport (EVENT) radiation transport code was chosen to analyze these steady-state and time-dependent experimental configurations. The UNM AGN specifications provided in a benchmark calculation report (2007) were used to initiate AGN EVENT model development and to test the EVENT AGN calculation methodology. The results of the EVENT DC experimental analyses compared well with the experimental data; the average AGN EVENT calculation bias in k_eff is -0.0048% for the Legendre flux expansion order 11 (P_11) cases and +0.0119% for the P_13 cases. The EVENT transient analysis also compared well with the AGN experimental data with respect to predicting the reactor period and control rod worth values. This paper discusses the benchmark models used, the recent experimental configurations, and the EVENT experimental analysis.
Benchmark calculations on residue production within the EURISOL DS project; Part I: thin targets
David, J.C; Boudard, A; Doré, D; Leray, S; Rapp, B; Ridikas, D; Thiollière, N
Report on benchmark calculations on residue production in thin targets. Calculations were performed using MCNPX 2.5.0 coupled to a selection of reaction models. The results were compared to nuclide production cross-sections measured in GSI in inverse kinematics
Benchmark calculations on residue production within the EURISOL DS project; Part II: thick targets
David, J.-C; Boudard, A; Doré, D; Leray, S; Rapp, B; Ridikas, D; Thiollière, N
Benchmark calculations on residue production using MCNPX 2.5.0. Calculations were compared to mass-distribution data for 5 different elements measured at ISOLDE, and to specific activities of 28 radionuclides in different places along the thick target measured in Dubna.
JNC results of BFS-62-3A benchmark calculation (CRP: Phase 5)
International Nuclear Information System (INIS)
Ishikawa, M.
2004-01-01
The present work gives the results of JNC, Japan, for Phase 5 of the IAEA CRP benchmark problem (BFS-62-3A critical experiment). The analytical method of JNC is based on the nuclear data library JENDL-3.2 and the group constant set JFS-3-J3.2R, a 70-group, ABBN-type self-shielding factor table based on JENDL-3.2, with current-weighted multigroup transport cross-sections as effective cross-sections. The cell model for the BFS as-built tube and pellets was: (Case 1) a homogeneous model based on the IPPE definition; (Case 2) homogeneous atomic densities equivalent to JNC's heterogeneous calculation, used only to cross-check the adjusted correction factors; (Case 3) a heterogeneous model based on JNC's evaluation, a one-dimensional plate-stretch model with Tone's background cross-section method (CASUP code). The basic diffusion calculation was done in 18 groups with a three-dimensional Hex-Z model (the CITATION code), with isotropic diffusion coefficients (Cases 1 and 2) and Benoist's anisotropic diffusion coefficients (Case 3). For the sodium void reactivity, exact perturbation theory was applied to both the basic calculation and the correction calculations: an ultra-fine energy group correction (approx. 100,000 group constants below 50 keV, and ABBN-type 175-group constants with shielding factors above 50 keV) and a transport theory and mesh size correction in 18 groups for the three-dimensional Hex-Z model (the MINIHEX code based on the S4-P0 transport method, developed by JNC). The effective delayed neutron fraction in the reactivity scale was fixed at 0.00623 by IPPE evaluation. The analytical results for the criticality values and the sodium void reactivity coefficient obtained by JNC are presented. JNC made a cross-check of the homogeneous model and the adjusted correction factors submitted by IPPE, and confirmed they are consistent. The JNC standard system showed quite satisfactory analytical results for the criticality and the sodium void reactivity of the BFS-62-3A experiment. JNC calculated the cross-section sensitivity coefficients of BFS
Benchmark calculations for evaluation methods of gas volumetric leakage rate
International Nuclear Information System (INIS)
Asano, R.; Aritomi, M.; Matsuzaki, M.
1998-01-01
The containment function of radioactive material transport casks is essential for safe transportation, to prevent radioactive materials from being released into the environment. Regulations such as the IAEA standard determine the limit of radioactivity that may be released. Since it is not practical for leakage tests to measure the radioactivity release from a package directly, gas volumetric leakage rates are proposed in the ANSI N14.5 and ISO standards. In our previous work, gas volumetric leakage rates for several kinds of gas from various leaks were measured, and two evaluation methods, 'a simple evaluation method' and 'a strict evaluation method', were proposed based on the results. The simple evaluation method considers the friction loss of laminar flow with the expansion effect. The strict evaluation method considers an exit loss in addition to the friction loss. In this study, four worked examples were completed for an assumed large spent fuel transport cask (Type B package) with a wet or dry cavity and at three transport conditions: normal transport with intact fuel or failed fuel, and an accident in transport. The standard leakage rates and the criteria for two kinds of leak test were calculated for each example by each evaluation method. The following observations are made based upon the calculations and evaluations: the choked flow model of the ANSI method greatly overestimates the test criteria; the laminar flow models of both the ANSI and ISO methods slightly overestimate the test criteria; the above two results are within the design margin for the ordinary transport condition, and all methods are useful for the evaluation; for severe conditions such as failed fuel transportation, attention should be paid when applying the choked flow model of the ANSI method. (authors)
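The kind of laminar-flow leak evaluation discussed above can be sketched with the textbook Hagen-Poiseuille relation for isothermal compressible flow through a single cylindrical capillary. This is a generic model, not the exact ANSI N14.5/ISO correlation from the paper, and the leak geometry below is invented for illustration.

```python
# Sketch: volumetric leak rate (referenced to p_ref) through a capillary leak,
# using the isothermal compressible Hagen-Poiseuille relation:
#   Q_ref = pi * d^4 * (p_up^2 - p_down^2) / (256 * mu * L * p_ref)
# Generic textbook model with an invented leak geometry, not the paper's method.
import math

def laminar_leak_rate(d, length, p_up, p_down, mu, p_ref):
    """d, length in m; pressures in Pa; mu in Pa*s. Returns m^3/s at p_ref."""
    return math.pi * d**4 * (p_up**2 - p_down**2) / (256.0 * mu * length * p_ref)

# Hypothetical 10-micron leak, 1 cm long, 2 atm upstream, 1 atm downstream, air:
q = laminar_leak_rate(d=10e-6, length=1e-2, p_up=2.0e5, p_down=1.0e5,
                      mu=1.8e-5, p_ref=1.0e5)
print(f"reference leak rate ≈ {q:.3e} m^3/s")
```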
BENCHMARKING UPGRADED HOTSPOT DOSE CALCULATIONS AGAINST MACCS2 RESULTS
Energy Technology Data Exchange (ETDEWEB)
Brotherton, Kevin
2009-04-30
The radiological consequence of interest for a documented safety analysis (DSA) is the centerline Total Effective Dose Equivalent (TEDE) incurred by the Maximally Exposed Offsite Individual (MOI) evaluated at the 95th percentile consequence level. An upgraded version of HotSpot (Version 2.07) has been developed with the capabilities to read site meteorological data and perform the necessary statistical calculations to determine the 95th percentile consequence result. These capabilities should allow HotSpot to join MACCS2 (Version 1.13.1) and GENII (Version 1.485) as radiological consequence toolbox codes in the Department of Energy (DOE) Safety Software Central Registry. Using the same meteorological data file, scenarios involving a one curie release of 239Pu were modeled in both HotSpot and MACCS2. Several sets of release conditions were modeled, and the results compared. In each case, input parameter specifications for each code were chosen to match one another as much as the codes would allow. The results from the two codes are in excellent agreement. Slight differences observed in results are explained by algorithm differences.
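The 95th-percentile consequence statistic mentioned above can be sketched as follows: given one centerline TEDE per meteorological trial (the values below are made-up placeholders, not HotSpot or MACCS2 output), report the 95th percentile of that distribution. The ceil-rank convention on sorted data used here is one common choice; codes may differ in interpolation details.

```python
# Sketch: 95th-percentile dose over meteorological trials (placeholder data).
import math

def percentile_95(doses):
    """95th percentile of a list of per-trial doses (ceil-rank convention)."""
    ordered = sorted(doses)
    rank = math.ceil(0.95 * len(ordered))  # 1-based rank of the 95th percentile
    return ordered[rank - 1]

doses_rem = [0.1 * i for i in range(1, 101)]  # 100 hypothetical trial doses (rem)
print(percentile_95(doses_rem))
```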
Benchmark Calculations for Electron Collisions with Complex Atoms
International Nuclear Information System (INIS)
Zatsarinny, Oleg; Bartschat, Klaus
2014-01-01
The B-spline R-matrix (BSR) approach [1,2] is based on the non-perturbative close-coupling method. As such it is, in principle, based on an exact expansion of the solution of the time-independent Schrödinger equation, as an infinite sum/integral of N-electron target states coupled to the wave function of the scattering projectile. The N-electron target states, in turn, can in principle be calculated with almost arbitrary accuracy using sufficiently large configuration-interaction expansions and the correct interaction Hamiltonian. In practice, of course, the infinite expansions have to be cut off in some way and the exact Hamiltonian may not be available. In the collision part of the BSR method, the integral over the ionization continuum and the infinite sum over high-lying Rydberg states are replaced by a finite sum over square-integrable pseudo-states. Also, a number of inner shells are treated as (partially) inert, i.e., a minimum number of electrons is required in those subshells.
Calculation of Critical Temperatures by Empirical Formulae
Directory of Open Access Journals (Sweden)
Trzaska J.
2016-06-01
The paper presents formulas used to calculate the critical temperatures of structural steels. Equations that allow calculating the temperatures Ac1, Ac3, Ms and Bs were elaborated based on the chemical composition of the steel, using the multiple regression method. Particular attention was paid to the collection of the experimental data required to calculate the regression coefficients, including the preparation of the data for calculation. The empirical data set included more than 500 chemical compositions of structural steel and was prepared based on information available in the literature on the subject.
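The multiple-regression approach described above amounts to fitting a linear formula such as Ac1 = b0 + b1·C + b2·Mn + ... by least squares over a set of measured compositions. The five data points below are fabricated for illustration; the paper's formulas were derived from 500+ real compositions.

```python
# Toy least-squares fit of a linear critical-temperature formula.
# Data and coefficients are invented for the example, not from the paper.
import numpy as np

# Columns: C, Mn, Si (wt%); target: measured Ac1 temperature (deg C)
X = np.array([[0.20, 0.70, 0.25],
              [0.35, 0.80, 0.30],
              [0.45, 0.60, 0.20],
              [0.15, 1.20, 0.35],
              [0.55, 0.90, 0.40]])
y = np.array([727.0, 722.0, 720.0, 715.0, 718.0])

A = np.column_stack([np.ones(len(X)), X])     # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares regression coefficients

def ac1(c, mn, si):
    """Predict Ac1 (deg C) from composition with the fitted linear formula."""
    return float(coef @ np.array([1.0, c, mn, si]))
```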
Benchmarking Benchmarks
D.C. Blitz (David)
2011-01-01
Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.
Evaluation and validation of criticality codes for fuel dissolver calculations
International Nuclear Information System (INIS)
Santamarina, A.; Smith, H.J.; Whitesides, G.E.
1991-01-01
During the past ten years an OECD/NEA Criticality Working Group has examined the validity of criticality safety computational methods. International calculation tools which were shown to be valid in systems for which experimental data existed were demonstrated to be inadequate when extrapolated to fuel dissolver media. A theoretical study of the main physical parameters involved in fuel dissolution calculations was performed, i.e. the range of moderation, the variation of pellet size and the fuel double-heterogeneity effect. The APOLLO/PIC method, developed to treat this latter effect, permits us to supply the actual reactivity variation with pellet dissolution and to propose international reference values. The disagreement among contributors' calculations was analyzed through a neutron balance breakdown, based on three-group microscopic reaction rates. The results pointed out that the fast and resonance nuclear data in criticality codes are not sufficiently reliable. Moreover, the neutron balance analysis emphasized the inadequacy of the standard self-shielding formalism to account for 238U resonance mutual self-shielding in the pellet-fissile liquor interaction. The benchmark exercise has resolved a potentially dangerous inadequacy in dissolver calculations. (author)
DEFF Research Database (Denmark)
Grandjean, Philippe; Budtz-Joergensen, Esben
2013-01-01
follow-up of a Faroese birth cohort were used. Serum PFC concentrations were measured at age 5 years, and serum antibody concentrations against tetanus and diphtheria toxoids were obtained at age 7 years. Benchmark dose results were calculated in terms of serum concentrations for 431 children...
Criticality criteria for submissions based on calculations
International Nuclear Information System (INIS)
Burgess, M.H.
1975-06-01
Calculations used in criticality clearances are subject to errors from various sources, and allowance must be made for these errors in assessing the safety of a system. A simple set of guidelines is defined, drawing attention to each source of error, and recommendations as to its application are made. (author)
The fifth Atomic Energy Research dynamic benchmark calculation with HEXTRAN-SMABRE
International Nuclear Information System (INIS)
Hämäläinen, Anitta
1998-01-01
The fifth Atomic Energy Research dynamic benchmark is the first Atomic Energy Research benchmark for the coupling of thermohydraulic codes and three-dimensional reactor dynamic core models. At VTT, HEXTRAN 2.7 is used for the core dynamics and SMABRE 4.6 as the thermohydraulic model for the primary and secondary loops. The plant model for SMABRE is based mainly on two input models: the Loviisa model and a standard WWER-440/213 plant model. The primary circuit includes six separate loops, with 505 nodes and 652 junctions in total. The reactor pressure vessel is divided into six parallel channels. In the HEXTRAN calculation, 1/6 symmetry is used in the core. In the sequence of a main steam header break at the hot standby state, the liquid temperature is decreased symmetrically at the core inlet, which leads to a return to power. In the benchmark, no isolations of the steam generators are assumed, and the maximum core power is about 38% of the nominal power four minutes after the break opening in the HEXTRAN-SMABRE calculation. Due to boric acid in the high pressure safety injection water, the power finally starts to decrease. The break flow is pure steam in the HEXTRAN-SMABRE calculation during the whole transient, even though the swell levels in the steam generators are very high due to flashing. Because of sudden peaks in the preliminary results for the steam generator heat transfer, the SMABRE drift-flux model was modified. The new model is a simplified version of the EPRI correlation based on test data. The modified correlation behaves smoothly. In the calculations the nuclear data is based on the ENDF/B-IV library and has been evaluated with the CASMO-HEX code. The importance of the nuclear data was illustrated by repeating the benchmark calculation using three different data sets. Optimal extensive data valid from hot to cold conditions were not available for all types of fuel enrichments needed in this benchmark. (Author)
OECD/NEA benchmark for time-dependent neutron transport calculations without spatial homogenization
Energy Technology Data Exchange (ETDEWEB)
Hou, Jason, E-mail: jason.hou@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Ivanov, Kostadin N. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Boyarinov, Victor F.; Fomichenko, Peter A. [National Research Centre “Kurchatov Institute”, Kurchatov Sq. 1, Moscow (Russian Federation)
2017-06-15
Highlights: • A time-dependent homogenization-free neutron transport benchmark was created. • The first phase, known as the kinetics phase, is described in this work. • Preliminary results for selected 2-D transient exercises are presented. - Abstract: A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for time-dependent neutron transport calculations without spatial homogenization has been established in order to facilitate the development and assessment of numerical methods for solving the space-time neutron kinetics equations. The benchmark has been named the OECD/NEA C5G7-TD benchmark, and was later extended with three consecutive phases, each corresponding to one modelling stage of the multi-physics transient analysis of the nuclear reactor core. This paper provides a detailed introduction to the benchmark specification of Phase I, known as the “kinetics phase”, including the geometry description, supporting neutron transport data, transient scenarios in both two-dimensional (2-D) and three-dimensional (3-D) configurations, and the expected output parameters from the participants. Also presented are preliminary results for the initial-state 2-D core and selected transient exercises, obtained using the Monte Carlo method and the Surface Harmonic Method (SHM), respectively.
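As a drastically reduced illustration of the space-time kinetics equations such benchmarks target, the space dependence can be dropped entirely, leaving one-delayed-group point kinetics. The sketch below integrates that reduced system with explicit Euler; all parameter values are generic placeholders, not benchmark data.

```python
def point_kinetics(rho, beta=0.0065, Lambda=2e-5, lam=0.08,
                   t_end=1.0, dt=1e-5):
    """One-delayed-group point kinetics, explicit Euler integration.

    dn/dt = (rho - beta)/Lambda * n + lam * C
    dC/dt = beta/Lambda * n - lam * C

    Starts from the critical equilibrium n = 1, C = beta/(lam*Lambda),
    then applies a constant reactivity step rho. Returns n(t_end).
    """
    n = 1.0
    C = beta / (lam * Lambda)          # precursor equilibrium
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lambda * n + lam * C) * dt
        dC = (beta / Lambda * n - lam * C) * dt
        n += dn
        C += dC
    return n
```

A positive step below prompt critical (rho < beta) gives a prompt jump followed by a slow rise; a negative step gives a decay — the zero-dimensional analogue of the "return to power" behaviour the spatial exercises resolve cell by cell.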
A Critical Thinking Benchmark for a Department of Agricultural Education and Studies
Perry, Dustin K.; Retallick, Michael S.; Paulsen, Thomas H.
2014-01-01
Due to an ever-changing world where technology seemingly provides endless answers, today's higher education students must master a new skill set reflecting an emphasis on critical thinking, problem solving, and communication. The purpose of this study was to establish a departmental benchmark for critical thinking abilities of students majoring…
Proposal of a benchmark for core burnup calculations for a VVER-1000 reactor core
International Nuclear Information System (INIS)
Loetsch, T.; Khalimonchuk, V.; Kuchin, A.
2009-01-01
In the framework of a project supported by the German BMU, the code DYN3D is to be further validated and verified. During this work, the lack of a benchmark on core burnup calculations for VVER-1000 reactors was noticed. Such a benchmark is useful for validating and verifying the whole package of codes and data libraries for reactor physics calculations, including fuel assembly modelling, fuel assembly data preparation, few-group data parametrisation and reactor core modelling. The proposed benchmark specifies the core loading patterns of burnup cycles for a VVER-1000 reactor core as well as a set of operational data such as load follow, boron concentration in the coolant, cycle length, measured reactivity coefficients and power density distributions. The reactor core characteristics chosen for comparison and the first results obtained during the work with the reactor physics code DYN3D are presented. This work continues the efforts of the projects mentioned to estimate the accuracy of calculated characteristics of VVER-1000 reactor cores. In addition, the codes used for reactor physics calculations of safety-related reactor core characteristics should be validated and verified for the cases in which they are to be used. This is significant for safety-related evaluations and assessments carried out in the framework of licensing and supervision procedures in the field of reactor physics. (authors)
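The burnup cycles such a benchmark specifies ultimately rest on depletion arithmetic. As a minimal sketch, and emphatically not the DYN3D methodology, a single fissile nuclide burning out under a constant flux follows a simple exponential; the cross section and flux below are round illustrative numbers.

```python
import math

def deplete_nuclide(n0, sigma_a_barns, flux, days):
    """Single-nuclide burnout under constant one-group flux:

        N(t) = N0 * exp(-sigma_a * phi * t)

    n0            : initial number density (any units)
    sigma_a_barns : one-group absorption cross section [barns]
    flux          : scalar flux [n/cm^2/s]
    days          : irradiation time [days]
    """
    sigma_a = sigma_a_barns * 1e-24      # barns -> cm^2
    t = days * 86400.0                   # days  -> seconds
    return n0 * math.exp(-sigma_a * flux * t)
```

Real cycle calculations couple many such chains to the flux solution at each burnup step; this fragment only shows the unit bookkeeping behind a single chain link.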
EA-MC Neutronic Calculations on IAEA ADS Benchmark 3.2
Energy Technology Data Exchange (ETDEWEB)
Dahlfors, Marcus [Uppsala Univ. (Sweden). Dept. of Radiation Sciences; Kadi, Yacine [CERN, Geneva (Switzerland). Emerging Energy Technologies
2006-01-15
The neutronics and the transmutation properties of the IAEA ADS benchmark 3.2 setup, the 'Yalina' experiment or ISTC project B-70, have been studied through an extensive set of 3-D Monte Carlo calculations at CERN. The simulations were performed with the state-of-the-art computer code package EA-MC, developed at CERN. The calculational approach is outlined and the results are presented in accordance with the guidelines given in the benchmark description. A variety of experimental conditions and parameters are examined; three different fuel rod configurations and three types of neutron sources are applied to the system. Reactivity change effects introduced by removal of fuel rods in both central and peripheral positions are also computed. Irradiation samples located in a total of 8 geometrical positions are examined. Calculations of capture reaction rates in {sup 129}I, {sup 237}Np and {sup 243}Am samples and of fission reaction rates in {sup 235}U, {sup 237}Np and {sup 243}Am samples are presented. Simulated neutron flux densities and energy spectra as well as spectral indices inside experimental channels are also given according to benchmark specifications. Two different nuclear data libraries, JAR-95 and JENDL-3.2, are applied in the calculations.
Calculational benchmark comparisons for a low sodium void worth actinide burner core design
International Nuclear Information System (INIS)
Hill, R.N.; Kawashima, M.; Arie, K.; Suzuki, M.
1992-01-01
Recently, a number of low void worth core designs with non-conventional core geometries have been proposed. Since these designs lack a good experimental and computational database, benchmark calculations are useful for the identification of possible biases in performance characteristics predictions. In this paper, a simplified benchmark model of a metal-fueled, low void worth actinide burner design is detailed, and two independent neutronic performance evaluations are compared. Calculated performance characteristics are evaluated for three spatially uniform compositions (fresh uranium/plutonium, batch-averaged uranium/transuranic, and batch-averaged uranium/transuranic with fission products) and a regional depleted distribution obtained from a benchmark depletion calculation. For each core composition, the flooded and voided multiplication factors, power peaking factor, sodium void worth (and its components), flooded Doppler coefficient, and control rod worth predictions are compared. In addition, the burnup swing, average discharge burnup, peak linear power, and fresh fuel enrichment are calculated for the depletion case. In general, remarkably good agreement is observed between the evaluations. The most significant difference in predicted performance characteristics is a 0.3-0.5% Δk/(kk') bias in the sodium void worth. Significant differences in the transmutation rate of higher actinides are also observed; however, these differences do not cause discrepancies in the performance predictions.
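The Δk/(kk') unit quoted for the void worth bias is simply the reactivity difference between two eigenvalue states. A one-line sketch (the eigenvalues below are invented, not the paper's results):

```python
def void_worth(k_flooded, k_voided):
    """Reactivity change between flooded and voided states,
    expressed in Delta-k/(k*k') units:

        rho_void = (k_voided - k_flooded) / (k_flooded * k_voided)

    Positive means voiding adds reactivity.
    """
    return (k_voided - k_flooded) / (k_flooded * k_voided)
```

A 0.3-0.5% bias in this quantity between two independent code systems is thus a direct disagreement in predicted reactivity, not in any derived figure of merit.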
International Nuclear Information System (INIS)
Carew, John F.; Finch, Stephen J.; Lois, Lambros
2003-01-01
The calculated >1-MeV pressure vessel fluence is used to determine the fracture toughness and integrity of the reactor pressure vessel. It is therefore of the utmost importance to ensure that the fluence prediction is accurate and unbiased. In practice, this assurance is provided by comparing the predictions of the calculational methodology with an extensive set of accurate benchmarks. A benchmarking database is used to provide an estimate of the overall average measurement-to-calculation (M/C) bias in the calculations. This average is used as an ad hoc multiplicative adjustment to the calculations to correct for the observed calculational bias. However, this average provides a well-defined and valid adjustment of the fluence only if the M/C data are homogeneous; i.e., the data are statistically independent and there is no correlation between subsets of M/C data. Typically, the identification of correlations between the errors in the database M/C values is difficult because the correlation is of the same magnitude as the random errors in the M/C data and varies substantially over the database. In this paper, an evaluation of a reactor dosimetry benchmark database is performed to determine the statistical validity of the adjustment to the calculated pressure vessel fluence. Physical mechanisms that could potentially introduce a correlation between the subsets of M/C ratios are identified and included in a multiple regression analysis of the M/C data. Rigorous statistical criteria are used to evaluate the homogeneity of the M/C data and determine the validity of the adjustment. For the database evaluated, the M/C data are found to be strongly correlated with dosimeter response threshold energy and dosimeter location (e.g., cavity versus in-vessel). It is shown that, because of the inhomogeneity in the M/C data, for this database the benchmark data do not provide a valid basis for adjusting the pressure vessel fluence. The statistical criteria and methods employed in
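The homogeneity test described above can be sketched as an ordinary least-squares regression of ln(M/C) on candidate covariates; if the slope coefficients are significantly nonzero, the data are inhomogeneous and a single average bias adjustment is invalid. The covariates and data shapes below are illustrative stand-ins, not the paper's database or its exact statistical procedure.

```python
import numpy as np

def mc_bias_regression(mc_ratios, threshold_energy, in_cavity):
    """Least-squares fit of  ln(M/C) = b0 + b1*E_thr + b2*cavity.

    mc_ratios        : measured/calculated dosimeter responses
    threshold_energy : dosimeter response threshold energies [MeV]
    in_cavity        : 1.0 for cavity dosimeters, 0.0 for in-vessel

    Returns (b0, b1, b2). Nonzero b1/b2 signal correlated
    (inhomogeneous) M/C data.
    """
    y = np.log(np.asarray(mc_ratios, dtype=float))
    X = np.column_stack([np.ones_like(y),
                         np.asarray(threshold_energy, dtype=float),
                         np.asarray(in_cavity, dtype=float)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

In a full analysis each coefficient would be accompanied by a significance test; here only the point estimates are returned.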
Criticality calculation method for mixer-settlers
International Nuclear Information System (INIS)
Gonda, Kozo; Aoyagi, Haruki; Nakano, Ko; Kamikawa, Hiroshi.
1980-01-01
A new criticality calculation code, MACPEX, has been developed to evaluate and manage the criticality of the process in extractors of the mixer-settler type. MACPEX can perform a combined calculation with the PUREX process calculation code MIXSET to obtain the neutron flux and the effective multiplication constant in the mixer-settlers. MACPEX solves the one-dimensional diffusion equation by the explicit difference method and the standard source-iteration technique. The characteristics of MACPEX are as follows. 1) Group constants of 4 energy groups are provided for the 239Pu-H2O solution, water, polyethylene and SUS 28. 2) The group constants of the 239Pu-H2O solution are given by functional formulae of the plutonium concentration, which is less than 50 g/l. 3) Two boundary conditions, vacuum and reflective, are available in this code. 4) The geometrical bucklings can be calculated for a given energy group and/or region by using the three-dimensional neutron flux profiles obtained with CITATION. 5) A buckling correction search can be carried out in order to obtain a desired keff. (author)
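The solution scheme described for MACPEX (one-dimensional diffusion, finite differencing, standard source iteration) can be illustrated with a one-group bare-slab sketch. This is not the MACPEX code, and the cross sections are arbitrary; it only shows the k-eigenvalue iteration the abstract names.

```python
import numpy as np

def slab_keff(D, sig_a, nu_sigf, width, nx=100, iters=500):
    """One-group k-eigenvalue of a bare slab by finite differences
    and standard source iteration, with zero-flux boundaries.

    Analytic fundamental mode for comparison:
        k = nu_sigf / (sig_a + D * (pi/width)**2)
    """
    h = width / (nx + 1)                       # interior node spacing
    # Tridiagonal loss operator: -D d2/dx2 + sig_a
    main = np.full(nx, 2.0 * D / h**2 + sig_a)
    off = np.full(nx - 1, -D / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    phi = np.ones(nx)
    k = 1.0
    for _ in range(iters):                     # outer (source) iteration
        fission_old = nu_sigf * phi
        phi = np.linalg.solve(A, fission_old / k)
        k = k * (nu_sigf * phi).sum() / fission_old.sum()
    return k
```

With a fine enough mesh the iterated k reproduces the analytic one-group slab result, which is the standard sanity check for this class of solver.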
CANISTER HANDLING FACILITY CRITICALITY SAFETY CALCULATIONS
International Nuclear Information System (INIS)
C.E. Sanders
2005-01-01
This design calculation revises and updates the previous criticality evaluation for the canister handling, transfer and staging operations to be performed in the Canister Handling Facility (CHF) documented in BSC [Bechtel SAIC Company] 2004 [DIRS 167614]. The purpose of the calculation is to demonstrate that the handling operations of canisters performed in the CHF meet the nuclear criticality safety design criteria specified in the ''Project Design Criteria (PDC) Document'' (BSC 2004 [DIRS 171599], Section 4.9.2.2), the nuclear facility safety requirement in ''Project Requirements Document'' (Canori and Leitner 2003 [DIRS 166275], p. 4-206), the functional/operational nuclear safety requirement in the ''Project Functional and Operational Requirements'' document (Curry 2004 [DIRS 170557], p. 75), and the functional nuclear criticality safety requirements described in the ''Canister Handling Facility Description Document'' (BSC 2004 [DIRS 168992], Sections 3.1.1.3.4.13 and 3.2.3). Specific scope of work contained in this activity consists of updating the Category 1 and 2 event sequence evaluations as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2004 [DIRS 167268], Section 7). The CHF is limited in throughput capacity to handling sealed U.S. Department of Energy (DOE) spent nuclear fuel (SNF) and high-level radioactive waste (HLW) canisters, defense high-level radioactive waste (DHLW), naval canisters, multicanister overpacks (MCOs), vertical dual-purpose canisters (DPCs), and multipurpose canisters (MPCs) (if and when they become available) (BSC 2004 [DIRS 168992], p. 1-1). It should be noted that the design and safety analyses of the naval canisters are the responsibility of the U.S. Department of the Navy (Naval Nuclear Propulsion Program) and will not be included in this document. In addition, this calculation is valid for the current design of the CHF and may not reflect the ongoing design evolution of the facility
Energy Technology Data Exchange (ETDEWEB)
William Anderson; James Tulenko; Bradley Rearden; Gary Harms
2008-09-11
The nuclear industry interest in advanced fuel and reactor design often drives towards fuel with uranium enrichments greater than 5 wt% 235U. Unfortunately, little data exists, in the form of reactor physics and criticality benchmarks, for uranium enrichments ranging between 5 and 10 wt% 235U. The primary purpose of this project is to provide benchmarks for fuel similar to what may be required for advanced light water reactors (LWRs). These experiments will ultimately provide additional information for application to the criticality-safety bases for commercial fuel facilities handling greater than 5 wt% 235U fuel.
Criticality calculations with MCNP(TM): A primer
International Nuclear Information System (INIS)
Harmon, C.D. II; Busch, R.D.; Briesmeister, J.F.; Forster, R.A.
1994-01-01
With the closure of many experimental facilities, the nuclear criticality safety analyst increasingly is required to rely on computer calculations to identify safe limits for the handling and storage of fissile materials. However, in many cases, the analyst has little experience with the specific codes available at his/her facility. This primer will help you, the analyst, understand and use the MCNP Monte Carlo code for nuclear criticality safety analyses. It assumes that you have a college education in a technical field. There is no assumption of familiarity with Monte Carlo codes in general or with MCNP in particular. Appendix A gives an introduction to Monte Carlo techniques. The primer is designed to teach by example, with each example illustrating two or three features of MCNP that are useful in criticality analyses. Beginning with a Quickstart chapter, the primer gives an overview of the basic requirements for MCNP input and allows you to run a simple criticality problem with MCNP. This chapter is not designed to explain either the input or the MCNP options in detail; rather, it introduces basic concepts that are further explained in following chapters. Each chapter begins with a list of basic objectives that identify the goal of the chapter, and a list of the individual MCNP features that are covered in detail in that chapter's example problems. It is expected that on completion of the primer you will be comfortable using MCNP in criticality calculations and will be capable of handling 80 to 90 percent of the situations that normally arise in a facility. The primer provides a set of basic input files that you can selectively modify to fit the particular problem at hand.
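To make the Monte Carlo idea behind such criticality calculations concrete without reproducing any MCNP input, here is a toy analog estimate of k-infinity for an infinite homogeneous medium. The cross sections are invented, and this bears no relation to MCNP's actual geometry tracking or statistics; it only illustrates sampling neutron fates to estimate a multiplication factor.

```python
import random

def mc_k_infinite(nu=2.5, sig_f=0.05, sig_a=0.10,
                  histories=200_000, seed=1):
    """Analog Monte Carlo estimate of k-infinity for an infinite
    homogeneous medium with no leakage.

    Each source neutron is eventually absorbed; it causes fission
    with probability sig_f/sig_a, producing nu new neutrons.
    Expected answer: k = nu * sig_f / sig_a.
    """
    rng = random.Random(seed)        # fixed seed for reproducibility
    p_fission = sig_f / sig_a
    produced = 0.0
    for _ in range(histories):
        if rng.random() < p_fission:
            produced += nu
    return produced / histories      # neutrons produced per neutron lost
```

Production codes add geometry, continuous-energy physics, and cycle-based source convergence on top of this same counting principle.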
Criticality Calculations with MCNP6 - Practical Lectures
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3); Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3); Alwin, Jennifer Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3)
2016-11-29
These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The following are the lecture topics: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: Develop an input model for MCNP; Describe how cross section data impact Monte Carlo and deterministic codes; Describe the importance of validation of computer codes and how it is accomplished; Describe the methodology supporting Monte Carlo codes and deterministic codes; Describe pitfalls of Monte Carlo calculations; Discuss the strengths and weaknesses of Monte Carlo and discrete ordinates codes. The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present; in the context of these limitations, identify a fissile system for which a diffusion theory solution would be adequate.
Energy Technology Data Exchange (ETDEWEB)
Kahler, A.C.; Herman, M.; Kahler,A.C.; MacFarlane,R.E.; Mosteller,R.D.; Kiedrowski,B.C.; Frankle,S.C.; Chadwick,M.B.; McKnight,R.D.; Lell,R.M.; Palmiotti,G.; Hiruta,H.; Herman,M.; Arcilla,R.; Mughabghab,S.F.; Sublet,J.C.; Trkov,A.; Trumbull,T.H.; Dunn,M.
2011-12-01
The ENDF/B-VII.1 library is the latest revision to the United States Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423, including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., 'ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data,' Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium reflected {sup 235}U and {sup 239}Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten, are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as {sup 236}U, {sup 238,242}Pu and {sup 241,243}Am capture in fast systems.
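Suite-wide performance of a data library against such a benchmark handbook is typically condensed into calculated-to-experimental (C/E) eigenvalue statistics. A minimal sketch with illustrative numbers, not actual ENDF/B-VII.1 results:

```python
import statistics

def ce_summary(k_calculated, k_experimental):
    """Per-assembly C/E eigenvalue ratios plus their mean and
    sample standard deviation; k_experimental is usually ~1.0
    for a critical benchmark configuration."""
    ce = [c / e for c, e in zip(k_calculated, k_experimental)]
    return statistics.mean(ce), statistics.stdev(ce)
```

A mean C/E near unity with a small spread, category by category (bare metal, solution, lattice, ...), is what "accurately calculated" means in this context.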
Energy Technology Data Exchange (ETDEWEB)
Kahler, A. [Los Alamos National Laboratory (LANL); Macfarlane, R E [Los Alamos National Laboratory (LANL); Mosteller, R D [Los Alamos National Laboratory (LANL); Kiedrowski, B C [Los Alamos National Laboratory (LANL); Frankle, S C [Los Alamos National Laboratory (LANL); Chadwick, M. B. [Los Alamos National Laboratory (LANL); Mcknight, R D [Argonne National Laboratory (ANL); Lell, R M [Argonne National Laboratory (ANL); Palmiotti, G [Idaho National Laboratory (INL); Hiruta, h [Idaho National Laboratory (INL); Herman, Micheal W [Brookhaven National Laboratory (BNL); Arcilla, r [Brookhaven National Laboratory (BNL); Mughabghab, S F [Brookhaven National Laboratory (BNL); Sublet, J C [Culham Science Center, Abington, UK; Trkov, A. [Jozef Stefan Institute, Slovenia; Trumbull, T H [Knolls Atomic Power Laboratory; Dunn, Michael E [ORNL
2011-01-01
The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423, including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium reflected (235)U and (239)Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten, are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as (236)U, (238,242)Pu and (241,243)Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical
International Nuclear Information System (INIS)
Briggs, J.B.; Nouri, A.; Dean, V.A.F.
2004-01-01
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) -- Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'. The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. The Handbook is being used extensively for validation of criticality safety methodologies and nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. (author)
Validation of JENDL-3.3 by criticality benchmark testing
International Nuclear Information System (INIS)
Takano, Hideki; Nakagawa, Tsuneo
2001-01-01
In the thermal uranium cores, the keff values of STACY, TRACY and JRR-4, overestimated with JENDL-3.2, were improved significantly with JENDL-3.3, decreasing by about 0.6%. This is due to modification of the fission spectrum and the thermal fission cross section of 235U from JENDL-3.2 to JENDL-3.3. For the uranium fast cores, the discrepancies in keff values between JENDL-3.2 and 3.3 were very small. In the thermal Pu cores of TCA, the keff values calculated with JENDL-3.3 were in good agreement with the experimental values. For the Pu fuel cores of ZPPR-9 and FCA-XVII, the keff values calculated with JENDL-3.3 became about 0.2% larger than those with JENDL-3.2. In small fast cores with U-233 fuel, the keff values overestimated with JENDL-3.2 were improved considerably with JENDL-3.3, due to reevaluation of the U-233 fission cross sections in the high-energy region. (author)
COVE 2A Benchmarking calculations using NORIA; Yucca Mountain Site Characterization Project
Energy Technology Data Exchange (ETDEWEB)
Carrigan, C.R.; Bixler, N.E.; Hopkins, P.L.; Eaton, R.R.
1991-10-01
Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs.
Primer for criticality calculations with DANTSYS
International Nuclear Information System (INIS)
Busch, R.D.
1996-01-01
With the closure of many experimental facilities, the nuclear criticality safety analyst is increasingly required to rely on computer calculations to identify safe limits for the handling and storage of fissile materials. However, in many cases, the analyst has little experience with the specific codes available at his or her facility. Typically, two types of codes are available: deterministic codes such as ANISN or DANTSYS that solve an approximate model exactly and Monte Carlo Codes such as KENO or MCNP that solve an exact model approximately. Often, the analyst feels that the deterministic codes are too simple and will not provide the necessary information, so most modeling uses Monte Carlo methods. This sometimes means that hours of effort are expended to produce results available in minutes from deterministic codes. A substantial amount of reliable information on nuclear systems can be obtained using deterministic methods if the user understands their limitations. To guide criticality specialists in this area, the Nuclear Criticality Safety Group at the University of New Mexico in cooperation with the Radiation Transport Group at Los Alamos National Laboratory has designed a primer to help the analyst understand and use the DANTSYS deterministic transport code for nuclear criticality safety analyses. (DANTSYS is the name of a suite of codes that users more commonly know as ONEDANT, TWODANT, TWOHEX, and THREEDANT.) It assumes a college education in a technical field, but there is no assumption of familiarity with neutronics codes in general or with DANTSYS in particular. The primer is designed to teach by example, with each example illustrating two or three DANTSYS features useful in criticality analyses
Subgroup Benchmark Calculations for the Intra-Pellet Nonuniform Temperature Cases
Energy Technology Data Exchange (ETDEWEB)
Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jung, Yeon Sang [Seoul National Univ. (Korea, Republic of); Liu, Yuxuan [Univ. of Michigan, Ann Arbor, MI (United States); Joo, Han Gyu [Seoul National Univ. (Korea, Republic of)
2016-08-01
A benchmark suite has been developed by Seoul National University (SNU) for intra-pellet nonuniform temperature distribution cases, based on practical temperature profiles according to the thermal power levels. Though a new subgroup capability for nonuniform temperature distributions was implemented in MPACT, no validation calculation had been performed for the new capability. This study focuses on benchmarking the new capability through a code-to-code comparison. Two continuous-energy Monte Carlo codes, McCARD and CE-KENO, are engaged in obtaining reference solutions, and the MPACT results are compared to those of the SNU code nTRACER, which uses a similar cross-section library and subgroup method to obtain self-shielded cross sections.
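A commonly used idealization of the intra-pellet temperature shape that such benchmarks discretize is a parabola, which holds for uniform volumetric heating with constant conductivity. This sketch is a generic textbook approximation, not the SNU benchmark's specified power-dependent profiles.

```python
def pellet_temperature(r, r_pellet, t_surface, t_center):
    """Parabolic radial temperature profile in a fuel pellet,
    valid for uniform volumetric heating and constant conductivity:

        T(r) = T_s + (T_c - T_s) * (1 - (r/R)^2)

    r        : radial position [m], 0 <= r <= r_pellet
    r_pellet : pellet radius [m]
    t_surface, t_center : surface and centerline temperatures [K]
    """
    return t_surface + (t_center - t_surface) * (1.0 - (r / r_pellet) ** 2)
```

Subgroup self-shielding with a nonuniform profile amounts to evaluating temperature-dependent cross sections against a shape like this in each radial ring, rather than at a single effective temperature.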
International Nuclear Information System (INIS)
Mitake, Susumu
2003-01-01
Validation of the continuous-energy Monte Carlo criticality-safety analysis system, comprising the MVP code and neutron cross sections based on JENDL-3.2, was examined using benchmarks evaluated in the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'. Eight experiments (116 configurations) for the plutonium solution and plutonium-uranium mixture systems performed at Valduc, Battelle Pacific Northwest Laboratories, and other facilities were selected and used in the studies. The averaged multiplication factors calculated with MVP and MCNP-4B using the same neutron cross-section libraries based on JENDL-3.2 were in good agreement. Based on methods provided in the Japanese nuclear criticality-safety handbook, the estimated criticality lower-limit multiplication factors to be used as a subcriticality criterion for the criticality-safety evaluation of nuclear facilities were obtained. The analysis proved the applicability of the MVP code to the criticality-safety analysis of nuclear fuel facilities, particularly to the analysis of systems fueled with plutonium and in homogeneous and thermal-energy conditions
Hextran-Smabre calculation of the VVER-1000 coolant transient benchmark
Energy Technology Data Exchange (ETDEWEB)
Elina Syrjaelahti; Anitta Haemaelaeinen [VTT Processes, P.O.Box 1604, FIN-02044 VTT (Finland)
2005-07-01
The VVER-1000 Coolant Transient benchmark is intended for validation of couplings of thermal hydraulic codes and three-dimensional neutron kinetics core models. It concerns the switching on of a main coolant pump while the other three main coolant pumps are in operation. The problem is based on an experiment performed at the Kozloduy NPP in Bulgaria. In addition to the real plant transient, two extreme scenarios concerning control rod ejection after switching on a main coolant pump were calculated. At VTT, the three-dimensional advanced nodal code HEXTRAN is used for the core kinetics and dynamics, and the thermohydraulic system code SMABRE as the thermal hydraulic model for the primary and secondary loops. The parallel-coupled HEXTRAN-SMABRE code has been in production use since the early 1990s and has been used extensively for analyses of VVER NPPs. The SMABRE input model is based on the standard VVER-1000 input used at VTT; the latest plant-specific modifications to the input model were made in EU projects. The whole-core calculation is performed with HEXTRAN, and the core model is also based on earlier VVER-1000 models. Nuclear data for the calculation were specified in the benchmark. The paper outlines the input models used for both codes. Calculated results are presented both for the coupled core system with inlet and outlet boundary conditions and for the whole-plant model. Sensitivity studies have been performed for selected parameters. (authors)
Neutron transport calculations of some fast critical assemblies
Energy Technology Data Exchange (ETDEWEB)
Martinez-Val Penalosa, J. A.
1976-07-01
To analyse the influence of the input variables of the transport codes on the neutronic results (eigenvalues, generation times, . . . ), four benchmark calculations have been performed. Sensitivity analyses have been applied to express these dependences in a useful way, and also to gain the experience needed to carry out calculations that achieve the required accuracy within practical computing times. (Author) 29 refs.
Energy Technology Data Exchange (ETDEWEB)
Primm III, RT
2002-05-29
This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.
Criticality Analysis of TCA Critical Lattices with MCNP-4C Monte Carlo Calculation
International Nuclear Information System (INIS)
Zuhair
2002-01-01
The use of uranium-plutonium mixed-oxide (MOX) fuel in electricity-generating light water reactors (PWR, BWR) is being planned in Japan. Therefore, accuracy evaluations of neutronic analysis codes for MOX cores have been carried out by many scientists and reactor physicists. Benchmark evaluations for the TCA were done using various calculation methods. The Monte Carlo method has become the most reliable way to predict the criticality of various reactor types. In this analysis, the MCNP-4C code was chosen because of its various strengths. Overall, the MCNP-4C calculation for the TCA core with 38 MOX critical lattice configurations gave results of high accuracy. The JENDL-3.2 library showed results significantly closer to those of ENDF/B-V. The k eff values calculated with the ENDF/B-VI library were underestimated. The ENDF/B-V library gave the best estimation. It can be concluded that an MCNP-4C calculation, especially with the ENDF/B-V and JENDL-3.2 libraries, is the best choice for the reactor core design of MOX-fuelled NPPs
Calculations to an IAHR-benchmark test using the CFD-code CFX-4
Energy Technology Data Exchange (ETDEWEB)
Krepper, E.
1998-10-01
The calculation concerns a test, which was defined as a benchmark for 3-D codes by the working group of advanced nuclear reactor types of IAHR (International Association of Hydraulic Research). The test is well documented and detailed measuring results are available. The test aims at the investigation of phenomena, which are important for heat removal at natural circulation conditions in a nuclear reactor. The task for the calculation was the modelling of the forced flow field of a single phase incompressible fluid with consideration of heat transfer and influence of gravity. These phenomena are typical also for other industrial processes. The importance of correct modelling of these phenomena also for other applications is a motivation for performing these calculations. (orig.)
WWER-1000 Burnup Credit Benchmark (CB5)
International Nuclear Information System (INIS)
Manolova, M.A.
2002-01-01
In the paper the specification of the first phase (depletion calculations) of the WWER-1000 Burnup Credit Benchmark is given. The second phase, criticality calculations for the WWER-1000 fuel pin cell, will be given after evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field (Author)
RB reactor as the U-D2O benchmark criticality system
International Nuclear Information System (INIS)
Pesic, M.
1998-01-01
From a rich and valuable database for 580 different reactor cores formed up to now in the RB nuclear reactor, a selected and well recorded set has been carefully chosen and preliminarily proposed as a new uranium-heavy water benchmark criticality system for validation of reactor design computer codes and data libraries. The first results of validation of the MCNP code and the adjoining neutron cross-section libraries are presented in this paper. (author)
Energy Technology Data Exchange (ETDEWEB)
Renner, Franziska [Physikalisch-Technische Bundesanstalt (PTB), Braunschweig (Germany)
2016-11-01
Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide.
Benchmark calculation of APOLLO2 and SLAROM-UF in a fast reactor lattice
International Nuclear Information System (INIS)
Hazama, Taira
2009-10-01
A lattice cell benchmark calculation is carried out for APOLLO2 and SLAROM-UF on the infinite lattice of a simple pin cell featuring a fast reactor. The accuracy in k-infinity and reaction rates is investigated in their reference and standard level calculations. In the 1st reference level calculation, APOLLO2 and SLAROM-UF agree with the reference value of k-infinity obtained by a continuous energy Monte Carlo calculation within 50 pcm. However, larger errors are observed in a particular reaction rate and energy range. A major problem common to both codes is in the cross section library of 239 Pu in the unresolved energy range. In the 2nd reference level calculation, which is based on the ECCO 1968 group structure, both results of k-infinity agree with the reference value within 100 pcm. The resonance overlap effect is observed by several percents in cross sections of heavy nuclides. In the standard level calculation based on the APOLLO2 library creation methodology, a discrepancy appears by more than 300 pcm. A restriction is revealed in APOLLO2. Its standard cross section library does not have a sufficiently small background cross section to evaluate the self-shielding effect of 56 Fe cross sections. The restriction can be removed by introducing the mixture self-shielding treatment recently introduced to APOLLO2. SLAROM-UF original standard level calculation based on the JFS-3 library creation methodology is the best among the standard level calculations. Improvement from the SLAROM-UF standard level calculation is achieved mainly by use of a proper weight function for light or intermediate nuclides. (author)
Benchmark calculation of APOLLO-2 and SLAROM-UF in a fast reactor lattice
International Nuclear Information System (INIS)
Hazama, T.
2009-07-01
A lattice cell benchmark calculation is carried out for APOLLO2 and SLAROM-UF on the infinite lattice of a simple pin cell featuring a fast reactor. The accuracy in k-infinity and reaction rates is investigated in their reference and standard level calculations. In the 1st reference level calculation, APOLLO2 and SLAROM-UF agree with the reference value of k-infinity obtained by a continuous energy Monte Carlo calculation within 50 pcm. However, larger errors are observed in a particular reaction rate and energy range. The major problem common to both codes is in the cross section library of 239 Pu in the unresolved energy range. In the 2nd reference level calculation, which is based on the ECCO 1968 group structure, both results of k-infinity agree with the reference value within 100 pcm. The resonance overlap effect is observed by several percents in cross sections of heavy nuclides. In the standard level calculation based on the APOLLO2 library creation methodology, a discrepancy appears by more than 300 pcm. A restriction is revealed in APOLLO2. Its standard cross section library does not have a sufficiently small background cross section to evaluate the self-shielding effect on 56 Fe cross sections. The restriction can be removed by introducing the mixture self-shielding treatment recently introduced to APOLLO2. The SLAROM-UF original standard level calculation based on the JFS-3 library creation methodology is the best among the standard level calculations. Improvement over the SLAROM-UF standard level calculation is achieved mainly by use of a proper weight function for light or intermediate nuclides. (author)
Comparison between HELIOS calculations and a PWR cell benchmark for actinides transmutation
Energy Technology Data Exchange (ETDEWEB)
Guzman, Rafael [Facultad de Ingenieria, Universidad Nacional Autonoma de Mexico, Paseo Cuauhnahuac 8532, 62550 Jiutepec, Mor. (Mexico); Francois, Juan-Luis [Facultad de Ingenieria, Universidad Nacional Autonoma de Mexico, Paseo Cuauhnahuac 8532, 62550 Jiutepec, Mor. (Mexico)]. E-mail: jlfl@fi-b.unam.mx
2007-01-15
This paper shows a comparison between the results obtained with the HELIOS code and other similar codes used in the international community, with respect to the transmutation of actinides. To do this, the international benchmark 'Calculations of Different Transmutation Concepts' of the Nuclear Energy Agency is analyzed. In this benchmark, two types of cells are analyzed: a small cell corresponding to a standard pressurized water reactor (PWR), and a wide cell corresponding to a highly moderated PWR. Two discharge burnups are considered: 33 GWd/tHM and 50 GWd/tHM. The following results are analyzed: the neutron multiplication factor as a function of burnup, the atomic densities of the principal actinide isotopes, the radioactivity of selected actinides at reactor shutdown and for cooling times from 7 to 50,000 years, the void reactivity and the Doppler reactivity. The results are compared with those of the following codes: KAPROS/KARBUS (FZK, Germany), SRAC95 (JAERI, Japan), TRIFON (ITTEP, Russian Federation) and WIMS (IPPE, Russian Federation). For the neutron multiplication factor, the results obtained with HELIOS show a difference of around 1% {delta}k/k. For the isotopic concentrations of {sup 241}Pu, {sup 242}Pu, and {sup 242m}Am, the results of all the institutions show a difference that increases with burnup; for {sup 237}Np, the FZK results diverge from the others as the burnup increases. Regarding the activity, the differences between the results are acceptable, except for {sup 241}Pu. For the Doppler coefficient, the results are acceptable, except for the cells with high moderation. In the case of the void coefficient, the differences between the results increase at higher void fractions, being highest at 95%. In summary, for the PWR benchmark, the results obtained with HELIOS agree reasonably well within the limits of the multiple plutonium recycling established by the NEA working party on plutonium fuels and
Benchmark results for the critical slab and sphere problem in one-speed neutron transport theory
International Nuclear Information System (INIS)
Rawat, Ajay; Mohankumar, N.
2011-01-01
Research highlights: → The critical slab and sphere problem in neutron transport under Case eigenfunction formalism is considered. → These equations reduce to integral expressions involving X functions. → Gauss quadrature is not ideal but DE quadrature is well-suited. → Several fold decrease in computational effort with improved accuracy is realisable. - Abstract: In this paper benchmark numerical results for the one-speed criticality problem with isotropic scattering for the slab and sphere are reported. The Fredholm integral equations of the second kind based on the Case eigenfunction formalism are numerically solved by Neumann iterations with the Double Exponential quadrature.
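The Double Exponential (tanh-sinh) quadrature named in the highlights handles endpoint singularities of exactly the kind that arise in these X-function integrals. The fragment below is an illustrative Python reconstruction, not the authors' code; the node count, step size and test integrand are assumptions:

```python
import math

def tanh_sinh_quadrature(f, n=40, h=0.1):
    """Approximate the integral of f over (-1, 1) with the
    double-exponential (tanh-sinh) rule: x = tanh((pi/2) sinh(t))."""
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        x = math.tanh(0.5 * math.pi * math.sinh(t))
        if abs(x) >= 1.0:
            continue  # node has rounded onto the endpoint; weight is negligible
        # Weight = dx/dt, which decays double-exponentially in |t|
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(0.5 * math.pi * math.sinh(t)) ** 2
        total += w * f(x)
    return h * total

# Example: integral of 1/sqrt(1 - x^2) over (-1, 1) equals pi,
# despite integrable singularities at both endpoints.
approx = tanh_sinh_quadrature(lambda x: 1.0 / math.sqrt(1.0 - x * x))
```

The double-exponential decay of the weights is what makes the rule robust near the endpoints, where Gauss quadrature (as the highlights note) is not ideal.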
International Nuclear Information System (INIS)
Key, S.W.
1985-01-01
The results of two calculations related to the impact response of spent nuclear fuel shipping casks are compared to the benchmark results reported in a recent study by the Japan Society of Mechanical Engineers Subcommittee on Structural Analysis of Nuclear Shipping Casks. Two idealized impacts are considered. The first calculation utilizes a right circular cylinder of lead subjected to a 9.0 m free fall onto a rigid target, while the second calculation utilizes a stainless steel clad cylinder of lead subjected to the same impact conditions. For the first problem, four calculations from graphical results presented in the original study have been singled out for comparison with HONDO III. The results from DYNA3D, STEALTH, PISCES, and ABAQUS are reproduced. In the second problem, the results from four separate computer programs in the original study, ABAQUS, ANSYS, MARC, and PISCES, are used and compared with HONDO III. The current version of HONDO III contains a fully automated implementation of the explicit-explicit partitioning procedure for the central difference method time integration which results in a reduction of computational effort by a factor in excess of 5. The results reported here further support the conclusion of the original study that the explicit time integration schemes with automated time incrementation are effective and efficient techniques for computing the transient dynamic response of nuclear fuel shipping casks subject to impact loading. (orig.)
Critical Assessment of Metagenome Interpretation-a benchmark of metagenomics software
DEFF Research Database (Denmark)
Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter
2017-01-01
Methods for assembly, taxonomic profiling and binning are key to interpreting metagenome data, but a lack of consensus about benchmarking complicates performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on highly complex and realistic data sets, generated from ∼700 newly sequenced microorganisms and ∼600 novel viruses and plasmids and representing common experimental setups. Assembly and genome binning programs performed well for species represented by individual genomes but were substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below family level. Parameter settings markedly affected performance, underscoring their importance for program reproducibility. The CAMI...
Energy Technology Data Exchange (ETDEWEB)
Smith, R.H.; Keener, H.J.; DeClue, J.F.; Krass, A.W.
2001-04-01
This report considers methods for determining an upper safety limit and for incorporating uncertainty and margin into that limit, provides comparisons, and recommends a preferred method for determining the Upper Safety Limit (USL). A USL is developed for CSAS25 from SCALE 4.4a. The USL is applicable to the CSAS25 control module of the SCALE 4.4a computer code system for use in evaluating the nuclear criticality safety of enriched uranium systems. The benchmark calculation results used for this report are documented in Y/DD-896. The statistical evaluation is documented in CCG-380. The 27-group ENDF/B-IV, 44-group ENDF/B-V, and 238-group ENDF/B-V cross-section libraries were used. Numerical methods for applying margins are described, but the determination of appropriate correlating parameters and values for additional margin, applicable to a particular analysis, must be made as part of a process analysis. As such, this document does not specify final upper subcritical limits as has been done in the past. No correlation between the calculation results and the neutron energy causing fission was found for the critical experiment results. Analysts using these results are responsible for exercising sound engineering judgment and strong technical arguments to develop ''a margin in k{sub eff} or other correlating parameter that is sufficiently large to ensure that conditions (calculated by this method to be subcritical by this margin) will actually be subcritical.'' Documentation of the area of applicability and determination and justification of the appropriate margin in the analyst's evaluation, in conjunction with this report, will constitute the complete Validation Report in accordance with ANSI/ANS-8.1-1998, Section 4.3.6(4).
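The statistical construction of a USL described above can be illustrated with a deliberately simplified single-parameter recipe: take the bias from the mean of calculated benchmark k-eff values, take a statistical allowance from their spread, and subtract an administrative margin. This is an assumption-laden sketch (the tolerance factor, margin and data are all illustrative), not the statistical treatment documented in CCG-380:

```python
import statistics

def upper_safety_limit(keff_benchmarks, admin_margin=0.05, tolerance_factor=2.0):
    """Simplified USL: critical benchmarks have a true keff of 1.0, so the
    mean calculated value gives the code bias; a one-sided band of
    tolerance_factor * s covers statistical uncertainty; an administrative
    margin is subtracted on top of both."""
    kbar = statistics.mean(keff_benchmarks)
    s = statistics.stdev(keff_benchmarks)
    bias = kbar - 1.0
    # A positive bias (overprediction) is conservatively not credited
    credited_bias = min(bias, 0.0)
    return 1.0 + credited_bias - tolerance_factor * s - admin_margin

# Hypothetical benchmark results clustered near criticality
ks = [0.998, 1.002, 0.995, 1.001, 0.999, 0.997, 1.003, 0.996]
usl = upper_safety_limit(ks)
```

A real validation would additionally test for trends of keff against correlating parameters (enrichment, spectrum, etc.), as the report emphasizes.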
Benchmark calculations on fluid coupled co-axial cylinders typical to LMFBR structures
International Nuclear Information System (INIS)
Dostal, M.; Descleve, P.; Gantenbein, F.; Lazzeri, L.
1983-01-01
This paper describes a joint effort promoted and funded by the Commission of the European Community under the umbrella of the Fast Reactor Co-ordinating Committee and Working Group on Codes and Standards No. 2, with the purpose of testing several programs currently used for dynamic analysis of fluid-coupled structures. The scope of the benchmark calculations is limited to beam-type modes of vibration, small displacements of the structures and small pressure variations such as are encountered in seismic or flow-induced vibration problems. Five computer codes were used: ANSYS, AQUAMODE, NOVAX, MIAS/SAP4 and ZERO, where each program employs a different structural-fluid formulation. The calculations were performed for four different geometrical configurations of concentric cylinders in which the effects of gap size, water level and support conditions were considered. The analytical work was accompanied by experiments carried out on a purpose-built rig consisting of two concentric cylinders independently supported on flexible cantilevers. The geometrical simplicity of the rig and the attention paid in its design to eliminating structural coupling between the cylinders led to unambiguous test results. Only the beam natural frequencies, in phase and out of phase, were measured. The comparison of the different analytical methods and the experimental results is presented and discussed. The degree of agreement varied between very good and unacceptable. (orig./GL)
Benchmark calculations of present-day instantaneous radiative forcing in clear, aerosol-free skies
Pincus, Robert; Evans, K. Franklin; Manners, James; Paynter, David; Mlawer, Eli
2017-04-01
At the root of the effective radiative forcing driving climate change is the change in radiative flux at the top of the atmosphere due to changes in atmospheric composition - the so-called "instantaneous radiative forcing" (IRF). Estimates of global mean present-day instantaneous radiative forcing under cloud- and aerosol-free conditions show surprising diversity given the level of understanding of spectroscopy and radiative transfer. Much of this diversity, especially in estimates from climate models, is artificial, reflecting only differing errors and approximations in radiative transfer parameterizations. Calculations with more accurate line-by-line models have been considered far too expensive to be practical on a global scale. We report here on benchmark calculations of present-day instantaneous radiative forcing by greenhouse gases in the absence of clouds and aerosols made with very high spectral-resolution models. The problem is made computationally practical by defining a set of roughly 100 atmospheric profiles and associated weights obtained from present-day atmospheric conditions as represented by reanalysis via simulated annealing to reproduce global- and regional-mean fluxes with sampling errors of less than 1% (verified by cross-validation with independent radiative transfer models). Cloud- and aerosol-free IRF is then computed from these profiles using present-day and pre-industrial greenhouse gas concentrations. We report on results from two line-by-line and one high-resolution k-distribution model.
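The profile-and-weights reduction described above replaces a global integral over atmospheric columns by a short weighted sum. The schematic Python fragment below illustrates only the idea, with synthetic numbers and a naive uniform subsample; in the paper the profiles and weights are instead chosen by simulated annealing to reproduce global- and regional-mean fluxes:

```python
import random

random.seed(0)
# Stand-in for the full model: top-of-atmosphere fluxes (W m^-2) for
# a large population of atmospheric columns (values are synthetic).
full = [240.0 + random.uniform(-40.0, 40.0) for _ in range(100000)]
true_mean = sum(full) / len(full)

# Reduced model: ~100 representative profiles with associated weights.
# Here we subsample uniformly and weight equally; an optimized
# profile/weight set would track the global mean far more tightly.
profiles = full[::1000]                       # 100 columns
weights = [1.0 / len(profiles)] * len(profiles)
approx_mean = sum(w * f for w, f in zip(weights, profiles))
```

Once such a set is fixed, an expensive line-by-line code only needs to be run on ~100 columns per gas-concentration scenario, which is what makes the benchmark computationally practical.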
Benchmark Comparison for a Multi-Processing Ion Mobility Calculator in the Free Molecular Regime
Shrivastav, Vaibhav; Nahin, Minal; Hogan, Christopher J.; Larriba-Andaluz, Carlos
2017-08-01
A benchmark comparison between two ion mobility and collision cross-section (CCS) calculators, MOBCAL and IMoS, is presented here as a standard to test the efficiency and performance of both programs. Utilizing 47 organic ions, results are in excellent agreement between IMoS and MOBCAL in He and N2, when both programs use identical input parameters. Due to a more efficiently written algorithm and to its parallelization, IMoS is able to calculate the same CCS (within 1%) with a speed around two orders of magnitude faster than its MOBCAL counterpart when seven cores are used. Due to the high computational cost of MOBCAL in N2, reaching tens of thousands of seconds even for small ions, the comparison between IMoS and MOBCAL is stopped at 70 atoms. Large biomolecules (>10000 atoms) remain computationally expensive when IMoS is used in N2 (even when employing 16 cores). Approximations such as diffuse trajectory methods (DHSS, TDHSS) with and without partial charges and projected area approximation corrections can be used to reduce the total computational time by several folds without hurting the accuracy of the solution. These latter methods can in principle be used with coarse-grained model structures and should yield acceptable CCS results.
Calculation of Upper Subcritical Limits for Nuclear Criticality in a Repository
International Nuclear Information System (INIS)
J.W. Pegram
1998-01-01
The purpose of this document is to present the methodology to be used for development of the Subcritical Limit (SL) for post-closure conditions for the Yucca Mountain repository. The SL is a value based on a set of benchmark criticality multiplication factor (k eff ) results that are outputs of the MCNP calculation method. This SL accounts for calculational biases and associated uncertainties resulting from the use of MCNP as the method of assessing k eff . The context for an SL estimate includes the range of applicability (based on the set of MCNP results) and the type of SL required for the application at hand. This document includes illustrative calculations for each of three approaches. The data sets used for the example calculations are identified in Section 5.1. These represent three waste categories, and SLs for each of these sets of experiments are computed in this document. Future MCNP data sets will be analyzed using the methods discussed here. The treatment of the biases evaluated on sets of k eff results via MCNP is statistical in nature. This document does not address additional non-statistical contributions to the bias margin, acknowledging that regulatory requirements may impose additional administrative penalties. Potentially, there are other biases or margins that should be accounted for when assessing criticality (k eff ). Only aspects of the bias as determined using the stated assumptions and benchmark critical data sets are included in the methods and sample calculations in this document. The set of benchmark experiments used in the validation of the computational system should be representative of the composition, configuration, and nuclear characteristics of the application at hand. In this work, a range of critical experiments is the basis for establishing the SL for three categories of waste types that will be in the repository. The ultimate purpose of this document is to present methods that will effectively characterize the MCNP
Criticality safety calculations of storage canisters
International Nuclear Information System (INIS)
Agrenius, L.
2002-04-01
In the planned Swedish repository for deep disposal of spent nuclear fuel the fuel assemblies will be stored in storage canisters made of cast iron and copper. To assure safe storage of the fuel the requirement is that the normal criticality safety criteria have to be met. The effective neutron multiplication factor must not exceed 0.95 in the most reactive conditions including different kinds of uncertainties. In this report it is shown that the criteria could be met if credit for the reactivity decrease due to the burn up of the fuel is taken into account. The criticality safety criteria are based on the US NRC regulatory requirements for transportation and storage of spent fuel
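The 0.95 acceptance criterion stated above reduces to a simple inequality once the code bias and the uncertainties are assigned. A hedged Python sketch follows; the function name, the linear (conservative) addition of terms and every number are illustrative assumptions, not the report's method:

```python
def meets_criticality_criterion(k_calc, sigma_calc, bias=0.0,
                                bias_uncertainty=0.0, limit=0.95):
    """Check the keff <= 0.95 storage criterion: the calculated value,
    corrected for code bias and padded with uncertainties (added here
    linearly, a conservative choice), must not exceed the limit."""
    k_effective_max = k_calc + bias + bias_uncertainty + 2.0 * sigma_calc
    return k_effective_max <= limit

# Hypothetical burnup-credit case: spent-fuel isotopics lower reactivity
ok_spent = meets_criticality_criterion(0.91, sigma_calc=0.002,
                                       bias=0.005, bias_uncertainty=0.005)
# Hypothetical fresh-fuel assumption for the same canister geometry
ok_fresh = meets_criticality_criterion(0.97, sigma_calc=0.002,
                                       bias=0.005, bias_uncertainty=0.005)
```

The two hypothetical cases mirror the abstract's point: the same canister can fail the criterion under the fresh-fuel assumption yet satisfy it when burnup credit is taken.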
Comparisons of the MCNP criticality benchmark suite with ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0
International Nuclear Information System (INIS)
Kim, Do Heon; Gil, Choong-Sup; Kim, Jung-Do; Chang, Jonghwa
2003-01-01
A comparative study has been performed with the latest evaluated nuclear data libraries ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0. The study has been conducted through the benchmark calculations for 91 criticality problems with the libraries processed for MCNP4C. The calculation results have been compared with those of the ENDF60 library. The self-shielding effects of the unresolved-resonance (UR) probability tables have also been estimated for each library. The χ 2 differences between the MCNP results and experimental data were calculated for the libraries. (author)
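A χ² figure of merit of the kind quoted above can be formed by weighting each calculated-minus-experimental deviation by its experimental uncertainty. A minimal sketch (all numbers hypothetical, and the paper's exact statistic may differ):

```python
def chi_squared(k_calc, k_exp, sigma_exp):
    """Sum of squared, uncertainty-weighted deviations between
    calculated and experimental keff values."""
    if not (len(k_calc) == len(k_exp) == len(sigma_exp)):
        raise ValueError("inputs must have equal length")
    return sum(((c - e) / s) ** 2
               for c, e, s in zip(k_calc, k_exp, sigma_exp))

# Hypothetical three-benchmark comparison for one cross-section library
calc = [1.0012, 0.9987, 1.0005]
exp = [1.0000, 1.0000, 1.0000]
sig = [0.0010, 0.0010, 0.0012]
chi2 = chi_squared(calc, exp, sig)
```

Comparing such χ² values across libraries (here ENDF/B-VI.8, JENDL-3.3, JEFF-3.0 versus ENDF60) gives a single scalar measure of how well each one reproduces the criticality suite.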
Validation of the EIR LWR calculation methods for criticality assessment of storage pools
International Nuclear Information System (INIS)
Grimm, P.; Paratte, J.M.
1986-11-01
The EIR code system for the calculation of light water reactors is presented and the methods used are briefly described. The application of the system to various types of critical experiments and benchmark problems proves its good accuracy, even for heterogeneous configurations containing strong neutron absorbers such as Boral. Since the multiplication factor k eff is normally somewhat overpredicted and the spread of the results is small, this code system is validated for the calculation of storage pools, taking into account a safety margin of 1.5% on k eff . (author)
Parametric Criticality Safety Calculations for Arrays of TRU Waste Containers
Energy Technology Data Exchange (ETDEWEB)
Gough, Sean T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-26
The Nuclear Criticality Safety Division (NCSD) has performed criticality safety calculations for finite and infinite arrays of transuranic (TRU) waste containers. The results of these analyses may be applied in any technical area onsite (e.g., TA-54, TA-55, etc.), as long as the assumptions herein are met. These calculations are designed to update the existing reference calculations for waste arrays documented in Reference 1, in order to meet current guidance on calculational methodology.
Attila calculations for the 3-D C5G7 benchmark extension
International Nuclear Information System (INIS)
Wareing, T.A.; McGhee, J.M.; Barnett, D.A.; Failla, G.A.
2005-01-01
The performance of the Attila radiation transport software was evaluated for the 3-D C5G7 MOX benchmark extension, a follow-on study to the MOX benchmark developed by the 'OECD/NEA Expert Group on 3-D Radiation Transport Benchmarks'. These benchmarks were designed to test the ability of modern deterministic transport methods to model reactor problems without spatial homogenization. Attila is a general purpose radiation transport software package with an integrated graphical user interface (GUI) for analysis, set-up and postprocessing. Attila provides solutions to the discrete-ordinates form of the linear Boltzmann transport equation on a fully unstructured, tetrahedral mesh using linear discontinuous finite-element spatial differencing in conjunction with diffusion synthetic acceleration of inner iterations. The results obtained indicate that Attila can accurately solve the benchmark problem without spatial homogenization. (authors)
Quantum computing applied to calculations of molecular energies: CH2 benchmark.
Veis, Libor; Pittner, Jiří
2010-11-21
Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that they, if available, would be able to perform the full configuration interaction (FCI) energy calculations with a polynomial scaling. This is in contrast to conventional computers where FCI scales exponentially. We have developed a code for simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest lying electronic states of CH(2) molecule. This molecule was chosen as a benchmark, since its two lowest lying (1)A(1) states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.
International Nuclear Information System (INIS)
Chen Fubing; Dong Yujie; Zheng Yanhua; Shi Lei; Zhang Zuoyi
2009-01-01
Within the framework of a Coordinated Research Project on Evaluation of High Temperature Gas-Cooled Reactor Performance (CRP-5) initiated by the International Atomic Energy Agency (IAEA), the calculation of the steady-state temperature distribution of the 10 MW High Temperature Gas-Cooled Reactor-Test Module (HTR-10) under its initial full power experimental operation has been defined as one of the benchmark problems. This paper gives the investigation results obtained by the different countries participating in this benchmark problem. The validation work on the THERMIX code used by the Institute of Nuclear and New Energy Technology (INET) is also presented. For the benchmark items defined in this CRP, the various calculation results correspond well with each other and basically agree with the experimental results. Discrepancies among the various code results are preliminarily attributed to the different methods, models, material properties, and so on used in the computations. Temperatures calculated by THERMIX for the measuring points in the reactor internals agree well with the experimental values. The maximum fuel center temperatures calculated by the participants are much lower than the limit of 1,230 deg C. According to the code-to-code and code-to-experiment comparisons, THERMIX is considered to reproduce relatively satisfactory results for the CRP-5 benchmark problem. (author)
Validation of KENO V.a for criticality safety calculations of low-enriched uranium-235 systems
International Nuclear Information System (INIS)
McCamis, R.H.
1991-02-01
The criticality safety analysis program KENO V.a, together with a 27-energy-group ENDF/B-IV criticality safety cross-section library, has been validated by comparison of calculations with the experimental results from critical benchmarks dealing with low-enriched (≤ 5 wt%) 235 U systems, obtained both from the literature and from recent AECL Research experiments with the SLOWPOKE Demonstration Reactor. The combination of the code and this data library is shown to be very suitable for criticality safety analyses of low-enriched 235 U systems, with mean values of the calculated reactivities being within 1% of the experimental values. (6 figs., 3 tabs., 37 refs.)
Quantum mechanical cluster calculations of critical scintillation processes
Energy Technology Data Exchange (ETDEWEB)
Derenzo, Stephen E.; Klintenberg, Mattias K.; Weber, Marvin J.
2000-02-22
This paper describes the use of commercial quantum chemistry codes to simulate several critical scintillation processes. The crystal is modeled as a cluster of typically 50 atoms embedded in an array of typically 5,000 point charges designed to reproduce the electrostatic field of the infinite crystal. The Schrödinger equation is solved for the ground, ionized, and excited states of the system to determine the energy and electron wavefunction. Computational methods for the following critical processes are described: (1) the formation and diffusion of relaxed holes, (2) the formation of excitons, (3) the trapping of electrons and holes by activator atoms, (4) the excitation of activator atoms, and (5) thermal quenching. Examples include hole diffusion in CsI, the exciton in CsI, the excited state of CsI:Tl, the energy barrier for the diffusion of relaxed holes in CaF2 and PbF2, and prompt hole trapping by activator atoms in CaF2:Eu and CdS:Te leading to an ultra-fast (<50 ps) scintillation risetime.
Energy Technology Data Exchange (ETDEWEB)
Briggs, J.B.; Bess, J. [Idaho National Laboratory (INL), Idaho Falls, ID (United States)]; Gulliford, J. [Organization for Economic Cooperation and Development (OECD), Nuclear Energy Agency, Paris (France)]
2011-07-01
The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) are sources of evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, and safety analysis licensing activities. The IRPhEP is patterned after its predecessor, the ICSBEP, but focuses on other integral measurements such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions and other miscellaneous types of measurements in addition to the critical configuration. Both projects will be discussed.
RELAP5/MOD2 benchmarking study: Critical heat flux under low-flow conditions
International Nuclear Information System (INIS)
Ruggles, E.; Williams, P.T.
1990-01-01
Experimental studies by Mishima and Ishii performed at Argonne National Laboratory, and subsequent experimental studies by Mishima and Nishihara, have investigated the critical heat flux (CHF) for low-pressure, low-mass-flux situations where low-quality burnout may occur. These flow situations are relevant to long-term decay heat removal after a loss of forced flow. The transition from burnout at high quality to burnout at low quality causes very low burnout heat flux values. Mishima and Ishii postulated a model for the low-quality burnout based on the flow regime transition from churn-turbulent to annular flow. This model was validated by both flow visualization and burnout measurements. Griffith et al. also studied CHF in low-mass-flux, low-pressure situations and correlated data for upflows, counter-current flows, and downflows with the local fluid conditions. A RELAP5/MOD2 CHF benchmarking study was carried out to investigate the performance of the code for low-flow conditions. Data from the experimental study by Mishima and Ishii were the basis for the benchmark comparisons.
A thermo mechanical benchmark calculation of a hexagonal can in the BTI accident with INCA code
International Nuclear Information System (INIS)
Zucchini, A.
1988-01-01
The thermomechanical behaviour of a hexagonal can in a benchmark problem (simulating the conditions of a BTI accident in a fuel assembly) is examined by means of the INCA code, and the results are systematically compared with those of ADINA.
International Nuclear Information System (INIS)
Suyama, K.; Uchida, Y.; Kashima, T.; Ito, T.; Miyaji, T.
2016-01-01
Criticality control of damaged nuclear fuel is one of the key issues in the decommissioning of the Fukushima Daiichi Nuclear Power Station after the accident. The average isotopic composition of spent nuclear fuel as a function of burn-up is required in order to evaluate criticality parameters of the mixture of damaged nuclear fuel with other materials. The NEA Expert Group on Burn-up Credit Criticality (EGBUC) has organised several international benchmarks to assess the accuracy of burn-up calculation methodologies. For BWR fuel, the Phase III-B benchmark, published in 2002, was a remarkable landmark that provided general information on the burn-up properties of BWR spent fuel based on the 8x8 type fuel assembly. Since the publication of the Phase III-B benchmark, all major nuclear data libraries have been revised: in Japan from JENDL-3.2 to JENDL-4, in Europe from JEF-2.2 to JEFF-3.1 and in the US from ENDF/B-VI to ENDF/B-VII.1. Burn-up calculation methodologies have been improved by adopting continuous-energy Monte Carlo codes and modern neutronics calculation methods. Considering the importance of the criticality control of damaged fuel in the Fukushima Daiichi Nuclear Power Station accident, a new international burn-up calculation benchmark for the 9x9 STEP-3 BWR fuel assemblies was organised to carry out an inter-comparison of the averaged isotopic compositions in the interest of the burn-up credit criticality safety community. Benchmark specifications were proposed and approved at the EGBUC meeting in September 2012 and distributed in October 2012. The deadline for submitting results was set at the end of February 2013. The basic model for the benchmark problem is an infinite two-dimensional array of BWR fuel assemblies consisting of a 9x9 fuel rod array with a water channel in the centre. The initial uranium enrichments of the fuel rods without gadolinium are 4.9, 4.4, 3.9, 3.4 and 2.1 wt%, and 3.4 wt% for the rods containing gadolinium. The burn-up conditions are
Criticality benchmark results for the ENDF60 library with MCNP trademark
International Nuclear Information System (INIS)
Keen, N.D.; Frankle, S.C.; MacFarlane, R.E.
1995-01-01
The continuous-energy neutron data library ENDF60, for use with the Monte Carlo N-Particle radiation transport code MCNP4A, was released in the fall of 1994. The ENDF60 library comprises 124 nuclide data files based on the ENDF/B-VI (B-VI) evaluations through Release 2. Fifty-two percent of these B-VI evaluations are translations from ENDF/B-V (B-V). The remaining forty-eight percent are new evaluations, some of which have changed significantly. Among these changes are greatly increased use of isotopic evaluations, more extensive resonance-parameter evaluations, and energy-angle correlated distributions for secondary particles. In particular, the upper energy limit of the resolved resonance region of 235U, 238U and 239Pu has been extended from 0.082, 4.0, and 0.301 keV to 2.25, 10.0, and 2.5 keV, respectively. As regulatory oversight has advanced and performing critical experiments has become more difficult, there has been an increased reliance on computational methods. For the criticality safety community, the performance of the combined transport code and data library is of interest. The purpose of this abstract is to provide benchmarking results to aid users in determining the best data library for their applications.
Energy Technology Data Exchange (ETDEWEB)
Le, T.T.
1991-09-01
This report concerns the verification and validation of GILDA, a static two-dimensional infinite-lattice diffusion theory code. The verification was performed to determine that GILDA applies the correct theory and that all of its subroutines function as required. The validation was performed to determine the accuracy of the code by comparing its results with the integral transport solutions (GLASS) of benchmark problems. Since GLASS uses multigroup integral transport theory, a more accurate method than few-group diffusion theory, using solutions from GLASS as reference solutions to benchmark GILDA is acceptable. The eight benchmark problems used in this process are infinite mixed-lattice problems. The lattice is constructed by repeating an infinite number of identical super-cells (zones). Two types of super-cell have been used for these benchmark problems: one consists of six Mark22 assemblies surrounding one control assembly, and the other consists of three Mark16 fuel assemblies and three Mark31 target assemblies surrounding a control assembly.
International Nuclear Information System (INIS)
Gillan, M. J.; Alfè, D.; Manby, F. R.
2015-01-01
The quantum Monte Carlo (QMC) technique is used to generate accurate energy benchmarks for methane-water clusters containing a single methane monomer and up to 20 water monomers. The benchmarks for each type of cluster are computed for a set of geometries drawn from molecular dynamics simulations. The accuracy of QMC is expected to be comparable with that of coupled-cluster calculations, and this is confirmed by comparisons for the CH4-H2O dimer. The benchmarks are used to assess the accuracy of the second-order Møller-Plesset (MP2) approximation close to the complete basis-set limit. A recently developed embedded many-body technique is shown to give an efficient procedure for computing basis-set converged MP2 energies for the large clusters. It is found that MP2 values for the methane binding energies and the cohesive energies of the water clusters without methane are in close agreement with the QMC benchmarks, but the agreement is aided by partial cancellation between 2-body and beyond-2-body errors of MP2. The embedding approach allows MP2 to be applied without loss of accuracy to the methane hydrate crystal, and it is shown that the resulting methane binding energy and the cohesive energy of the water lattice agree almost exactly with recently reported QMC values.
International Nuclear Information System (INIS)
Smolen, G.R.
1987-01-01
Critical experiments have been conducted with organic-moderated mixed-oxide (MOX) fuel pin assemblies at the Pacific Northwest Laboratory (PNL) Critical Mass Laboratory (CML). These experiments are part of a joint exchange program in criticality data development between the United States Department of Energy (USDOE) and the Power Reactor and Nuclear Fuel Development Corporation (PNC) of Japan. The purpose of these experiments is to benchmark computer codes and cross-section libraries and to assess the reactivity difference between systems moderated by water and those moderated by an organic solution. Past studies have indicated that some organic mixtures may be better moderators than water. This topic is of particular importance to the criticality safety of fuel processing plants, where fissile material is dissolved in organic solutions during the solvent extraction process. In the past, it has been assumed that the codes and libraries benchmarked with water-moderated experiments were adequate when performing design and licensing studies of organic-moderated systems. Calculations presented in this paper indicate that the SCALE code system and the 27-energy-group cross-section library accurately compute k-effectives for organic-moderated MOX fuel-pin assemblies. Furthermore, the reactivity of an organic solution with a 32-vol-% TBP/68-vol-% NPH mixture in a heterogeneous configuration is, for practical purposes, the same as that of water. 5 refs
MCNP Perturbation Capability for Monte Carlo Criticality Calculations
International Nuclear Information System (INIS)
Hendricks, J.S.; Carter, L.L.; McKinney, G.W.
1999-01-01
The differential operator perturbation capability in MCNP4B has been extended to automatically calculate perturbation estimates for the track-length estimate of keff. The additional corrections required in certain cases for MCNP4B are no longer needed. Calculating the effect of small design changes on the criticality of nuclear systems with MCNP is now straightforward.
Critical mass calculations using MCNP: An academic exercise
International Nuclear Information System (INIS)
Kastanya, Doddy
2015-01-01
Highlights: • The critical mass of 239Pu is calculated. • MCNP is utilized to demonstrate that the sphere is the optimal shape to reach criticality. • The critical masses of five polyhedrons and a sphere are compared. - Abstract: In introductory courses on nuclear engineering, the concepts of critical dimension and critical mass are introduced. Students are usually taught that the geometrical shape which needs the smallest amount of fissionable material to reach criticality is a sphere. In this paper, this concept is explored further using the MCNP code. Five different regular polyhedrons (i.e., the Platonic solids) and a sphere have been examined to demonstrate that the sphere is indeed the optimal geometrical shape to minimize the critical mass. For illustration purposes, the fissile isotope used in this study is 239Pu, with a nominal density of 19.8 g/cm3.
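The geometric intuition behind this abstract — that, at fixed volume, the sphere has the smallest surface area and hence the least neutron leakage — can be checked directly. The sketch below is illustrative only (it is plain solid geometry, not the MCNP study itself): it compares the surface area of each Platonic solid and the sphere, all scaled to unit volume.

```python
import math

# Coefficients for regular polyhedra with edge length a: S = s*a^2, V = v*a^3.
SOLIDS = {
    "tetrahedron":  (math.sqrt(3),                          1 / (6 * math.sqrt(2))),
    "cube":         (6.0,                                   1.0),
    "octahedron":   (2 * math.sqrt(3),                      math.sqrt(2) / 3),
    "dodecahedron": (3 * math.sqrt(25 + 10 * math.sqrt(5)), (15 + 7 * math.sqrt(5)) / 4),
    "icosahedron":  (5 * math.sqrt(3),                      5 * (3 + math.sqrt(5)) / 12),
}

def area_at_unit_volume(s: float, v: float) -> float:
    """Surface area of the solid rescaled so that its volume equals 1."""
    a = (1.0 / v) ** (1.0 / 3.0)   # edge length giving V = 1
    return s * a * a

areas = {name: area_at_unit_volume(s, v) for name, (s, v) in SOLIDS.items()}

# Sphere of unit volume: r = (3/(4*pi))^(1/3), S = 4*pi*r^2.
r = (3.0 / (4.0 * math.pi)) ** (1.0 / 3.0)
areas["sphere"] = 4.0 * math.pi * r * r

for name, area in sorted(areas.items(), key=lambda kv: kv[1]):
    print(f"{name:12s} S(V=1) = {area:.4f}")
```

The sphere comes out smallest and the tetrahedron largest, with the ordering tetrahedron > cube > octahedron > dodecahedron > icosahedron > sphere — consistent with the paper's finding that the more sphere-like the solid, the smaller its critical mass.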
Spectral measurements in critical assemblies: MCNP specifications and calculated results
Energy Technology Data Exchange (ETDEWEB)
Stephanie C. Frankle; Judith F. Briesmeister
1999-12-01
Recently, a suite of 86 criticality benchmarks for the Monte Carlo N-Particle (MCNP) transport code was developed, and the results of testing the ENDF/B-V and ENDF/B-VI data (through Release 2) were published. In addition to the standard k{sub eff} measurements, other experimental measurements were performed on a number of these benchmark assemblies. In particular, the Cross Section Evaluation Working Group (CSEWG) specifications contain experimental data for neutron leakage and central-flux measurements, central-fission ratio measurements, and activation ratio measurements. Additionally, there exists another set of fission reaction-rate measurements performed at the National Institute of Standards and Technology (NIST) utilizing a {sup 252}Cf source. This report will describe the leakage and central-flux measurements and show a comparison of experimental data to MCNP simulations performed using the ENDF/B-V and B-VI (Release 2) data libraries. Central-fission and activation reaction-rate measurements will be described, and the comparison of experimental data to MCNP simulations using available data libraries for each reaction of interest will be presented. Finally, the NIST fission reaction-rate measurements will be described. A comparison of MCNP results published previously with the current MCNP simulations will be presented for the NIST measurements, and a comparison of the current MCNP simulations to the experimental measurements will be presented.
Review of criticality safety benchmark data of plutonium solution in ICSBEP handbook
International Nuclear Information System (INIS)
Yamamoto, Toshihiro; Miyoshi, Yoshinori; Okubo, Kiyoshi
2003-01-01
The criticality data for plutonium solutions published in the ICSBEP Handbook were reviewed. Criticality data for lower plutonium concentrations and higher 240Pu contents, which correspond to reprocessing process conditions, are very scarce, and hence criticality data in this area are desired. While the calculated keff's with ENDF/B-V show a dependence on the plutonium concentration, this dependence has been corrected in JENDL-3.3 through the energy distribution of the capture cross section of 239Pu. Based on generalized perturbation theory, the sensitivity coefficients of keff with respect to the fission and capture cross sections in plutonium solutions were obtained. In a plutonium solution with a lower concentration, cross sections at thermal energies below 0.1 eV have significant effects on the criticality. On the other hand, the criticality of higher-concentration plutonium solutions is mostly dominated by cross sections in the energy range above 0.1 eV. Regarding the effect of 240Pu on criticality, the capture cross section of 240Pu around the resonance peak near 1 eV is dominant regardless of the concentration. (author)
Criticality calculation for cluster fuel bundles using grey Dancoff factor
International Nuclear Information System (INIS)
Hyeong Heon Kim; Nam Zin Cho
1999-01-01
This paper applies grey Dancoff factors calculated by the Monte Carlo method to criticality calculations for cluster fuel bundles. Dancoff factors for five symmetrically different pin positions of CANDU37 and CANFLEX fuel bundles in full three-dimensional geometry are calculated by the Monte Carlo method. The concept of an equivalent Dancoff factor is introduced to use the grey Dancoff factor in the resonance calculation based on the equivalence theorem. The equivalent Dancoff factor, which is based on the realistic model, produces an exact fuel collision probability and can be used in the resonance calculation just as the black Dancoff factor is. The infinite multiplication factors based on the black Dancoff factors calculated by the collision probability or Monte Carlo method are overestimated by about 2 mk for the normal condition and 4 mk for the void condition of CANDU37 and CANFLEX fuel bundles, in comparison with those based on the equivalent Dancoff factors.
DEFF Research Database (Denmark)
Murata, Katsuyuki; Budtz-Jørgensen, Esben; Grandjean, Philippe
2002-01-01
Methylmercury; benchmark dose; brainstem auditory evoked potentials; neurotoxicity; human health risk assessment
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
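The slow fission-source convergence this thesis attacks is governed by the dominance ratio — the ratio of the second to the first eigenvalue of the fission operator. A minimal numerical illustration (plain unaccelerated power iteration on a two-region stand-in matrix, not the author's FDSA or fission-matrix methods) shows how the cycle count blows up as the dominance ratio approaches one:

```python
import numpy as np

def iterations_to_converge(dr, tol=1e-8, max_iter=10**6):
    """Power-iterate a 2x2 stand-in fission matrix with k_eff = 1 and dominance ratio dr.

    The iteration starts from a badly guessed source (all neutrons in region 0)
    and returns (k estimate, cycles until the source error falls below tol).
    """
    a, b = (1 + dr) / 2, (1 - dr) / 2
    A = np.array([[a, b], [b, a]])        # eigenvalues: a+b = 1 and a-b = dr
    x = np.array([1.0, 0.0])              # initial fission source guess
    exact = np.array([0.5, 0.5])          # true fundamental-mode source
    for n in range(1, max_iter + 1):
        x = A @ x
        k = x.sum()                       # new source total / old (old normalized to 1)
        x /= x.sum()                      # renormalize the source each cycle
        if np.abs(x - exact).max() < tol:
            return k, n
    return k, max_iter

for dr in (0.5, 0.9, 0.99):
    k, n = iterations_to_converge(dr)
    print(f"dominance ratio {dr:4.2f}: k = {k:.6f}, cycles to converge = {n}")
```

The source error shrinks by a factor of the dominance ratio each cycle, so going from a ratio of 0.5 to 0.99 raises the cycle count from tens to thousands — exactly the regime where acceleration methods like those in this thesis pay off.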
International Nuclear Information System (INIS)
Preumont, A.; Shilab, S.; Cornaggia, L.; Reale, M.; Labbe, P.; Noe, H.
1992-01-01
This benchmark exercise is the continuation of the state-of-the-art review (EUR 11369 EN), which concluded that the random vibration approach could be an effective tool in the seismic analysis of nuclear power plants, with potential advantages over time history and response spectrum techniques. As compared to the latter, the random vibration method provides an accurate treatment of multisupport excitations and non-classical damping, as well as the combination of high-frequency modal components. With respect to the former, the random vibration method offers direct information on statistical variability (probability distribution) and cheaper computations. The disadvantages of the random vibration method are that it is based on stationary results and requires a power spectral density input instead of a response spectrum. A benchmark exercise was carried out to compare the three methods, from the various aspects mentioned above, on one or several simple structures. The following aspects have been covered with the simplest possible models: (i) statistical variability, (ii) multisupport excitation, (iii) non-classical damping. The random vibration method is therefore concluded to be a reliable method of analysis. Its use is recommended, particularly for preliminary design, owing to its computational advantage over multiple time history analysis.
TRIGA FUEL PHASE I AND II CRITICALITY CALCULATION
International Nuclear Information System (INIS)
Angers, L.
1999-01-01
The purpose of this calculation is to characterize the criticality aspect of the codisposal of TRIGA (Training, Research, Isotopes, General Atomic) reactor spent nuclear fuel (SNF) with Savannah River Site (SRS) high-level waste (HLW). The TRIGA SNF is loaded into a Department of Energy (DOE) standardized SNF canister which is centrally positioned inside a five-canister defense SRS HLW waste package (WP). The objective of the calculation is to investigate the criticality issues for the WP containing the five SRS HLW and DOE SNF canisters in various stages of degradation. This calculation will support the analysis that will be performed to demonstrate the viability of the codisposal concept for the Monitored Geologic Repository (MGR)
DEFF Research Database (Denmark)
Mitzel, Jens; Gülzow, Erich; Kabza, Alexander
2016-01-01
This paper is focused on the identification of critical parameters and on the development of reliable methodologies to achieve comparable benchmark results. Possibilities for control sensor positioning and for parameter variation in sensitivity tests are discussed, and recommended options ... in an average cell voltage deviation of 21 mV. Test parameters simulating different stack applications are summarized. The stack demonstrated a comparable average cell voltage of 0.63 V for stationary and portable conditions. For automotive conditions, the voltage increased to 0.69 V, mainly caused by higher...
Criticality calculations with MCNP{sup TM}: A primer
Energy Technology Data Exchange (ETDEWEB)
Mendius, P.W. [ed.; Harmon, C.D. II; Busch, R.D.; Briesmeister, J.F.; Forster, R.A.
1994-08-01
The purpose of this Primer is to assist the nuclear criticality safety analyst to perform computer calculations using the Monte Carlo code MCNP. Because of the closure of many experimental facilities, reliance on computer simulation is increasing. Often the analyst has little experience with specific codes available at his/her facility. This Primer helps the analyst understand and use the MCNP Monte Carlo code for nuclear criticality analyses. It assumes no knowledge of or particular experience with Monte Carlo codes in general or with MCNP in particular. The document begins with a Quickstart chapter that introduces the basic concepts of using MCNP. The following chapters expand on those ideas, presenting a range of problems from simple cylinders to 3-dimensional lattices for calculating keff confidence intervals. Input files and results for all problems are included. The Primer can be used alone, but its best use is in conjunction with the MCNP4A manual. After completing the Primer, a criticality analyst should be capable of performing and understanding a majority of the calculations that will arise in the field of nuclear criticality safety.
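One recurring task the Primer walks through — forming a confidence interval on keff from the active-cycle estimates — reduces to simple statistics on the per-cycle values. A hedged sketch follows; the cycle values are invented for illustration, and MCNP performs this bookkeeping internally (using Student-t intervals for small cycle counts, where this sketch uses the normal approximation):

```python
import math
import statistics

# Hypothetical per-cycle keff estimates from the active cycles of a run.
cycle_keff = [0.998, 1.003, 1.001, 0.999, 1.002, 0.997, 1.000, 1.004]

n = len(cycle_keff)
mean_k = statistics.fmean(cycle_keff)
std_err = statistics.stdev(cycle_keff) / math.sqrt(n)   # standard error of the mean

# 95% confidence interval half-width under the normal approximation.
half_width = 1.96 * std_err
print(f"keff = {mean_k:.5f} +/- {half_width:.5f} (95% CI)")
```

The point a criticality analyst must internalize is that the quoted uncertainty is the standard error of the mean over cycles, not the spread of individual cycle values, so it shrinks with the square root of the number of active cycles.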
EXTERNAL CRITICALITY CALCULATION FOR DOE SNF CODISPOSAL WASTE PACKAGES
International Nuclear Information System (INIS)
Radulescu, H.
2002-01-01
The purpose of this document is to evaluate the potential for criticality for the fissile material that could accumulate in the near-field (invert) and in the far-field (host rock) beneath the U.S. Department of Energy (DOE) spent nuclear fuel (SNF) codisposal waste packages (WPs) as they degrade in the proposed monitored geologic repository at Yucca Mountain. The scope of this calculation is limited to the following DOE SNF types: Shippingport Pressurized Water Reactor (PWR), Enrico Fermi, Fast Flux Test Facility (FFTF), Fort St. Vrain, Melt and Dilute, Shippingport Light Water Breeder Reactor (LWBR), N-Reactor, and Training, Research, Isotope, General Atomics reactor (TRIGA). The results of this calculation are intended to be used for estimating the probability of criticality in the near-field and in the far-field. There are no limitations on use of the results of this calculation. The calculation is associated with the waste package design and was developed in accordance with the technical work plan, ''Technical Work Plan for: Department of Energy Spent Nuclear Fuel and Plutonium Disposition Work Packages'' (Bechtel SAIC Company, LLC [BSC], 2002a). This calculation is subject to the Quality Assurance Requirements and Description (QARD) per the activity evaluation under work package number P6212310Ml in the technical work plan TWP-MGR-MD-0000 101 (BSC 2002a)
Directory of Open Access Journals (Sweden)
Tanaka Ken-ichi
2016-01-01
We performed a benchmark calculation of the radioactivity activated in the Primary Containment Vessel (PCV) of a Boiling Water Reactor (BWR) by using the MAXS library, which was developed by collapsing with neutron energy spectra in the PCV of the BWR. Radioactivities due to neutron irradiation were measured by using gold (Au) and nickel (Ni) activation foil detectors at thirty locations in the PCV. As the benchmark calculation, we performed activation calculations of the foils with the SCALE5.1/ORIGEN-S code using the irradiation conditions of each foil location. We compared the calculations and measurements to estimate the effectiveness of the MAXS library.
An efficient parallel computing scheme for Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Dufek, Jan; Gudowski, Waclaw
2009-01-01
The existing parallel computing schemes for Monte Carlo criticality calculations suffer from a low efficiency when applied on many processors. We suggest a new fission matrix based scheme for efficient parallel computing. The results are derived from the fission matrix that is combined from all parallel simulations. The scheme allows for a practically ideal parallel scaling as no communication among the parallel simulations is required, and inactive cycles are not needed.
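The key idea — each parallel run tallies its own fission matrix, the matrices are merged, and the combined matrix is solved for the fundamental mode, so no source particles ever need to be exchanged — can be sketched in a few lines. The 2x2 "fission matrix" and the noise model below are stand-ins for illustration, not the authors' implementation:

```python
import numpy as np

def fundamental_mode(F, iters=200):
    """Dominant eigenvalue and normalized source of a fission matrix by power iteration."""
    s = np.ones(F.shape[0]) / F.shape[0]
    for _ in range(iters):
        s = F @ s
        k = s.sum()       # eigenvalue estimate (previous source normalized to sum 1)
        s /= k
    return k, s

# "True" fission matrix of a small stand-in problem (dominant eigenvalue exactly 1.0).
F_true = np.array([[0.6, 0.3],
                   [0.4, 0.7]])

# Each independent parallel simulation tallies a noisy estimate of the same matrix.
noise = np.array([[0.01, -0.02], [-0.01, 0.02]])
F_run_a = F_true + noise      # e.g. what node A tallied
F_run_b = F_true - noise      # e.g. what node B tallied

# Combine by averaging the per-run tallies -- no communication of source particles.
F_combined = 0.5 * (F_run_a + F_run_b)

k, source = fundamental_mode(F_combined)
print(f"k from combined fission matrix = {k:.6f}, source = {source}")
```

Because the statistical noise of independent runs averages out in the combined matrix, the eigenpair of the merged matrix is a better estimate than any single run's, which is what makes the scheme scale without inter-simulation communication.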
International Nuclear Information System (INIS)
Chiang, Min-Han; Wang, Jui-Yu; Sheu, Rong-Jiun; Liu, Yen-Wan Hsueh
2014-01-01
The High Temperature Engineering Test Reactor (HTTR) in Japan is a helium-cooled graphite-moderated reactor designed and operated for the future development of high-temperature gas-cooled reactors. Two detailed full-core models of HTTR have been established by using SCALE6 and MCNP5/X, respectively, to study its neutronic properties. Several benchmark problems were repeated first to validate the calculation models. Careful code-to-code comparisons were made to ensure that two calculation models are both correct and equivalent. Compared with experimental data, the two models show a consistent bias of approximately 20–30 mk overestimation in effective multiplication factor for a wide range of core states. Most of the bias could be related to the ENDF/B-VII.0 cross-section library or incomplete modeling of impurities in graphite. After that, a series of systematic analyses was performed to investigate the effects of cross sections on the HTTR criticality and burnup calculations, with special interest in the comparison between continuous-energy and multigroup results. Multigroup calculations in this study were carried out in 238-group structure and adopted the SCALE double-heterogeneity treatment for resonance self-shielding. The results show that multigroup calculations tend to underestimate the system eigenvalue by a constant amount of ∼5 mk compared to their continuous-energy counterparts. Further sensitivity studies suggest the differences between multigroup and continuous-energy results appear to be temperature independent and also insensitive to burnup effects
Energy Technology Data Exchange (ETDEWEB)
Chiang, Min-Han; Wang, Jui-Yu [Institute of Nuclear Engineering and Science, National Tsing Hua University, 101, Section 2, Kung-Fu Road, Hsinchu 30013, Taiwan (China); Sheu, Rong-Jiun, E-mail: rjsheu@mx.nthu.edu.tw [Institute of Nuclear Engineering and Science, National Tsing Hua University, 101, Section 2, Kung-Fu Road, Hsinchu 30013, Taiwan (China); Department of Engineering System and Science, National Tsing Hua University, 101, Section 2, Kung-Fu Road, Hsinchu 30013, Taiwan (China); Liu, Yen-Wan Hsueh [Institute of Nuclear Engineering and Science, National Tsing Hua University, 101, Section 2, Kung-Fu Road, Hsinchu 30013, Taiwan (China); Department of Engineering System and Science, National Tsing Hua University, 101, Section 2, Kung-Fu Road, Hsinchu 30013, Taiwan (China)
2014-05-01
The High Temperature Engineering Test Reactor (HTTR) in Japan is a helium-cooled graphite-moderated reactor designed and operated for the future development of high-temperature gas-cooled reactors. Two detailed full-core models of HTTR have been established by using SCALE6 and MCNP5/X, respectively, to study its neutronic properties. Several benchmark problems were repeated first to validate the calculation models. Careful code-to-code comparisons were made to ensure that two calculation models are both correct and equivalent. Compared with experimental data, the two models show a consistent bias of approximately 20–30 mk overestimation in effective multiplication factor for a wide range of core states. Most of the bias could be related to the ENDF/B-VII.0 cross-section library or incomplete modeling of impurities in graphite. After that, a series of systematic analyses was performed to investigate the effects of cross sections on the HTTR criticality and burnup calculations, with special interest in the comparison between continuous-energy and multigroup results. Multigroup calculations in this study were carried out in 238-group structure and adopted the SCALE double-heterogeneity treatment for resonance self-shielding. The results show that multigroup calculations tend to underestimate the system eigenvalue by a constant amount of ∼5 mk compared to their continuous-energy counterparts. Further sensitivity studies suggest the differences between multigroup and continuous-energy results appear to be temperature independent and also insensitive to burnup effects.
Riding Bare-Back on unstructured meshes for 21st century criticality calculations - 244
International Nuclear Information System (INIS)
Kelley, K.C.; Martz, R.L.; Crane, D.L.
2010-01-01
MCNP has a new capability that permits tracking of neutrons and photons on an unstructured mesh, which is embedded as a mesh universe within its legacy geometry capability. The mesh geometry is created through Abaqus/CAE using its solid modeling capabilities. Transport results are calculated for mesh elements through a path-length estimator, while element-to-element tracking is performed on the mesh. The results from MCNP can be exported to Abaqus/CAE for visualization or other physics analysis. The simple Godiva criticality benchmark problem was tested with this new mesh capability. Computer run time is proportional to the number of mesh elements used. Both first- and second-order polyhedrons were used. Models that used second-order polyhedrons produced slightly better results without significantly increasing computer run time. Models that used first-order hexahedrons had shorter run times than models that used first-order tetrahedrons. (authors)
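The path-length (track-length) estimator mentioned here scores, for each mesh element, the distance a particle travels inside it divided by the element volume. A one-dimensional toy version — a purely absorbing slab with a simple three-element mesh, not the MCNP unstructured-mesh implementation — illustrates the tally:

```python
import math
import random

sigma_t = 1.0                    # total macroscopic cross section (1/cm), pure absorber
edges = [0.0, 1.0, 2.0, 3.0]     # mesh element boundaries along the slab (cm)
n_particles = 200_000

rng = random.Random(7)
tally = [0.0] * (len(edges) - 1)

for _ in range(n_particles):
    # Monodirectional source at x = 0; distance to absorption from free-flight sampling.
    s = -math.log(1.0 - rng.random()) / sigma_t
    for i, (a, b) in enumerate(zip(edges, edges[1:])):
        # Track length of this flight inside element [a, b].
        tally[i] += max(0.0, min(s, b) - a)

# Flux estimate: summed track length / (element volume * number of histories).
flux = [t / ((b - a) * n_particles)
        for t, (a, b) in zip(tally, zip(edges, edges[1:]))]

# Analytic element average of exp(-sigma_t * x) for comparison.
exact = [(math.exp(-sigma_t * a) - math.exp(-sigma_t * b)) / (sigma_t * (b - a))
         for a, b in zip(edges, edges[1:])]

print("element-average uncollided flux:", [f"{f:.4f}" for f in flux])
print("analytic                       :", [f"{e:.4f}" for e in exact])
```

The tallied element averages converge to the analytic slab attenuation, which is the sense in which a track-length estimator "calculates transport results for mesh elements" regardless of the element shape.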
Determining the nuclear data uncertainty on MONK10 and WIMS10 criticality calculations
Directory of Open Access Journals (Sweden)
Ware, Tim
2017-01-01
The ANSWERS Software Service is developing a number of techniques to better understand and quantify uncertainty on calculations of the neutron multiplication factor, k-effective, in nuclear fuel and other systems containing fissile material. The uncertainty on the calculated k-effective arises from a number of sources, including nuclear data uncertainties, manufacturing tolerances, modelling approximations and, for Monte Carlo simulation, stochastic uncertainty. For determining the uncertainties due to nuclear data, a set of application libraries has been generated for use with the MONK10 Monte Carlo and the WIMS10 deterministic criticality and reactor physics codes. This paper gives an overview of the generation of these nuclear data libraries by Latin hypercube sampling of JEFF-3.1.2 evaluated data based upon a library of covariance data taken from the JEFF, ENDF/B, JENDL and TENDL evaluations. Criticality calculations have been performed with MONK10 and WIMS10 using these sampled libraries for a number of benchmark models of fissile systems. Results are presented which show the uncertainty on k-effective for these systems arising from the uncertainty on the input nuclear data.
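The sampling scheme described in the abstract can be illustrated with a minimal sketch. Latin hypercube sampling stratifies each uncertain nuclear-data parameter into one bin per sample, so even a small sample set covers every stratum of every variable. The `run_keff` function below is a hypothetical linear stand-in for a perturbed-library criticality run, not the actual MONK10/WIMS10 codes:

```python
import random
import statistics

def latin_hypercube(n_samples, n_vars, rng=None):
    """Return n_samples points in [0,1)^n_vars with LHS stratification:
    each variable's axis is split into n_samples equal bins and every
    bin is hit exactly once."""
    rng = rng or random.Random(42)
    samples = [[0.0] * n_vars for _ in range(n_samples)]
    for j in range(n_vars):
        # one random point per stratum, then shuffle strata across samples
        points = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(points)
        for i in range(n_samples):
            samples[i][j] = points[i]
    return samples

def run_keff(perturbations):
    """Hypothetical stand-in for a criticality run with a perturbed nuclear
    data library; here k-eff responds linearly to two scaling factors."""
    fission, capture = perturbations
    return 1.0 + 0.02 * (fission - 0.5) - 0.01 * (capture - 0.5)

lhs = latin_hypercube(100, 2)
keffs = [run_keff(p) for p in lhs]
print(f"mean k-eff = {statistics.mean(keffs):.5f}, "
      f"std dev = {statistics.stdev(keffs):.5f}")
```

The spread of the resulting k-eff sample is the nuclear-data contribution to the uncertainty; in the real workflow each sample point is a whole resampled evaluated-data library rather than two scalar factors.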
Determining the nuclear data uncertainty on MONK10 and WIMS10 criticality calculations
Ware, Tim; Dobson, Geoff; Hanlon, David; Hiles, Richard; Mason, Robert; Perry, Ray
2017-09-01
Optical rotation calculated with time-dependent density functional theory: the OR45 benchmark.
Srebro, Monika; Govind, Niranjan; de Jong, Wibe A; Autschbach, Jochen
2011-10-13
Time-dependent density functional theory (TDDFT) computations are performed for 42 organic molecules and three transition metal complexes, with experimental molar optical rotations ranging from 2 to 2 × 10^4 deg cm^2 dmol^-1. The performances of the global hybrid functionals B3LYP, PBE0, and BHLYP, and of the range-separated functionals CAM-B3LYP and LC-PBE0 (the latter being fully long-range corrected), are investigated, and the performance of different basis sets is studied. When compared to liquid-phase experimental data, the range-separated functionals do not, on average, perform better than B3LYP and PBE0. Median relative deviations between calculations and experiment range from 25 to 29%. A basis set recently proposed for optical rotation calculations (LPol-ds) on average does not give improved results compared to aug-cc-pVDZ in TDDFT calculations with B3LYP. Individual cases are discussed in some detail, among them norbornenone, for which the LC-PBE0 functional produced an optical rotation that is close to available data from coupled-cluster calculations but significantly smaller in magnitude than the liquid-phase experimental value. Range-separated functionals and BHLYP perform well for helicenes and helicene derivatives. Metal complexes pose a challenge to first-principles calculations of optical rotation.
Han, Jeong-Hwan; Oda, Takuji
2018-04-01
The performance of exchange-correlation functionals in density-functional theory (DFT) calculations for liquid metal has not been sufficiently examined. In the present study, benchmark tests of the Perdew-Burke-Ernzerhof (PBE), Armiento-Mattsson 2005 (AM05), PBE re-parameterized for solids, and local density approximation (LDA) functionals are conducted for liquid sodium. The pair correlation function, equilibrium atomic volume, bulk modulus, and relative enthalpy are evaluated at 600 K and 1000 K. Compared with the available experimental data, the errors range from -11.2% to 0.0% for the atomic volume, from -5.2% to 22.0% for the bulk modulus, and from -3.5% to 2.5% for the relative enthalpy, depending on the DFT functional. The generalized gradient approximation functionals are superior to the LDA functional, and the PBE and AM05 functionals exhibit the best performance. In addition, we assess whether the error tendency in liquid simulations is comparable to that in solid simulations; the results suggest that the atomic volume and relative enthalpy performances are comparable between the solid and liquid states but that the bulk modulus performance is not. These benchmark test results indicate that the results of liquid simulations depend significantly on the exchange-correlation functional and that the DFT functional performance in solid simulations can be used to roughly estimate the performance in liquid simulations.
International Nuclear Information System (INIS)
Li, M; Chetty, I; Zhong, H
2014-01-01
Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a “demons” algorithm. The demons DVFs were corrected by an FEM model to get realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVF formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans, with 3 mm and 5 mm margins, were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to get the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to get the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. Their relative dose errors in the PTV were between 0.28% and 6.8% for the 3 mm-margin plans, and between 0.29% and 6.3% for the 5 mm-margin plans. As the PTV margin was reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean NTCP error decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients.
2011-01-01
The adsorption of Ag, Au, and Pd atoms on benzene, coronene, and graphene has been studied using post Hartree–Fock wave function theory (CCSD(T), MP2) and density functional theory (M06-2X, DFT-D3, PBE, vdW-DF) methods. The CCSD(T) benchmark binding energies for benzene–M (M = Pd, Au, Ag) complexes are 19.7, 4.2, and 2.3 kcal/mol, respectively. We found that the nature of binding of the three metals is different: While silver binds predominantly through dispersion interactions, the binding of palladium has a covalent character, and the binding of gold involves a subtle combination of charge transfer and dispersion interactions as well as relativistic effects. We demonstrate that the CCSD(T) benchmark binding energies for benzene–M complexes can be reproduced in plane-wave density functional theory calculations by including a fraction of the exact exchange and a nonempirical van der Waals correction (EE+vdW). Applying the EE+vdW method, we obtained binding energies for the graphene–M (M = Pd, Au, Ag) complexes of 17.4, 5.6, and 4.3 kcal/mol, respectively. The trends in binding energies found for the benzene–M complexes correspond to those in coronene and graphene complexes. DFT methods that use empirical corrections to account for the effects of vdW interactions significantly overestimate binding energies in some of the studied systems. PMID:22076121
Criticality calculations with MCNP™: A primer
Energy Technology Data Exchange (ETDEWEB)
Harmon, C.D. II; Busch, R.D.; Briesmeister, J.F.; Forster, R.A. [New Mexico Univ., Albuquerque, NM (United States)
1994-06-06
With the closure of many experimental facilities, the nuclear criticality safety analyst increasingly is required to rely on computer calculations to identify safe limits for the handling and storage of fissile materials. However, in many cases, the analyst has little experience with the specific codes available at his/her facility. This primer will help you, the analyst, understand and use the MCNP Monte Carlo code for nuclear criticality safety analyses. It assumes that you have a college education in a technical field. There is no assumption of familiarity with Monte Carlo codes in general or with MCNP in particular. Appendix A gives an introduction to Monte Carlo techniques. The primer is designed to teach by example, with each example illustrating two or three features of MCNP that are useful in criticality analyses. Beginning with a Quickstart chapter, the primer gives an overview of the basic requirements for MCNP input and allows you to run a simple criticality problem with MCNP. This chapter is not designed to explain either the input or the MCNP options in detail, but rather introduces basic concepts that are further explained in the following chapters. Each chapter begins with a list of basic objectives that identify the goal of the chapter, and a list of the individual MCNP features that are covered in detail in that chapter's unique example problems. It is expected that on completion of the primer you will be comfortable using MCNP in criticality calculations and will be capable of handling 80 to 90 percent of the situations that normally arise in a facility. The primer provides a set of basic input files that you can selectively modify to fit the particular problem at hand.
HEXTRAN-SMABRE calculation of the 6th AER Benchmark, main steam line break in a WWER-440 NPP
International Nuclear Information System (INIS)
Hämäläinen, A.; Kyrki-Rajamäki, R.
2003-01-01
The sixth AER benchmark is the second AER benchmark for couplings of thermal hydraulic codes and three-dimensional neutron kinetic core models. It concerns a double-ended break of one main steam line in a WWER-440 plant. The core is at the end of its first cycle in full power conditions. At VTT, HEXTRAN 2.9 is used for the core kinetics and dynamics and SMABRE 4.8 as the thermal hydraulic model for the primary and secondary loops. The plant model for SMABRE consists mainly of two input models, a Loviisa model and a standard WWER-440/213 plant model. The primary loop includes six separate loops, the pressure vessel is divided into six parallel channels in SMABRE, and the whole-core calculation is performed with HEXTRAN. The horizontal steam generators are modelled with heat transfer tubes on five levels and vertically in two parts, riser and downcomer. With this kind of detailed steam generator modelling, strong flashing occurs after the break opens. As a consequence of the main steam line break at nominal power level, the reactor trip follows quite soon. The liquid temperature continues to decrease in one core inlet sector, which may lead to recriticality and a neutron power increase. The situation is very sensitive to small changes in the steam generator and break flow modelling, and therefore several sensitivity calculations have been done. Two stuck control rods have also been assumed. Due to the boric acid concentration in the high pressure safety injection, subcriticality is finally guaranteed in the transient. (Authors)
Development and application of critical calculation code KENO
International Nuclear Information System (INIS)
Li Sumei; Jin Wenmian
1995-01-01
KENO is a large Monte Carlo criticality code developed by Oak Ridge National Laboratory (USA) and widely used worldwide. It has been improved continuously since it first appeared in 1969 and is now available as KENO IV and KENO Va. These versions have powerful geometry capabilities and can handle problems with complex three-dimensional geometries; they can use the 16-group Hansen-Roach cross-section library or AMPX-format libraries, giving them good cross-section flexibility. In addition, KENO Va can handle large group structures while saving computer RAM. The main calculated result is k-eff. This paper introduces the major features of the KENO program, its improvement and development, its adaptation to microcomputers, and its popularized application.
Criticality safety calculations for three types of final disposal canisters
International Nuclear Information System (INIS)
Anttila, M.
2005-07-01
The criticality safety of the copper/iron canisters developed for the final disposal of the Finnish spent nuclear fuel has been studied with the MCNP4C Monte Carlo code. Three types of spent fuel disposal canisters have been analysed. The differences between the canisters result from the size and geometry of the spent fuel assemblies to be disposed of in them. One canister type has been designed to contain 12 hexagonal VVER-440 fuel assemblies used at the Loviisa nuclear power plant ('VVER canister'). The second type is for 12 square BWR fuel bundles used at the Olkiluoto 1 and 2 units ('BWR canister') and the third type is for four fuel assemblies of the Olkiluoto 3 unit to be constructed in the near future ('EPR canister'). Each canister type is of similar size in the radial direction, but the axial lengths vary significantly. A spent fuel disposal canister must meet the normal criticality safety criteria. The effective multiplication factor must be less than 0.95 also when the canister is in the most reactive credible configuration (optimum moderation and close reflection). Uncertainties in the calculation methods may necessitate the use of an even lower reactivity limit. However, no systematic uncertainty analysis was carried out during this study. It has been proved in an earlier study that a version of the VVER canister loaded with twelve similar fresh VVER-440 assemblies with the initial enrichment of 4.2% fulfils the criticality safety criteria. Also an earlier design of the BWR canister loaded with twelve fresh BWR assemblies of so-called ATRIUM 10x10-9Q type with the initial enrichment of 3.8% and without burnable absorbers has been proved to meet the safety criteria. Therefore, in this study only a few calculations have been carried out for the present versions of VVER and BWR canisters and the results are in good agreement with the previous ones. The main emphasis of this study has been on the EPR canister. This new canister type fulfils the
Beridze, George; Kowalski, Piotr M
2014-12-18
Ability to perform a feasible and reliable computation of thermochemical properties of chemically complex actinide-bearing materials would be of great importance for nuclear engineering. Unfortunately, density functional theory (DFT), which in many instances is the only affordable ab initio method, often fails for actinides. Among various shortcomings, it leads to wrong estimates of the enthalpies of reactions between actinide-bearing compounds, putting the applicability of the DFT approach to the modeling of thermochemical properties of actinide-bearing materials into question. Here we test the performance of the DFT+U method, a computationally affordable extension of DFT that explicitly accounts for the correlations between f-electrons, for the prediction of the thermochemical properties of simple uranium-bearing molecular compounds and solids. We demonstrate that the DFT+U approach significantly improves the description of reaction enthalpies for the uranium-bearing gas-phase molecular compounds and solids, and that the deviations from the experimental values are comparable to those obtained with much more computationally demanding methods. Good results are obtained with Hubbard U parameter values derived using the linear response method of Cococcioni and de Gironcoli. We found that the value of the Coulomb on-site repulsion, represented by the Hubbard U parameter, strongly depends on the oxidation state of the uranium atom. Last, but not least, we demonstrate that the thermochemistry data can be successfully used to estimate the value of the Hubbard U parameter needed for DFT+U calculations.
TMI-2 criticality studies: lower-vessel rubble and analytical benchmarking
International Nuclear Information System (INIS)
Westfall, R.M.; Knight, J.R.; Fox, P.B.; Herman, O.W.; Turner, J.C.
1985-12-01
A bounding strategy has been adopted for assuring subcriticality during all TMI-2 defueling operations. The strategy is based upon establishing a safe soluble boron level for the entire reactor core in an optimum reactivity configuration. This paper presents the determination of a fuel rubble model which yields a maximum infinite lattice multiplication factor and the subsequent application of cell-averaged constants in finite system analyses. Included in the analyses are the effects of fuel burnup determined from a simplified power history of the reactor. A discussion of the analytical methods employed and the determination of an analytical bias with benchmark critical experiments completes the presentation. 17 tabs
Automatic fission source convergence criteria for Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Shim, Hyung Jin; Kim, Chang Hyo
2005-01-01
The Monte Carlo criticality calculations for the multiplication factor and the power distribution in a nuclear system require knowledge of the stationary or fundamental-mode fission source distribution (FSD) in the system. Because it is a priori unknown, so-called inactive cycle Monte Carlo (MC) runs are performed to determine it. The inactive cycle MC runs should be continued until the FSD converges to the stationary FSD. Obviously, if one stops them prematurely, the MC calculation results may be biased because the follow-up active cycles may be run with a non-stationary FSD. Conversely, if one performs more inactive cycle MC runs than necessary, one wastes computing time, because inactive cycle MC runs serve only to elicit the fundamental-mode FSD. In the absence of suitable criteria for terminating the inactive cycle MC runs, one cannot but rely on empiricism in deciding how many inactive cycles to conduct for a given problem. Depending on the problem, this may introduce biases into Monte Carlo estimates of the parameters one tries to calculate. The purpose of this paper is to present new fission source convergence criteria designed for the automatic termination of inactive cycle MC runs
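The termination problem described above is often attacked by monitoring a scalar diagnostic of the binned fission source distribution. The sketch below uses the Shannon entropy of the FSD, a standard diagnostic in MC criticality codes; this is an illustrative alternative, not necessarily the criteria the paper proposes, and the toy "drifting source" model is invented for demonstration:

```python
import math
import random

def shannon_entropy(counts):
    """Shannon entropy (bits) of a binned fission source distribution."""
    total = sum(counts)
    h = 0.0
    for c in counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h

def entropy_converged(history, window=10, tol=0.005):
    """Declare the source converged once the last `window` entropy values
    all lie within `tol` of their window mean."""
    if len(history) < window:
        return False
    recent = history[-window:]
    mean = sum(recent) / window
    return all(abs(h - mean) <= tol for h in recent)

# Toy demonstration: a source that drifts toward a flat 8-bin distribution.
rng = random.Random(1)
entropies = []
for cycle in range(50):
    drift = max(0.0, 1.0 - cycle / 20)  # initial concentration in bin 0 decays
    counts = [1000 * (1 + drift * (b == 0)) + rng.randint(-20, 20)
              for b in range(8)]
    entropies.append(shannon_entropy(counts))
    if entropy_converged(entropies):
        print(f"source entropy stationary after cycle {cycle}")
        break
```

For a flat 8-bin source the entropy approaches log2(8) = 3 bits; the inactive-cycle phase ends once the entropy trace is stationary within the tolerance.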
DEFF Research Database (Denmark)
Cai, Xiao-Xiao; Llamas-Jansa, Isabel; Mullet, Steven
2013-01-01
, U and O in uranium dioxide, Al metal, Be metal, and Fe metal. The native HP cross section library G4NDL does not include data for elements with atomic numbers larger than 92. Therefore, transuranic elements, which matter for a realistic reactor, cannot be simulated by the combination of the HP...
International Nuclear Information System (INIS)
Cho, Moon-Sung; Kim, Y. M.; Lee, Y. W.; Jeong, K. C.; Kim, Y. K.; Oh, S. C.
2006-01-01
The fundamental design for a gas-cooled reactor relies on an understanding of the behavior of coated particle fuel. KAERI, which has been carrying out the Korean VHTR (Very High Temperature modular gas cooled Reactor) Project since 2004, is developing a fuel performance analysis code for a VHTR named COPA (COated Particle fuel Analysis). COPA predicts temperatures, stresses, fission gas release and failure probabilities of coated particle fuel in normal operating conditions. Validation of COPA during its development is realized partly by participating in the benchmark section of the international CRP-6 program led by the IAEA, which provides comprehensive benchmark problems and analysis results obtained from the CRP-6 member countries. Apart from the validation effort through CRP-6, a validation of COPA was attempted by comparing its benchmark results with the visco-elastic solutions obtained from ABAQUS code calculations for the same CRP-6 TRISO coated particle benchmark problems involving creep, swelling, and pressure. The study shows the calculation results of IAEA CRP-6 benchmark cases 5 through 7 obtained using the ABAQUS FE model for comparison with the COPA results.
Monte Carlo benchmark calculations of energy deposition by electron/photon showers up to 1 GeV
International Nuclear Information System (INIS)
Mehlhorn, T.A.; Halbleib, J.A.
1983-01-01
Over the past several years the TIGER series of coupled electron/photon Monte Carlo transport codes has been applied to a variety of problems involving nuclear and space radiations, electron accelerators, and radioactive sources. In particular, they have been used at Sandia to simulate the interaction of electron beams, generated by pulsed-power accelerators, with various target materials for weapons effect simulation, and electron beam fusion. These codes are based on the ETRAN system which was developed for an energy range from about 10 keV up to a few tens of MeV. In this paper we will discuss the modifications that were made to the TIGER series of codes in order to extend their applicability to energies of interest to the high energy physics community (up to 1 GeV). We report the results of a series of benchmark calculations of the energy deposition by high energy electron beams in various materials using the modified codes. These results are then compared with the published results of various experimental measurements and other computational models
International Nuclear Information System (INIS)
Grant, C.; Mollerach, R.; Leszczynski, F.; Serra, O.; Marconi, J.; Fink, J.
2006-01-01
In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the previous ten years. Atucha-II is a 745 MWe nuclear station of German (Siemens) design located in Argentina, moderated and cooled with heavy water. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. For the reactor physics area, a revision and update of reactor physics calculation methods and models was recently carried out, covering cell, supercell (control rod) and core calculations. This paper presents benchmark comparisons of core parameters of a slightly idealized model of the Atucha-I core obtained with the PUMA reactor code and with MCNP5. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II, and has some experimental data available. To validate the new models, benchmark comparisons of k-effective, channel power and axial power distributions obtained with PUMA and MCNP5 have been performed. In addition, a simple cell heterogeneity correction recently introduced in PUMA is presented, which significantly improves the agreement of calculated channel powers with MCNP5. To complete the validation, the calculation of some of the critical configurations of the Atucha-I reactor measured during the experiments performed at first criticality is also presented. (authors)
Energy Technology Data Exchange (ETDEWEB)
Fang, Zongtang; Both, Johan; Li, Shenggang; Yue, Shuwen; Aprà, Edoardo; Keçeli, Murat; Wagner, Albert F.; Dixon, David A.
2016-08-09
The heats of formation and the normalized clustering energies (NCEs) for the group 4 and group 6 transition metal oxide (TMO) trimers and tetramers have been calculated by the Feller-Peterson-Dixon (FPD) method. The heats of formation predicted by the FPD method do not differ much from those previously derived from the NCEs at the CCSD(T)/aT level except for the CrO3 nanoclusters. New and improved heats of formation for Cr3O9 and Cr4O12 were obtained using PW91 orbitals instead of Hartree-Fock (HF) orbitals. Diffuse functions are necessary to predict accurate heats of formation. The fluoride affinities (FAs) are calculated with the CCSD(T) method. The relative energies (REs) of different isomers, NCEs, electron affinities (EAs), and FAs of (MO2)n (M = Ti, Zr, Hf; n = 1–4) and (MO3)n (M = Cr, Mo, W; n = 1–3) clusters have been benchmarked with 55 exchange-correlation DFT functionals including both pure and hybrid types. The absolute errors of the DFT results are mostly less than ±10 kcal/mol for the NCEs and the EAs, and less than ±15 kcal/mol for the FAs. Hybrid functionals usually perform better than the pure functionals for the REs and NCEs. The performance of the two types of functionals in predicting EAs and FAs is comparable. The B1B95 and PBE1PBE functionals provide reliable energetic properties for most isomers. Long-range corrected pure functionals usually give poor FAs. The standard deviation of the absolute error is always close to the mean error, and the probability distributions of the DFT errors are often not Gaussian (normal). The breadth of the distribution of errors and the maximum probability are dependent on the energy property and the isomer.
Energy Technology Data Exchange (ETDEWEB)
Bess, J. D.; Briggs, J. B.; Gulliford, J.; Ivanova, T.; Rozhikhin, E. V.; Semenov, M. Yu.; Tsibulya, A. M.; Koscheev, V. N.
2017-07-01
is the critical experiments with fast reactor fuel rods in water, interesting in terms of the justification of nuclear safety during the transportation and storage of fresh and spent fuel. These reports provide a detailed review of the experiments, designate their area of application, and include results of calculations with modern constants systems in comparison with the evaluated experimental data.
Nuclear criticality safety calculational analysis for small-diameter containers
International Nuclear Information System (INIS)
LeTellier, M.S.; Smallwood, D.J.; Henkel, J.A.
1995-11-01
This report documents calculations performed to establish a technical basis for the nuclear criticality safety of favorable geometry containers, sometimes referred to as 5-inch containers, in use at the Portsmouth Gaseous Diffusion Plant. A list of containers currently used in the plant is shown in Table 1.0-1. These containers are currently used throughout the plant with no mass limits. The use of containers with geometries or material types other than those addressed in this evaluation must be bounded by this analysis or have an additional analysis performed. The following five basic container geometries were modeled and bound all container geometries in Table 1.0-1: (1) 4.32-inch-diameter by 50-inch-high polyethylene bottle; (2) 5.0-inch-diameter by 24-inch-high polyethylene bottle; (3) 5.25-inch-diameter by 24-inch-high steel can ("F-can"); (4) 5.25-inch-diameter by 15-inch-high steel can ("Z-can"); and (5) 5.0-inch-diameter by 9-inch-high polybottle ("CO-4"). Each container type is evaluated using five basic reflection and interaction models that include single containers and multiple containers in normal and in credible abnormal conditions. The uranium materials evaluated are UO2F2 + H2O and UF4 + oil at 100% and 10% enrichments, and U3O8 + H2O at 100% enrichment. The design basis safe criticality limit for the Portsmouth facility is k-eff + 2σ < 0.95. The KENO study results may be used as the basis for evaluating general use of these containers in the plant.
International Nuclear Information System (INIS)
Kooyman, Timothee; Messaoudia, Nadia
2014-01-01
A sensitivity study on a set of evaluated criticality benchmarks with two versions of the JEFF nuclear data library, namely JEFF-3.1.2 and JEFF-3.2T, and ENDF/B-VII.1 was performed using MCNP(X) 2.6.0. As these benchmarks serve to estimate the upper safety limit for criticality risk analysis at SCK•CEN, the sensitivity of their results to nuclear data is an important parameter to assess. Several nuclides were identified as being responsible for an evident change in the effective multiplication factor k-eff: 235U, 239Pu, 240Pu, 54Fe, 56Fe, 57Fe and 208Pb. A high sensitivity was found to the fission cross-section of all the fissile material in the study. Additionally, a smaller sensitivity to the inelastic and capture cross-sections of 235U and 240Pu was also found. Sensitivity to the scattering law for non-fissile material was postulated. The biggest change in k-eff due to non-fissile material was due to the 208Pb evaluation (±700 pcm), followed by 56Fe (±360 pcm), for both versions of the JEFF library. Changes due to 235U (±300 pcm) and the Pu isotopes (±120 pcm for 239Pu and ±80 pcm for 240Pu) were found only with JEFF-3.1.2. 238U was found to have no effect on k-eff. Significant improvements were identified between the two versions of the JEFF library. No further differences were found between the JEFF-3.2T and ENDF/B-VII.1 calculations involving 235U or Pu. (authors)
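The ±pcm figures quoted in abstracts like the one above are reactivity differences between pairs of k-eff results. A common convention converts two multiplication factors to a reactivity difference via dρ = (1/k_a - 1/k_b) × 10^5 pcm; the k-eff values in the sketch below are invented for illustration, not taken from the study:

```python
def reactivity_diff_pcm(keff_a, keff_b):
    """Reactivity difference between two k-eff results in pcm,
    using rho = (k - 1)/k, so drho = 1/keff_a - 1/keff_b."""
    return (1.0 / keff_a - 1.0 / keff_b) * 1e5

# Hypothetical k-eff results for one benchmark with two library versions:
k_lib_a, k_lib_b = 0.99620, 1.00320
print(f"library B vs library A: "
      f"{reactivity_diff_pcm(k_lib_a, k_lib_b):+.0f} pcm")
```

With these invented inputs the difference is about +700 pcm, i.e. the same order as the largest library-to-library effects reported in the abstract.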
Improved estimation of the variance in Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Hoogenboom, J. Eduard
2008-01-01
Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k-eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation of k-eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k-eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation, even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)
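The point about unreliable variance estimates from a small number of cycles can be made concrete. For n independent cycle estimates, the sample variance itself has a relative standard deviation of roughly sqrt(2/(n-1)) under near-normal statistics; the sketch below, with synthetic cycle data (not the paper's weight-based estimator), shows how large that is for a typical short run:

```python
import random
import statistics

def cycle_statistics(keff_cycles):
    """Mean, standard deviation of the mean, and a simple normal-theory
    estimate of the relative std dev of the sample variance (a VoV proxy),
    from per-cycle k-eff values."""
    n = len(keff_cycles)
    mean = statistics.fmean(keff_cycles)
    var = statistics.variance(keff_cycles)   # unbiased sample variance
    std_of_mean = (var / n) ** 0.5
    # For near-normal samples, Var(sample variance) ~ 2*var^2/(n-1),
    # so its relative standard deviation is ~ sqrt(2/(n-1)).
    rel_std_of_var = (2.0 / (n - 1)) ** 0.5
    return mean, std_of_mean, rel_std_of_var

rng = random.Random(7)
cycles = [rng.gauss(1.000, 0.005) for _ in range(25)]   # 25 active cycles
mean, sigma, rel = cycle_statistics(cycles)
print(f"k-eff = {mean:.5f} +/- {sigma:.5f} "
      f"(variance estimate itself uncertain by ~{100 * rel:.0f}%)")
```

With only 25 active cycles the variance estimate carries a relative uncertainty near 30%, which is exactly why estimators that extract more statistical information per cycle, as the paper proposes, are attractive.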
Energy Technology Data Exchange (ETDEWEB)
Kotsarev, Alexander; Lizorkin, Mikhail [National Research Centre 'Kurchatov Institute', Moscow (Russian Federation); Bencik, Marek; Hadek, Jan [UJV Rez, a.s., Rez (Czech Republic); Kozmenkov, Yaroslav; Kliem, Soeren [Helmholtz-Zentrum Dresden-Rossendorf (HZDR) e.V., Dresden (Germany)
2016-09-15
The 7th AER dynamic benchmark is a continuation of the efforts to validate the codes systematically for the estimation of the transient behavior of VVER-type nuclear power plants. The main part of the benchmark is the simulation of the re-connection of an isolated circulation loop with low temperature in a VVER-440 plant. This benchmark was calculated by the National Research Centre 'Kurchatov Institute' (with the code ATHLET/BIPR-VVER), UJV Rez (with the code RELAP5-3D©) and HZDR (with the code DYN3D/ATHLET). The paper gives an overview of the behavior of the main thermal hydraulic and neutron kinetic parameters in the provided solutions.
Benchmarks of subcriticality in accelerator-driven system at Kyoto University Critical Assembly
Directory of Open Access Journals (Sweden)
Cheol Ho Pyeon
2017-09-01
Basic research on the accelerator-driven system is conducted by combining 235U-fueled and 232Th-loaded cores in the Kyoto University Critical Assembly with the pulsed neutron generator (14 MeV neutrons) and the proton beam accelerator (100 MeV protons) with a heavy-metal target. Experimental subcriticality results are presented over a wide range of subcriticality levels between near-critical and 10,000 pcm, as obtained by the pulsed neutron source method, the Feynman-α method, and the neutron source multiplication method.
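The neutron source multiplication method mentioned above can be sketched in its simplest point-model form: with a steady external source, the detector count rate scales as 1/(1 - k), so a subcritical k follows from count rates measured relative to a reference state of known k. The function names and the single-detector point model below are illustrative assumptions, not the KUCA analysis procedure.

```python
def k_from_nsm(count_rate, ref_count_rate, k_ref):
    """Neutron source multiplication (point-model) estimate of k.

    In the point-kinetics approximation the count rate with a steady
    external source scales as C ~ 1/(1 - k), so
        k = 1 - (1 - k_ref) * C_ref / C.
    """
    return 1.0 - (1.0 - k_ref) * ref_count_rate / count_rate

def reactivity_pcm(k):
    """Reactivity rho = (k - 1)/k expressed in pcm (1 pcm = 1e-5)."""
    return (k - 1.0) / k * 1e5
```

For example, doubling the count rate relative to a reference state with k_ref = 0.99 implies k = 0.995, about -503 pcm of subcriticality in this simple model.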
Energy Technology Data Exchange (ETDEWEB)
Hovi, Ville; Taivassalo, Veikko; Haemaelaeinen, Anitta; Raety, Hanna; Syrjaelahti, Elina [VTT Technical Research Centre of Finland Ltd, VTT (Finland)
2017-09-15
The 7th dynamic AER benchmark is the first in which three-dimensional thermal hydraulics codes are supposed to be applied. The aim is to get a more precise core inlet temperature profile than the sector temperatures typically available with system codes. The benchmark consists of a start-up of the sixth, isolated loop in a VVER-440 plant. The isolated loop initially contains cold water without boric acid, and the start-up leads to a somewhat asymmetrical core power increase due to feedbacks in the core. In this study, the 7th AER benchmark is calculated with the three-dimensional nodal reactor dynamics code HEXTRAN-SMABRE coupled with the porous computational fluid dynamics code PORFLO. These three codes are developed at VTT. A novel two-way coupled simulation of the 7th AER benchmark was performed successfully, demonstrating the feasibility and advantages of the new reactor analysis framework. The modelling issues for this benchmark are reported and some evaluation against the previously reported comparisons between the system codes is provided.
Start-up of a cold loop in a VVER-440, the 7th AER benchmark calculation with HEXTRAN-SMABRE-PORFLO
International Nuclear Information System (INIS)
Hovi, Ville; Taivassalo, Veikko; Haemaelaeinen, Anitta; Raety, Hanna; Syrjaelahti, Elina
2017-01-01
The 7th dynamic AER benchmark is the first in which three-dimensional thermal hydraulics codes are supposed to be applied. The aim is to get a more precise core inlet temperature profile than the sector temperatures typically available with system codes. The benchmark consists of a start-up of the sixth, isolated loop in a VVER-440 plant. The isolated loop initially contains cold water without boric acid, and the start-up leads to a somewhat asymmetrical core power increase due to feedbacks in the core. In this study, the 7th AER benchmark is calculated with the three-dimensional nodal reactor dynamics code HEXTRAN-SMABRE coupled with the porous computational fluid dynamics code PORFLO. These three codes are developed at VTT. A novel two-way coupled simulation of the 7th AER benchmark was performed successfully, demonstrating the feasibility and advantages of the new reactor analysis framework. The modelling issues for this benchmark are reported and some evaluation against the previously reported comparisons between the system codes is provided.
International Nuclear Information System (INIS)
Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.
1991-01-01
Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems
Benchmark critical experiments on low-enriched uranium oxide systems with H/U = 0.77
International Nuclear Information System (INIS)
Tuck, G.; Oh, I.
1979-08-01
Ten benchmark experiments were performed at the Critical Mass Laboratory at Rockwell International's Rocky Flats Plant, Golden, Colorado, for the US Nuclear Regulatory Commission. They provide accurate criticality data for low-enriched damp uranium oxide (U3O8) systems. The core studied consisted of 152 mm cubical aluminum cans containing an average of 15,129 g of low-enriched (4.46% 235U) uranium oxide compacted to a density of 4.68 g/cm3 and with an H/U atomic ratio of 0.77. One hundred twenty-five (125) of these cans were arranged in an approx. 770 mm cubical array. Since the oxide alone cannot be made critical in an array of this size, an enriched (approx. 93% 235U) metal or solution driver was used to achieve criticality. Measurements are reported for systems having the least practical reflection and for systems reflected by approx. 254-mm-thick concrete or plastic. Under the three reflection conditions, the mass of the uranium metal driver ranged from 29.87 kg to 33.54 kg for an oxide core of 1864.6 kg. For an oxide core of 1824.9 kg, the weight of the high-concentration (351.2 kg U/m3) solution driver varied from 14.07 kg to 16.14 kg, and the weight of the low-concentration (86.4 kg U/m3) solution driver from 12.4 kg to 14.0 kg.
Energy Technology Data Exchange (ETDEWEB)
Kozier, K. S.; Roubtsov, D. [AECL, Chalk River Laboratories, Chalk River, ON (Canada); Plompen, A. J. M.; Kopecky, S. [EC-JRC, Inst. for Reference Materials and Measurements, Retieseweg 111, 2440 Geel (Belgium)
2012-07-01
The thermal neutron elastic-scattering cross-section data for 16O used in various modern evaluated-nuclear-data libraries were reviewed and found to be generally too high compared with the best available experimental measurements. Some of the proposed revisions to the ENDF/B-VII.0 16O data library and recent results from the TENDL system increase this discrepancy further. The reactivity impact of revising the 16O data downward to be consistent with the best measurements was tested using the JENDL-3.3 16O cross-section values and was found to be very small in MCNP5 simulations of the UO2 and reactor-recycle MOX-fuel cases of the ANS Doppler-defect numerical benchmark. However, large reactivity differences of up to about 14 mk (1400 pcm) were observed using 16O data files from several evaluated-nuclear-data libraries in MCNP5 simulations of the Los Alamos National Laboratory HEU heavy-water solution thermal critical experiments, which were performed in the 1950s. The latter result suggests that new measurements using HEU in a heavy-water-moderated critical facility, such as the ZED-2 zero-power reactor at the Chalk River Laboratories, might help to resolve the discrepancy between the 16O thermal elastic-scattering cross-section values and thereby reduce or better define its uncertainty, although additional assessment work would be needed to confirm this. (authors)
International Nuclear Information System (INIS)
Mark Dennis Usang; Mohd Hairie Rabir; Mohd Amin Sharifuldin Salleh; Mohamad Puad Abu
2012-01-01
MPI parallelism is implemented on a SUN workstation for running MCNPX and on the High Performance Computing Facility (HPC) for running MCNP5. Twenty-three inputs obtained from the MCNP Criticality Validation Suite are utilized for the purpose of evaluating the speed-up achievable by using the parallel capabilities of MPI. More importantly, we study the economics of using more processors and the type of problem for which the performance gains are obvious. This is important to enable better practices of resource sharing, especially of the HPC facility's processing time. Future endeavours in this direction might even reveal clues for best MCNP5/MCNPX coding practices for optimum performance of MPI parallelism. (author)
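The economics of adding processors can be framed with Amdahl's law: for a code with parallel fraction p, speedup saturates while per-processor efficiency falls, which is the trade-off such a study examines. The parallel fraction used in the example is a hypothetical value, not one measured on the cited suite.

```python
def amdahl_speedup(p, n):
    """Amdahl's-law speedup for parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_efficiency(p, n):
    """Speedup per processor; a proxy for how economically
    the n processors are being used."""
    return amdahl_speedup(p, n) / n
```

For instance, with an assumed p = 0.95, going from 8 to 16 processors raises the speedup from about 5.9 to about 9.1 but drops the efficiency from roughly 74% to 57%, illustrating why more processors are not always cost-effective.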
A thermo-mechanical benchmark calculation of a hexagonal can in the BTI accident with the ABAQUS code
International Nuclear Information System (INIS)
Zucchini, A.
1988-07-01
The thermo-mechanical behaviour of a hexagonal can in a benchmark problem (simulating the conditions of a BTI accident in a fuel assembly) is examined by means of the ABAQUS code: the effects of the geometric nonlinearity are shown and the results are compared with those of a previous analysis performed with the INCA code. (author)
Sylvetsky, Nitai; Kesharwani, Manoj K; Martin, Jan M L
2017-10-07
We have developed a new basis set family, denoted as aug-cc-pVnZ-F12 (or aVnZ-F12 for short), for explicitly correlated calculations. The sets included in this family were constructed by supplementing the corresponding cc-pVnZ-F12 sets with additional diffuse functions on the higher angular momenta (i.e., additional d-h functions on non-hydrogen atoms and p-g on hydrogen atoms), optimized for the MP2-F12 energy of the relevant atomic anions. The new basis sets have been benchmarked against electron affinities of the first- and second-row atoms, the W4-17 dataset of total atomization energies, the S66 dataset of noncovalent interactions, the Benchmark Energy and Geometry Data Base water cluster subset, and the WATER23 subset of the GMTKN24 and GMTKN30 benchmark suites. The aVnZ-F12 basis sets displayed excellent performance, not just for electron affinities but also for noncovalent interaction energies of neutral and anionic species. Appropriate CABSs (complementary auxiliary basis sets) were explored for the S66 noncovalent interaction benchmark: between similar-sized basis sets, CABSs were found to be more transferable than generally assumed.
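Benchmarking a basis set against reference datasets such as W4-17 or S66 is usually summarized with simple error statistics. The sketch below is generic and not tied to the authors' data; it computes the mean signed deviation, mean absolute deviation, and root-mean-square deviation of calculated values against reference values.

```python
import numpy as np

def benchmark_errors(calc, ref):
    """Summary error statistics of calculated vs. reference values
    (e.g., atomization or interaction energies in kcal/mol)."""
    d = np.asarray(calc, dtype=float) - np.asarray(ref, dtype=float)
    return {
        "MSD": d.mean(),                  # mean signed deviation
        "MAD": np.abs(d).mean(),          # mean absolute deviation
        "RMSD": np.sqrt((d ** 2).mean()), # root-mean-square deviation
    }
```

RMSD penalizes outliers more than MAD, so reporting both (as benchmark studies commonly do) gives a fuller picture of a basis set's performance.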
Merger of Nuclear Data with Criticality Safety Calculations
Energy Technology Data Exchange (ETDEWEB)
Derrien, H.; Larson, N.M.; Leal, L.C.
1999-09-20
In this paper we report on current activities related to the merger of differential/integral data (especially in the resolved-resonance region) with nuclear criticality safety computations. Techniques are outlined for closer coupling of many processes (measurement, data reduction, differential-data analysis, integral-data analysis, generating multigroup cross sections, data-testing, criticality computations) which in the past have been treated independently.
Merger of Nuclear Data with Criticality Safety Calculations
International Nuclear Information System (INIS)
Derrien, H.; Larson, N.M.; Leal, L.C.
1999-01-01
In this paper we report on current activities related to the merger of differential/integral data (especially in the resolved-resonance region) with nuclear criticality safety computations. Techniques are outlined for closer coupling of many processes (measurement, data reduction, differential-data analysis, integral-data analysis, generating multigroup cross sections, data-testing, criticality computations) which in the past have been treated independently.
International Nuclear Information System (INIS)
Nguyen Kien Cuong; Vo Doan Hai Dang; Luong Ba Vien; Le Vinh Vinh; Huynh Ton Nghiem; Nguyen Minh Tuan; Nguyen Manh Hung; Pham Quang Huy; Tran Quoc Duong; Tran Tri Vien
2015-01-01
Based on the idea of using fuel of nuclear power plants such as PWR (AP-1000) and VVER-1000 with light water as moderator, design calculations for a critical assembly were performed to confirm the possibility of using these fuels. The designed critical assembly has a simple structure consisting of low-enriched fuel from 1.6% to 5% U-235; water functions as coolant, biological protection and control. The critical assembly is operated at a nominal power of 100 W with a fuel pitch of about 2.0 cm. Applications of the critical assembly are quite abundant in basic research, education and training, with a low investment cost compared with a research reactor and ease of operation. The critical assembly can therefore be used by a university or training centre for nuclear engineering training. The main objectives of the project are: design calculations in neutronics, thermal hydraulics and safety analysis for critical configuration benchmarks using low-enriched fuel; design of mechanical and auxiliary systems for the critical assembly; and determination of technical specifications and estimation of the construction and installation costs of the critical assembly. The process of design, fabrication, installation and construction of the critical assembly will be considered in different implementation phases, and localization of the installation of the critical assembly is highly feasible. Cost estimation of construction and installation showed that the investment cost for the critical assembly is much lower than for a research reactor, and most components and systems of the critical assembly can be localized with the current technical capability of the country. (author)
International Nuclear Information System (INIS)
Lopez Aldama, D.; Rodriguez Gual, R.
1998-01-01
The present work intends to validate the models and programs used at the Nuclear Technology Center for calculating the critical position of control rods by means of the analysis of the measurements performed at the critical facility IPEN/MB-01. The lattice calculations were carried out with the WIMS/D4 code, and for the global calculations the diffusion code SNAP-3D was used.
Comparison of MCNPX and Albedo method in criticality calculation
International Nuclear Information System (INIS)
Cunha, Victor L. Lassance; Rebello, Wilson F.; Cabral, Ronaldo G.; Melo, Fernando da S.; Silva, Ademir X. da
2009-01-01
This study aims to conduct a computer simulation that calculates the reactivity of a homogeneous reactor and to compare the results with calculations made by the albedo method. The simulation was developed using MCNPX. The study compared the results calculated for a hypothetical reactor by the albedo method with four energy groups against those obtained by the MCNPX simulation. The reactor design is spherical and homogeneous with a reflector of finite thickness. The values obtained for the neutron effective multiplication factor k_eff are compared. Different situations were simulated in order to obtain results closer to the compared method and to reality. Good consistency was observed between the calculated results. (author)
Energy Technology Data Exchange (ETDEWEB)
J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori
2007-05-01
Since ICNC 2003, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) has continued to expand its efforts and broaden its scope. Criticality-alarm / shielding type benchmarks and fundamental physics measurements that are relevant to criticality safety applications are not only included in the scope of the project, but benchmark data are also included in the latest version of the handbook. A considerable number of improvements have been made to the searchable database, DICE, and the criticality-alarm / shielding benchmarks and fundamental physics measurements have been included in the database. There were 12 countries participating on the ICSBEP in 2003. That number has increased to 18 with recent contributions of data and/or resources from Brazil, Czech Republic, Poland, India, Canada, and China. South Africa, Germany, Argentina, and Australia have been invited to participate. Since ICNC 2003, the contents of the “International Handbook of Evaluated Criticality Safety Benchmark Experiments” have increased from 350 evaluations (28,000 pages) containing benchmark specifications for 3070 critical or subcritical configurations to 442 evaluations (over 38,000 pages) containing benchmark specifications for 3957 critical or subcritical configurations, 23 criticality-alarm-placement / shielding configurations with multiple dose points for each, and 20 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications in the 2006 Edition of the ICSBEP Handbook. Approximately 30 new evaluations and 250 additional configurations are expected to be added to the 2007 Edition of the Handbook. Since ICNC 2003, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. Beginning in 1999, the IRPhEP was conducted as a pilot activity by the Organization of Economic Cooperation and Development (OECD) Nuclear Energy Agency
International Nuclear Information System (INIS)
J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori
2007-01-01
Since ICNC 2003, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) has continued to expand its efforts and broaden its scope. Criticality-alarm/shielding type benchmarks and fundamental physics measurements that are relevant to criticality safety applications are not only included in the scope of the project, but benchmark data are also included in the latest version of the handbook. A considerable number of improvements have been made to the searchable database, DICE, and the criticality-alarm/shielding benchmarks and fundamental physics measurements have been included in the database. There were 12 countries participating on the ICSBEP in 2003. That number has increased to 18 with recent contributions of data and/or resources from Brazil, Czech Republic, Poland, India, Canada, and China. South Africa, Germany, Argentina, and Australia have been invited to participate. Since ICNC 2003, the contents of the ''International Handbook of Evaluated Criticality Safety Benchmark Experiments'' have increased from 350 evaluations (28,000 pages) containing benchmark specifications for 3070 critical or subcritical configurations to 442 evaluations (over 38,000 pages) containing benchmark specifications for 3957 critical or subcritical configurations, 23 criticality-alarm-placement/shielding configurations with multiple dose points for each, and 20 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications in the 2006 Edition of the ICSBEP Handbook. Approximately 30 new evaluations and 250 additional configurations are expected to be added to the 2007 Edition of the Handbook. Since ICNC 2003, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. Beginning in 1999, the IRPhEP was conducted as a pilot activity by the Organization of Economic Cooperation and Development (OECD) Nuclear Energy Agency
Criticality calculation of the nuclear material warehouse of the ININ
International Nuclear Information System (INIS)
Garcia, T.; Angeles, A.; Flores C, J.
2013-10-01
In this work the nuclear safety conditions were determined, both under normal conditions and in the event of an accident, for the nuclear fuel warehouse of the TRIGA Mark III reactor of the Instituto Nacional de Investigaciones Nucleares (ININ). The warehouse contains standard fuel elements LEU 8.5/20, a control rod with a follower of standard LEU 8.5/20 fuel, fuel elements LEU 30/20, and the reactor fuel SUR-100. To check the subcritical state of the warehouse the effective multiplication factor (keff) was calculated. The keff calculation was carried out with the code MCNPX. (Author)
International Nuclear Information System (INIS)
Marusich, R.M. Westinghouse Hanford
1996-01-01
The purpose of this calculation note is to provide the basis for criticality consequences for the Tank Farm Safety Analysis Report (FSAR). A criticality scenario is developed, and details and a description of the analysis methods are provided
CRISTAL V1: Criticality package for burn up credit calculations
International Nuclear Information System (INIS)
Gomit, Jean-Michel; Cousinou, Patrick; Gantenbein, Francoise; Diop, Cheikh; Fernandez de Grado, Guy; Mijuin, Dominique; Grouiller, Jean-Paul; Marc, Andre; Toubon, Herve
2003-01-01
The first version of the CRISTAL package, created and validated as part of a joint project between IRSN, COGEMA and CEA, was delivered to users in November 1999. This fruitful cooperation between IRSN, COGEMA and CEA was pursued until 2003 with the development and validation of the package CRISTAL V1, whose main objective is to improve criticality safety studies by including the burnup credit effect. (author)
A method for calculating the critical time under blowdown conditions during a SBLOCA
International Nuclear Information System (INIS)
Su Guanghui; Yu Zhenwan; Guo Yujun; Zhang Jinling; Qiu Suizheng; Jia Dounan
1994-01-01
The critical time is the period from the instant at which blowdown occurs to the instant when the critical heat flux (CHF) occurs. It determines the time available for operating the safety protection system when a LOCA occurs in a PWR, so it is important to calculate the critical time correctly. The weakness of Griffith's method is analyzed and a significant improvement is developed. The critical time calculated by the improved method agrees with the experimental values
Monte Carlo criticality calculations accelerated by a growing neutron population
International Nuclear Information System (INIS)
Dufek, Jan; Tuttelberg, Kaur
2016-01-01
Highlights: • Efficiency is significantly improved when the population size grows over cycles. • The bias in the fission source is balanced against the other errors in the source. • The bias in the fission source decays over the cycles as the population grows. - Abstract: We propose a fission source convergence acceleration method for Monte Carlo criticality simulation. As the efficiency of Monte Carlo criticality simulations is sensitive to the selected neutron population size, the method attempts to achieve the acceleration via on-the-fly control of the neutron population size. The neutron population size is gradually increased over successive criticality cycles so that the fission source bias amounts to a specific fraction of the total error in the cumulative fission source. An optimal setting then gives a reasonably small neutron population size, allowing for efficient source iteration; at the same time the neutron population size is chosen large enough to ensure a source bias sufficiently small that it does not limit the accuracy of the simulation.
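The population-growth idea can be mimicked in a toy model: replace transport by a fixed fission-matrix iteration and represent the source by a finite, multinomially sampled neutron population that grows each cycle. This is a sketch of the general mechanism under stated assumptions (the matrix, growth factor and sampling model are illustrative), not the authors' bias-balancing control law.

```python
import numpy as np

def growing_population_iteration(F, m0=200, growth=1.2, cycles=30, seed=1):
    """Toy Monte Carlo power iteration on a fission matrix F with a
    finite neutron population that grows over successive cycles.

    Each cycle, the source shape is resampled with m neutrons; as m
    grows, the population-size bias and noise in the source shrink.
    Returns the per-cycle k estimates.
    """
    rng = np.random.default_rng(seed)
    nbins = F.shape[0]
    source = np.full(nbins, 1.0 / nbins)  # normalized source shape
    m = float(m0)
    keff_hist = []
    for _ in range(cycles):
        next_src = F @ source                    # next-generation source
        k = next_src.sum() / source.sum()        # generation multiplication
        keff_hist.append(k)
        p = next_src / next_src.sum()
        counts = rng.multinomial(int(m), p)      # finite-population sampling
        source = counts / counts.sum()
        m *= growth                              # grow population each cycle
    return np.array(keff_hist)
```

For a matrix with dominant eigenvalue 1.137, the per-cycle k estimates settle around that value, with ever-smaller scatter as the population grows.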
Criticality safety calculations for the nuclear waste disposal canisters
International Nuclear Information System (INIS)
Anttila, M.
1996-12-01
The criticality safety of the copper/iron canisters developed for the final disposal of the Finnish spent fuel has been studied with the MCNP4A code based on the Monte Carlo technique and with the fuel assembly burnup programs CASMO-HEX and CASMO-4. Two rather similar types of spent fuel disposal canisters have been studied. One canister type has been designed for hexagonal VVER-440 fuel assemblies used at the Loviisa nuclear power plant (IVO canister) and the other one for square BWR fuel bundles used at the Olkiluoto nuclear power plant (TVO canister). (10 refs.)
Quality plan for criticality safety calculations at Rocky Flats
International Nuclear Information System (INIS)
Pecora, D.
1978-01-01
The text of the plan is given, and some of the guidelines followed in writing it are discussed to aid others who may be faced with the same task. The plan is divided into four sections. The Introduction describes the general functions and purpose of the calculational program. The second section, Activities and Responsibilities, lists specific tasks and their purposes and assigns responsibility for performance. The third section references relevant documentation (e.g., ANSI standards), and the final section describes quality plans for specific functions
International Nuclear Information System (INIS)
Bencik, M.; Hadek, J.
2011-01-01
The paper gives a brief survey of the seventh three-dimensional AER dynamic benchmark calculation results obtained with the codes DYN3D and RELAP5-3D at the Nuclear Research Institute Rez. This benchmark was defined at the twentieth AER Symposium in Hanasaari (Finland). It is focused on the investigation of transient behaviour in a WWER-440 nuclear power plant. Its initiating event is the opening of the main isolation valve and re-connection of the loop with its main circulation pump in operation. The WWER-440 plant is at the end of the first fuel cycle and in hot full power conditions. Stationary and burnup calculations were performed with the code DYN3D. The transient calculation was made with the system code RELAP5-3D. The two-group homogenized cross-section library HELGD05, created by the HELIOS code, was used for the generation of reactor core neutronic parameters. A detailed six-loop model of NPP Dukovany was adopted for the seventh AER dynamic benchmark purposes. The RELAP5-3D full core neutronic model was coupled with 49 core thermal-hydraulic channels and 8 reflector channels connected with the three-dimensional model of the reactor vessel. A detailed nodalization of the reactor downcomer, lower and upper plenum was used. Mixing in the lower and upper plenum was simulated. The first part of the paper contains a brief characteristic of the RELAP5-3D system code and a short description of the NPP input deck and reactor core model. The second part shows the time dependencies of important global and local parameters. (Authors)
Directory of Open Access Journals (Sweden)
Maria Avramova
2013-01-01
Over the last few years, the Pennsylvania State University (PSU), under the sponsorship of the US Nuclear Regulatory Commission (NRC), has prepared, organized, conducted, and summarized two international benchmarks based on the NUPEC data: the OECD/NRC Full-Size Fine-Mesh Bundle Test (BFBT) Benchmark and the OECD/NRC PWR Sub-Channel and Bundle Test (PSBT) Benchmark. The benchmark activities have been conducted in cooperation with the Nuclear Energy Agency/Organization for Economic Co-operation and Development (NEA/OECD) and the Japan Nuclear Energy Safety (JNES) Organization. This paper presents an application of the joint Penn State University/Technical University of Madrid (UPM) version of the well-known sub-channel code COBRA-TF (Coolant Boiling in Rod Array-Two Fluid), namely CTF, to the steady-state critical power and departure from nucleate boiling (DNB) exercises of the OECD/NRC BFBT and PSBT benchmarks. The goal is two-fold: firstly, to assess these models and to examine their strengths and weaknesses; and secondly, to identify areas for improvement.
International Nuclear Information System (INIS)
Neuber, Jens Christian; Tippl, Wolfgang; Hemptinne, Gwendoline de; Maes, Philippe; Ranta-aho, Anssu; Peneliau, Yannick; Jutier, Ludyvine; Tardy, Marcel; Reiche, Ingo; Kroeger, Helge; Nakata, Tetsuo; Armishaw, Malcom; Miller, Thomas M.
2015-01-01
a discussion of the spread of the k_eff results. Following this, the evaluation of the end effect is accomplished, starting with a discussion of the spread of the end effect results following from the k_eff results. Then the functional dependence of the end effect on the control rod insertion depth is described by introducing and deriving model functions. After that, the fission density results are evaluated by introducing and deriving fission density model functions describing the axial fission probability density for the different control rod insertion depths. Using these fission density model functions, the fission probability content of the top end region of the active zone of the fuel assemblies is estimated. Predictions of the qualitative behaviour of the neutron multiplication factor and the end effect as a function of the control rod insertion depth were already made in the Phase II-C report and are verified in this report, which demonstrates the practical relevance of the relations established in the Phase II-C report. In addition, it turns out that parameters describing the average burn-up transformation characteristics of these relations play important roles in comparisons of the end effect model functions derived for the two Phase II-E axial burn-up profiles. Thus, the Phase II-E benchmark exercise complements the Phase II-C and Phase II-D benchmark exercises. The applicability of the knowledge gained from the results of all three exercises to burn-up credit criticality safety design calculations is demonstrated
International Nuclear Information System (INIS)
Lee, Kyung-Hoon; Kim, Kang-Seog; Cho, Jin-Young; Song, Jae-Seung; Noh, Jae-Man; Lee, Chung-Chan
2008-01-01
The IAEA's gas-cooled reactor program has coordinated international cooperation for an evaluation of high-temperature gas-cooled reactor performance, which includes a validation of the physics analysis codes and the performance models for the proposed GT-MHR. This benchmark problem consists of the pin and block calculations and the reactor physics of the control rod worth for the GT-MHR with weapon-grade plutonium fuel. The benchmark analysis has been performed by using the HELIOS/MASTER deterministic code package and the MCNP Monte Carlo code. The deterministic code package adopts a conventional two-step procedure in which few-group constants are generated by a transport lattice calculation, and the reactor physics analysis is performed by a three-dimensional diffusion calculation. In order to solve particular modeling issues in the GT-MHR, recently developed technologies were utilized and a new analysis procedure was devised. The double heterogeneity effect could be covered by using the reactivity-equivalent physical transformation (RPT) method. The strong core-reflector interaction could be resolved by applying an equivalence theory to the generation of the reflector cross sections. In order to accurately handle the very large control rods that are asymmetrically located in a fuel and a reflector block, surface-dependent discontinuity factors (SDFs) were considered in applying the equivalence theory. A new method has been devised to consider SDFs without any modification of the nodal solver in MASTER. All computational results of the HELIOS/MASTER code package were compared with those of MCNP. The multiplication factors of HELIOS for the pin cells are in very good agreement with those of MCNP, to within a maximum error of 693 pcm Δρ. The maximum differences of the multiplication factors for the fuel blocks are about 457 pcm Δρ, and the control rod worths of HELIOS are consistent with those of MCNP to within a maximum error of 3.09%. On considering a SDF in the core
International Nuclear Information System (INIS)
Khan, M.J.H.; Sarker, M.M.; Islam, S.M.A.
2013-01-01
Highlights: ► To validate the SRAC2006 code system for TRIGA neutronics calculations. ► TRX and BAPL lattices are treated as standard benchmarks for this purpose. ► To compare the calculated results with experiment as well as MCNP values in this study. ► The study demonstrates a good agreement with the experiment and the MCNP results. ► Thus, this analysis reflects the validation study of the SRAC2006 code system. - Abstract: The goal of this study is to present the validation study of the SRAC2006 code system based on evaluated nuclear data libraries ENDF/B-VII.0 and JENDL-3.3 for neutronics analysis of TRIGA Mark-II Research Reactor at AERE, Bangladesh. This study is achieved through the analysis of integral parameters of TRX and BAPL benchmark lattices of thermal reactors. In integral measurements, the thermal reactor lattices TRX-1, TRX-2, BAPL-UO 2 -1, BAPL-UO 2 -2 and BAPL-UO 2 -3 are treated as standard benchmarks for validating/testing the SRAC2006 code system as well as nuclear data libraries. The integral parameters of the said lattices are calculated using the collision probability transport code PIJ of the SRAC2006 code system at room temperature 20 °C based on the above libraries. The calculated integral parameters are compared to the measured values as well as the MCNP values based on the Chinese evaluated nuclear data library CENDL-3.0. It was found that in most cases, the values of integral parameters demonstrate a good agreement with the experiment and the MCNP results. In addition, the group constants in SRAC format for TRX and BAPL lattices in fast and thermal energy range respectively are compared between the above libraries and it was found that the group constants are identical with very insignificant difference. Therefore, this analysis reflects the validation study of the SRAC2006 code system based on evaluated nuclear data libraries JENDL-3.3 and ENDF/B-VII.0 and can also be essential to implement further neutronics calculations
Criticality Calculations for a Typical Nuclear Fuel Fabrication Plant with Low Enriched Uranium
International Nuclear Information System (INIS)
Elsayed, Hade; Nagy, Mohamed; Agamy, Said; Shaat, Mohmaed
2013-01-01
Operations with fissile materials such as 235U introduce the risk of a criticality accident that may be lethal to nearby personnel and can force the facility to shut down. Therefore, the prevention of a nuclear criticality accident should play a major role in the design of a nuclear facility. The objectives of criticality safety are to prevent a self-sustained nuclear chain reaction and to minimize the consequences should one occur. Sixty criticality accidents have occurred worldwide. These accidents fall into two categories: 22 occurred in process facilities and 38 occurred during critical experiments or operations with research reactors. About 21 criticality accidents, including the Japan Nuclear Fuel Conversion Co. (JCO) accident, took place with fuel solution or slurry, and only one accident occurred with metal fuel. In this study, nuclear criticality calculations have been performed for a typical nuclear fuel fabrication plant producing fuel elements for nuclear research reactors with low enriched uranium up to 20%. The calculations were performed for both normal and abnormal operation conditions. The effective multiplication factor (keff) during the nuclear fuel fabrication process (uranium hexafluoride - ammonium diuranate conversion process) was determined. Several accident scenarios were postulated and the criticality of each was evaluated. The computer code MCNP-4B, which is based on the Monte Carlo method, was used to calculate the neutron multiplication factor. The criticality calculations were performed for changes of the moderator-to-fuel ratio, the solution density and the concentration of the solute, in order to prevent or mitigate criticality accidents during the nuclear fuel fabrication process. The calculation results are analyzed and discussed
International Nuclear Information System (INIS)
Close, D.A.; Booth, T.E.; Caldwell, J.T.
1981-01-01
It was determined that the criticality hazard associated with the Slagging Pyrolysis Incinerator (SPI) Facility would be minimal if a three-level criticality-hazard prevention program were implemented. The first strategy consists of screening all incoming wastes for fissile content. The second prevention level is provided by introducing a small concentration of a neutron-absorbing compound, such as B2O3, into the input waste stream. The third prevention level is provided by direct criticality-hazard monitoring using sensitive neutron detectors in all regions of the facility where a significant hazard has been identified - principally the drying, pyrolysis, and slag regions. The facility could be shut down rapidly for cleanout if the measurements indicate an unsafe condition is developing. The criticality safety provided by the product of these three independent measures should reduce the hazard to a negligible level
International Nuclear Information System (INIS)
Daavittila, Antti; Haemaelaeinen, Anitta; Kyrki-Rajamaeki, Riitta
2003-01-01
All of the three exercises of the Organization for Economic Cooperation and Development/Nuclear Regulatory Commission pressurized water reactor main steam line break (PWR MSLB) benchmark were calculated at VTT, the Technical Research Centre of Finland. For the first exercise, the plant simulation with point-kinetic neutronics, the thermal-hydraulics code SMABRE was used. The second exercise was calculated with the three-dimensional reactor dynamics code TRAB-3D, and the third exercise with the combination TRAB-3D/SMABRE. VTT has over ten years' experience of coupling neutronic and thermal-hydraulic codes, but this benchmark was the first time these two codes, both developed at VTT, were coupled together. The coupled code system is fast and efficient; the total computation time of the 100-s transient in the third exercise was 16 min on a modern UNIX workstation. The results of all the exercises are similar to those of the other participants. In order to demonstrate the effect of secondary circuit modeling on the results, three different cases were calculated. In case 1 there is no phase separation in the steam lines and no flow reversal in the aspirator. In case 2 the flow reversal in the aspirator is allowed, but there is no phase separation in the steam lines. Finally, in case 3 the drift-flux model is used for the phase separation in the steam lines, but the aspirator flow reversal is not allowed. With these two modeling variations, it is possible to cover a remarkably broad range of results. The maximum power level reached after the reactor trip varies from 534 to 904 MW, the range of the time of the power maximum being close to 30 s. Compared to the total calculated transient time of 100 s, the effect of the secondary side modeling is extremely important
Energy Technology Data Exchange (ETDEWEB)
Kaneko, Masashi [Japan Atomic Energy Agency, Nuclear Science and Engineering Center (Japan); Yasuhara, Hiroki; Miyashita, Sunao; Nakashima, Satoru, E-mail: snaka@hiroshima-u.ac.jp [Hiroshima University, Graduate School of Science (Japan)
2017-11-15
The present study applies all-electron relativistic DFT calculations with the Douglas-Kroll-Hess (DKH) Hamiltonian to ten Ru and ten Os compounds. We perform a benchmark investigation of three density functionals (BP86, B3LYP and B2PLYP) using the segmented all-electron relativistically contracted (SARC) basis set against experimental Mössbauer isomer shifts for the {sup 99}Ru and {sup 189}Os nuclides. Geometry optimizations at the BP86 level of theory locate the structures in local minima. We calculate the contact density from the wavefunction obtained by a single-point calculation. All functionals show a good linear correlation with the experimental isomer shifts for both {sup 99}Ru and {sup 189}Os. In particular, the B3LYP functional gives a stronger correlation than the BP86 and B2PLYP functionals. The comparison of contact densities between the SARC and well-tempered (WTBS) basis sets indicated that numerical convergence of the contact density cannot be obtained, but that the reproducibility is not very sensitive to the choice of basis set. We also estimate the values of ΔR/R, an important nuclear constant, for the {sup 99}Ru and {sup 189}Os nuclides using the benchmark results. The sign of the calculated ΔR/R values is consistent with the predicted data for {sup 99}Ru and {sup 189}Os. At the B3LYP level with the SARC basis set, we obtain ΔR/R values of 2.35×10{sup −4} for {sup 99}Ru and −0.20×10{sup −4} for {sup 189}Os (36.2 keV).
Lorenz, Marco; Civalleri, Bartolomeo; Maschio, Lorenzo; Sgroi, Mauro; Pullini, Daniele
2014-09-15
The physisorption of water on graphene is investigated with the hybrid density functional theory (DFT) functional B3LYP combined with empirical corrections, using moderate-sized basis sets such as 6-31G(d). This setup allows modeling the interaction of water with graphene beyond the quality of classical or semiclassical simulations, while still keeping the computational cost under control. Good agreement with coupled cluster with singles, doubles, and perturbative triples (CCSD(T)) results is achieved for the adsorption of a single water molecule in a benchmark with two DFT functionals (Perdew/Burke/Ernzerhof (PBE), B3LYP) and Grimme's empirical dispersion and counterpoise corrections. We apply the same setup to graphene supported by epitaxial hexagonal boron nitride (h-BN), which leads to an increased interaction energy. To further demonstrate the usefulness of the empirical corrections, we model, entirely from first principles, the electronic properties of graphene, and of graphene supported by h-BN, covered with different amounts of water (one and 10 water molecules per cell, and full coverage). The effect of h-BN on these properties turns out to be negligibly small, making it a good candidate substrate on which to grow graphene. Copyright © 2014 Wiley Periodicals, Inc.
Perera, Ajith; Gauss, Jürgen; Verma, Prakash; Morales, Jorge A.
2017-04-01
We present a parallel implementation to compute electron spin resonance g-tensors at the coupled-cluster singles and doubles (CCSD) level which employs the ACES III domain-specific software tools for scalable parallel programming, i.e., the super instruction architecture language and processor (SIAL and SIP), respectively. A unique feature of the present implementation is the exact (not approximated) inclusion of the five one- and two-particle contributions to the g-tensor [i.e., the mass correction, one- and two-particle paramagnetic spin-orbit, and one- and two-particle diamagnetic spin-orbit terms]. As in a previous implementation with effective one-electron operators [J. Gauss et al., J. Phys. Chem. A 113, 11541-11549 (2009)], our implementation utilizes analytic CC second derivatives and, therefore, classifies as a true CC linear-response treatment. Hence, our implementation can unambiguously appraise the accuracy of less costly effective one-particle schemes and provide a rationale for their widespread use. We have considered a large selection of radicals used previously for benchmarking purposes, including those studied in earlier work, and conclude that at the CCSD level the effective one-particle scheme satisfactorily captures the two-particle effects at lower cost than the rigorous two-particle scheme. With respect to the performance of density functional theory (DFT), we note that results obtained with the B3LYP functional exhibit the best agreement with our CCSD results. However, in general, the CCSD results agree better with the experimental data than the best DFT/B3LYP results, although in most cases within the rather large experimental error bars.
Jansky, Bohumil; Rejchrt, Jiri; Novak, Evzen; Losa, Evzen; Blokhin, Anatoly I.; Mitenkova, Elena
2017-09-01
Leakage neutron spectra have been measured on benchmark spherical assemblies - iron spheres with diameters of 20, 30, 50 and 100 cm. A Cf-252 neutron source was placed at the centre of each iron sphere. The proton recoil method was used for the neutron spectra measurements, with spherical hydrogen proportional counters of 4 cm diameter at pressures of 400 and 1000 kPa. The neutron energy range of the spectrometer is 0.1 to 1.3 MeV. This energy interval represents about 85% of all leakage neutrons for the Fe sphere of diameter 50 cm and about 74% for the Fe sphere of diameter 100 cm. Corresponding MCNP neutron spectrum calculations based on the data libraries CIELO, JEFF-3.2 and ENDF/B-VII.1 were done. Two calculations were done with the CIELO library: the first used data for all Fe isotopes from CIELO, and the second (CIELO-56) used only Fe-56 data from CIELO, with data for the other Fe isotopes taken from ENDF/B-VII.1. The energy structures used for calculations and measurements were 40 gpd (groups per decade) and 200 gpd. The 200 gpd structure corresponds to a lethargy step of about 1%. This relatively fine energy structure makes it possible to analyze the Fe resonance neutron energy structure. The evaluated cross section data of Fe were validated by comparisons between the calculated and experimental spectra.
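The quoted lethargy step follows directly from the group structure: an equal-lethargy grid with n groups per decade has a group width of ln(10)/n. A quick sketch (not from the paper) confirming the "about 1%" figure:

```python
import math

def lethargy_step(groups_per_decade: int) -> float:
    """Lethargy width of one group in an equal-lethargy energy grid."""
    # One decade in energy spans a lethargy interval of ln(10).
    return math.log(10.0) / groups_per_decade

# 200 groups per decade gives a ~1.2% lethargy step ("about 1%" as stated);
# the coarser 40 gpd structure gives a ~5.8% step.
print(round(lethargy_step(200), 4))
print(round(lethargy_step(40), 4))
```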
Calculation and Mapping of Critical Thresholds in Europe: Status Report 1999
Posch M; Smet AMP de; Hettelingh J-P; Downing RJ; MNV
1999-01-01
This report is the fifth in a bi-annual series prepared by the Coordination Center for Effects (CCE) to document the progress made in calculating and mapping critical thresholds in Europe. The CCE, as part of the Mapping Programme under the UN/ECE Working Group on Effects (WGE), collects critical
Spent Nuclear Fuel Number Densities for Multi-Purpose Canister Criticality Calculations
International Nuclear Information System (INIS)
D. A. Thomas
1996-01-01
The purpose of this analysis is to calculate the number densities for spent nuclear fuel (SNF) to be used in criticality evaluations of the Multi-Purpose Canister (MPC) waste packages. The objective of this analysis is to provide material number density information which will be referenced by future MPC criticality design analyses, such as for those supporting the Conceptual Design Report
International Nuclear Information System (INIS)
Hoffman, E.L.; Ammerman, D.J.
1995-01-01
A series of tests investigating dynamic pulse buckling of a cylindrical shell under axial impact is compared to several 2D and 3D finite element simulations of the event. The purpose of the work is to investigate the performance of various analysis codes and element types on a problem which is applicable to radioactive material transport packages, and ultimately to develop a benchmark problem to qualify finite element analysis codes for the transport package design industry. During the pulse buckling tests, a buckle formed at each end of the cylinder, and one of the two buckles became unstable and collapsed. Numerical simulations of the test were performed using PRONTO, a Sandia-developed transient dynamics analysis code, and ABAQUS/Explicit with both shell and continuum elements. The calculations are compared to the tests with respect to deformed shape and impact load history
Jansky, B; Turzik, Z; Kyncl, J; Cvachovec, F; Trykov, L A; Volkov, V S
2002-01-01
Neutron and gamma spectra measurements have been made for benchmark iron spherical assemblies with diameters of 30, 50 and 100 cm. 252Cf neutron sources with different emission rates were placed at the centre of the iron spheres. In the first stage of the project, independent laboratories took part in the leakage spectra measurements. The proton recoil method was used with stilbene crystals and hydrogen proportional counters. The working range of the spectrometers is from 0.01 to 16 MeV for neutrons and from 0.40 to 12 MeV for gamma rays. Corresponding calculations have been carried out. It is proposed to carefully analyse the leakage mixed neutron and gamma spectrum from the iron sphere of diameter 50 cm and then to adopt that field as a standard.
Evaluation of approaches to calculate critical metal loads for forest soils
Vries, de W.; Groenenberg, J.E.
2009-01-01
This paper evaluates approaches to calculate acceptable loads for metal deposition to forest ecosystems, distinguishing between critical loads, stand-still loads and target loads. We also evaluated the influence of including the biochemical metal cycle on the calculated loads. Differences are
Directory of Open Access Journals (Sweden)
Wonkyeong Kim
2015-01-01
A high-leakage core has been known to be a challenging problem not only for a two-step homogenization approach but also for a direct heterogeneous approach. In this paper the DIMPLE S06 core, a small high-leakage core, is analyzed by a direct heterogeneous modeling approach and by a two-step homogenization modeling approach, using contemporary code systems developed for reactor core analysis. The focus of this work is a comprehensive comparative analysis of the conventional approaches and codes on a small core design, the DIMPLE S06 critical experiment. The calculation procedures for the two approaches are presented explicitly in this paper. The comprehensive comparative analysis is performed in terms of neutronics parameters: the multiplication factor and the assembly power distribution. Comparison of the two-group homogenized cross sections from each lattice physics code shows that the generated transport cross sections differ significantly depending on the transport approximation used to treat the anisotropic scattering effect. The necessity of assembly discontinuity factors (ADFs) to correct the discontinuity at the assembly interfaces is clearly demonstrated by the flux distributions and the results of the two-step approach. Finally, the two approaches show consistent results for all codes, while the comparison with the reference generated by MCNP shows significant errors except for another Monte Carlo code, SERPENT2.
International Nuclear Information System (INIS)
Bitter, M.; Gu, M.F.; Vainshtein, L.A.; Beiersdorfer, P.; Bertschinger, G.; Marchuk, O.; Bell, R.; LeBlanc, B.; Hill, K.W.; Johnson, D.; Roquemore, L.
2003-01-01
Dielectronic satellite spectra of helium-like argon, recorded with a high-resolution X-ray crystal spectrometer at the National Spherical Torus Experiment, were found to be inconsistent with existing predictions, resulting in unacceptable values for the power balance and suggesting the unlikely presence of non-Maxwellian electron energy distributions. These problems were resolved with calculations from a new atomic code. It is now possible to perform reliable electron temperature measurements and to eliminate the uncertainties associated with determinations of non-Maxwellian distributions
Experimental and computational benchmark tests
International Nuclear Information System (INIS)
Gilliam, D.M.; Briesmeister, J.F.
1994-01-01
A program involving principally NIST, LANL, and ORNL has been in progress for about four years to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235U, 239Pu, 238U, and 237Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a "light water" S(α,β) scattering kernel
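Benchmark comparisons of this kind are conventionally summarized as calculated-to-experimental (C/E) ratios per nuclide. A minimal sketch of that bookkeeping, with placeholder values rather than results from the talk:

```python
# Calculated and experimental fission rates (arbitrary normalized units).
# All numbers below are illustrative placeholders, not data from the talk.
calculated   = {"U-235": 1.02, "Pu-239": 0.99, "U-238": 1.05, "Np-237": 0.97}
experimental = {"U-235": 1.00, "Pu-239": 1.00, "U-238": 1.00, "Np-237": 1.00}

for nuclide, c in calculated.items():
    e = calculated[nuclide], experimental[nuclide]
    ce = c / experimental[nuclide]
    # Deviation from unity in percent: (C/E - 1) * 100
    print(f"{nuclide}: C/E = {ce:.3f} ({(ce - 1) * 100:+.1f}%)")
```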
Development of common user data model for APOLLO3 and MARBLE and application to benchmark problems
International Nuclear Information System (INIS)
Yokoyama, Kenji
2009-07-01
A Common User Data Model, CUDM, has been developed for the purpose of benchmark calculations between the APOLLO3 and MARBLE code systems. The current version of CUDM was designed for core calculation benchmark problems with three-dimensional Cartesian (3-D XYZ) geometry. CUDM is able to manage all input/output data such as the 3-D XYZ geometry, effective macroscopic cross sections, the effective multiplication factor and the neutron flux. In addition, visualization tools for geometry and neutron flux are included. CUDM was designed with object-oriented techniques and implemented in the Python programming language. Based on CUDM, a prototype system for benchmark calculations, CUDM-benchmark, was also developed. CUDM-benchmark supports input/output data conversion for the IDT solver in APOLLO3, and for the TRITAC and SNT solvers in MARBLE. In order to evaluate the pertinence of CUDM, CUDM-benchmark was applied to benchmark problems proposed by T. Takeda, G. Chiba and I. Zmijarevic. It was verified that CUDM-benchmark successfully reproduced the results calculated with the reference input data files, and provided consistent results among all the solvers using one common input defined by CUDM. In addition, a detailed calculation of the Chiba benchmark was performed using CUDM-benchmark. The Chiba benchmark is a neutron transport benchmark problem for a fast criticality assembly without homogenization. It consists of 4 core configurations with different sodium void regions, each defined by more than 5,000 fuel/material cells. In this application, it was found that the results of the IDT and SNT solvers agreed well with the reference results of a Monte Carlo code. In addition, model effects such as the quadrature set effect, the Sn order effect and the mesh size effect were systematically evaluated and summarized in this report. (author)
Energy Technology Data Exchange (ETDEWEB)
Elsawi, Mohamed A., E-mail: Mohamed.elsawi@kustar.ac.ae; Hraiz, Amal S. Bin, E-mail: Amal.Hraiz@kustar.ac.ae
2015-11-15
Highlights: • The AP1000 core configuration is challenging due to its high degree of heterogeneity. • The proposed code system was used to model the neutronics/TH behavior of the AP1000 reactor. • Enhanced modeling features in WIMS9 facilitated neutronics modeling of the reactor. • The PARCS/TRACE coupled code system was used to model the temperature feedback effects. • Final results showed reasonable agreement with publicly available reactor data. - Abstract: The objective of this paper is to assess the accuracy of the WIMS9/PARCS/TRACE code system for power density calculations of the Westinghouse AP1000™ nuclear reactor, as a representative of modern pressurized water reactors (Gen III+). The cross section libraries were generated using the lattice physics code WIMS9 (the commercial version of the legacy lattice code WIMSD). Nine different fuel assembly types were analyzed in WIMS9 to generate the two-group cross sections required by the PARCS core simulator. The nine fuel assembly types were identified based on the distribution of Pyrex discrete burnable absorber (borosilicate glass) and integral fuel burnable absorber (IFBA) rods in each fuel assembly. The generated cross sections were passed to the coupled core simulator PARCS/TRACE, which performed 3-D, full-core diffusion calculations from within the US NRC Symbolic Nuclear Analysis Package (SNAP) interface. The results, which included the assembly power distribution, effective multiplication factor (k{sub eff}), radial and axial power density, and whole core depletion, were compared to reference Monte Carlo results and to published reactor data available in the AP1000 Design Control Document (DCD). The results of the study show acceptable accuracy of the WIMS9/PARCS/TRACE code in predicting the power density of the AP1000 core and, hence, establish its adequacy in the evaluation of the neutronics parameters of modern PWRs of similar designs. The work reported here is new in that it uses, for the first time, the
Directory of Open Access Journals (Sweden)
Wiji Suwarno
2017-02-01
The term benchmarking is encountered in the implementation of total quality management (TQM), termed holistic quality management in Indonesian, because benchmarking is a tool for finding ideas or learning from other libraries. Benchmarking is a systematic and continuous process of measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.
Shielding benchmark problems, (2)
International Nuclear Information System (INIS)
Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.
1980-02-01
Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)
Calculated minimum critical masses of 239Pu homogeneously mixed with polyethylene moderator
International Nuclear Information System (INIS)
Gundy, L.M.; Goslen, A.Q.
1997-01-01
The minimum critical masses of 239Pu in a polyethylene moderator were calculated as a function of plutonium density for several polyethylene densities (various void fractions). This study has applications for solid transuranic (TRU) waste and for plutonium scrap dissolving operations where polyethylene bags may be present within the cans to be dissolved. Polyethylene is usually present in TRU waste as a result of glovebox bagout operations and as liners in 55-gallon drums. The methodology utilized the SCALE driver CSAS1X. MCNP4A with ENDF/B-V was also used as an independent check because of the lack of critical experiments with polyethylene moderator. For TRU solid waste, tests with 55-gallon drums indicate that polyethylene bagging cannot be stuffed tightly to a volume fraction greater than 15.5%. To allow for settling, calculations were conducted for 20% and 50% polyethylene volume fractions as well as for full density for comparison. The effects of 10% concrete or steel mixed with 20% polyethylene were also evaluated. Since water egress into underground solid waste containers would be possible, additional calculations evaluated critical masses with water in the polyethylene moderator. Calculated critical masses for the various moderators were determined for a range of plutonium concentrations. For full-density polyethylene, the minimum critical plutonium mass is about 345 grams (at 30 grams per liter) versus 510 grams for water. Added concrete substantially decreases the critical mass, and added steel substantially increases it. This study indicates that in some situations the minimum critical plutonium mass in polyethylene can be less than that of metal in water (about 510 grams). TRU waste fissile limits are usually based on safe masses determined for plutonium in water, so this result has obvious implications for criticality safety. 1 fig
Kosar, Naveen; Mahmood, Tariq; Ayub, Khurshid
2017-12-01
A benchmark study has been carried out to find a cost-effective and accurate method for the bond dissociation energy (BDE) of the carbon-halogen (C-X) bond. The BDE of the C-X bond plays a vital role in chemical reactions, particularly for kinetic barriers and thermochemistry. The compounds (1-16, Fig. 1) with C-X bonds used for the current benchmark study are important reactants in organic, inorganic and bioorganic chemistry. Experimental C-X bond dissociation energies are compared with the theoretical results. Statistical analysis tools such as the root mean square deviation (RMSD), standard deviation (SD), Pearson's correlation (R) and mean absolute error (MAE) are used for the comparison. Overall, thirty-one density functionals from eight different classes of density functional theory (DFT), along with Pople and Dunning basis sets, are evaluated. Among the different classes of DFT, the dispersion-corrected range-separated hybrid GGA class along with the 6-31G(d), 6-311G(d), aug-cc-pVDZ and aug-cc-pVTZ basis sets performed best for bond dissociation energy calculations of the C-X bond. ωB97XD shows the best performance, with the smallest deviations (RMSD, SD) and mean absolute error (MAE) and a significant Pearson's correlation (R) when compared to the experimental data. ωB97XD with the Pople basis set 6-311G(d) has RMSD, SD, R and MAE of 3.14 kcal mol-1, 3.05 kcal mol-1, 0.97 and -1.07 kcal mol-1, respectively.
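The error statistics named above are straightforward to reproduce for any calculated-versus-experimental data set. A minimal sketch with placeholder BDE values (not the paper's data; note the abstract reports a signed mean error for MAE, so both signed and absolute means are shown):

```python
import math

def error_stats(calc, exp):
    """Deviation statistics between calculated and experimental values."""
    n = len(calc)
    dev = [c - e for c, e in zip(calc, exp)]
    mean_dev = sum(dev) / n                                # signed mean error
    mae = sum(abs(d) for d in dev) / n                     # mean absolute error
    rmsd = math.sqrt(sum(d * d for d in dev) / n)          # root mean square deviation
    sd = math.sqrt(sum((d - mean_dev) ** 2 for d in dev) / (n - 1))  # sample SD of deviations
    # Pearson's correlation coefficient between the two series
    mc, me = sum(calc) / n, sum(exp) / n
    num = sum((c - mc) * (e - me) for c, e in zip(calc, exp))
    den = math.sqrt(sum((c - mc) ** 2 for c in calc) * sum((e - me) ** 2 for e in exp))
    r = num / den
    return {"RMSD": rmsd, "SD": sd, "R": r, "MAE": mae, "MeanDev": mean_dev}

# Placeholder BDE values in kcal/mol -- not the paper's data set
calc = [83.2, 70.1, 58.4, 72.9, 65.0]
exp  = [81.0, 71.5, 57.0, 70.3, 66.2]
stats = error_stats(calc, exp)
print({k: round(v, 2) for k, v in stats.items()})
```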
Verification and validation benchmarks.
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, William Louis; Trucano, Timothy Guy
2007-02-01
Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
Verification and validation benchmarks
International Nuclear Information System (INIS)
Oberkampf, William L.; Trucano, Timothy G.
2008-01-01
Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the
DEFF Research Database (Denmark)
Hougaard, Jens Leth; Tvede, Mich
2002-01-01
Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection.
Calculation of critical experiment parameters for the High Flux Isotope Reactor
International Nuclear Information System (INIS)
Primm, R.T. III.
1991-01-01
Six critical experiments were performed shortly before the initial ascension to power of the High Flux Isotope Reactor (HFIR). Critical configurations were determined at various control rod positions by varying the soluble boron content in the light water coolant. Calculated k-effective was 2% high at typical beginning-of-life (BOL) conditions, but was 1.0 at typical end-of-life (EOL) conditions. Axially averaged power distributions for a given radial location were frequently within experimental error. At specific r,z locations within the core, the calculated power densities were significantly different from the experimentally derived values. A reassessment of the foil activation data seems desirable.
Criticality calculation for cluster fuel bundles using Monte Carlo generated grey Dancoff factor
International Nuclear Information System (INIS)
Kim, Hyeong Heon; Cho, Nam Zin
1999-01-01
The grey Dancoff factor calculated by the Monte Carlo method is applied to criticality calculations for cluster fuel bundles. Dancoff factors for five symmetrically distinct pin positions of CANDU37 and CANFLEX fuel bundles in full three-dimensional geometry are calculated by the Monte Carlo method. The concept of an equivalent Dancoff factor is introduced to use the grey Dancoff factor in resonance calculations based on the equivalence theorem. The equivalent Dancoff factor, which is based on the realistic model, produces an exact fuel collision probability and can be used in the resonance calculation just as the black Dancoff factor is. The infinite multiplication factors based on the black Dancoff factors calculated by the collision probability or Monte Carlo method are overestimated by about 2 mk for the normal condition and 4 mk for the void condition of CANDU37 and CANFLEX fuel bundles, in comparison with those based on the equivalent Dancoff factors
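The black-limit version of the quantity in the abstract above is easy to sketch. The following is an illustrative Monte Carlo estimate of a black Dancoff factor for a simplified 1-D slab lattice, a toy analogue rather than the authors' 3-D cluster method; the geometry, sample counts, and function names are invented for illustration:

```python
import math
import random

def slab_dancoff_mc(tau, n=200_000, seed=1):
    """Monte Carlo estimate of the black Dancoff factor for a 1-D slab
    lattice: the probability that a neutron leaving a fuel surface with a
    cosine-law angular distribution crosses a moderator gap of optical
    thickness tau without colliding."""
    random.seed(seed)
    hits = 0
    for _ in range(n):
        mu = math.sqrt(1.0 - random.random())    # cosine-law surface source, p(mu) = 2*mu
        path = -math.log(1.0 - random.random())  # optical free path, unit exponential
        if path >= tau / mu:                     # reaches the next fuel slab uncollided
            hits += 1
    return hits / n

def slab_dancoff_exact(tau, m=20_000):
    """Reference value 2*E3(tau) by midpoint quadrature over the direction cosine."""
    s = 0.0
    for i in range(m):
        mu = (i + 0.5) / m
        s += 2.0 * mu * math.exp(-tau / mu)
    return s / m
```

For an optical gap of half a mean free path the two estimates agree to Monte Carlo precision; a grey (partially transparent) fuel treatment, as in the abstract, would additionally track re-entry into and transmission through the rods.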
Calculation of the ingestion critical dose rate for the Goiania radioactive waste repository
International Nuclear Information System (INIS)
Passos, E.M. dos; Martin Alves, A.S. De
1994-01-01
Results are presented for the calculation of the critical distance for the ingestion dose rate due to a hypothetical Cs-137 release from the Abadia de Goias repository. The work is based on the repository-aquifer-well food chain pathway. The calculations used analytical models for the migration of radioisotopes through the aquifer and for their transfer from well water to food. (author)
Berger, E.; Brenne, T.; Heath, A.; Hochholdinger, B.; Kassem-Manthey, K.; Keßler, L.; Koch, N.; Kortmann, G.; Kröff, A.; Otto, T.; Steinbeck, G.; Till, E.; Verhoeven, H.; Vu, T.-C.; Wiegand, K.
2005-08-01
To increase the accuracy of finite element simulations in daily practice, the local German and Austrian Deep Drawing Research Groups of IDDRG founded a special Working Group in the year 2000. The main objectives of this group were the ongoing study and discussion of numerical and material effects in simulation jobs and the working out of possible solutions. As the first theme of this group, the intensive study of small die radii and the possibility of detecting material failure at these critical forming positions was selected. The part itself is a fictional outer body panel into which the original door handle of the VW Golf A4 has been constructed, a typical position of possible material necking or rupture in the press shop. All preconditions for a successful simulation were taken care of in advance: material data, boundary conditions, friction, FLC and others were determined for the two materials under investigation, a mild steel and a dual-phase steel HXT500X. The results of the experiments were used to design the descriptions of two different benchmark runs for the simulation. The simulations, with different programs as well as with different parameters, showed that some parameters have negligible impact on the result while others have a strong impact, and thereby a different impact on a possible material failure prediction.
Criticality coefficient calculation for a small PWR using Monte Carlo Transport Code
Energy Technology Data Exchange (ETDEWEB)
Trombetta, Debora M.; Su, Jian, E-mail: dtrombetta@nuclear.ufrj.br, E-mail: sujian@nuclear.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil); Chirayath, Sunil S., E-mail: sunilsc@tamu.edu [Department of Nuclear Engineering and Nuclear Security Science and Policy Institute, Texas A and M University, TX (United States)
2015-07-01
Computational models of reactors are increasingly used to predict nuclear reactor physics parameters responsible for reactivity changes which could lead to accidents and losses. In this work, preliminary results of criticality coefficient calculations using the Monte Carlo transport code MCNPX are presented for a small PWR. The computational model developed consists of the core, with fuel elements, radial reflectors, and control rods, inside a pressure vessel. Three different geometries were simulated, a single fuel pin, a fuel assembly, and the full core, with the aim of comparing the criticality coefficients among them. The criticality coefficients calculated were: Doppler temperature coefficient, coolant temperature coefficient, coolant void coefficient, power coefficient, and control rod worth. The coefficient values calculated by the MCNP code were compared with literature results, showing good agreement with reference data, which validates the computational model developed and allows it to be used to perform more complex studies. Criticality coefficient values for the three simulations showed little discrepancy for almost all coefficients investigated; the only exception was the power coefficient. The preliminary results presented show that a simple model such as a fuel assembly can describe changes in almost all the criticality coefficients, avoiding the need for a complex core simulation. (author)
Weterings, Peter J J M; Loftus, Christine; Lewandowski, Thomas A
2016-08-22
Potential adverse effects of chemical substances on thyroid function are usually examined by measuring serum levels of thyroid-related hormones. Instead, recent risk assessments for thyroid-active chemicals have focussed on iodine uptake inhibition, an upstream event that by itself is not necessarily adverse. Establishing the extent of uptake inhibition that can be considered de minimis, the chosen benchmark response (BMR), is therefore critical. The BMR values selected by two international advisory bodies were 5% and 50%, a difference that had correspondingly large impacts on the estimated risks and health-based guidance values that were established. Potential treatment-related inhibition of thyroidal iodine uptake is usually determined by comparing thyroidal uptake of radioactive iodine (RAIU) during treatment with a single pre-treatment RAIU value. In the present study it is demonstrated that the physiological intra-individual variation in iodine uptake is much larger than 5%. Consequently, in-treatment RAIU values, expressed as a percentage of the pre-treatment value, have an inherent variation that needs to be considered when conducting dose-response analyses. Based on statistical and biological considerations, a BMR of 20% is proposed for benchmark dose analysis of human thyroidal iodine uptake data, to take the inherent variation in relative RAIU data into account. Implications for the tolerated daily intakes for perchlorate and chlorate, recently established by the European Food Safety Authority (EFSA), are discussed. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.
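To make the role of the BMR concrete, here is a minimal sketch of a benchmark-dose calculation against a hypothetical Hill-type dose-response for relative RAIU; the model form and every parameter value (`ed50`, `hill`) are invented for illustration and are not from the study:

```python
import math

def raiu_fraction(dose, ed50=10.0, hill=1.5):
    """Hypothetical Hill-type model of in-treatment RAIU as a fraction of
    the pre-treatment value (all parameter values are illustrative)."""
    return 1.0 / (1.0 + (dose / ed50) ** hill)

def benchmark_dose(bmr=0.20, lo=1e-9, hi=1e6):
    """Dose at which relative RAIU falls by the benchmark response BMR,
    i.e. the dose d with raiu_fraction(d) = 1 - bmr, found by bisection
    (the model is monotonically decreasing in dose)."""
    target = 1.0 - bmr
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if raiu_fraction(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

With these illustrative parameters, tightening the BMR from 20% to 5% pulls the benchmark dose down substantially, mirroring the sensitivity to the choice of BMR that the abstract describes.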
Sakamoto, Y
2002-01-01
In the prevention of nuclear disaster, information is needed on the dose equivalent rate distribution inside and outside the site, and on energy spectra. A three-dimensional radiation transport calculation code is a useful tool for site-specific detailed analysis that takes facility structures into consideration. For the prediction of individual doses in future countermeasures, it is important to confirm the reliability of methods that evaluate dose equivalent rate distributions and energy spectra using Monte Carlo radiation transport calculation codes, and to identify the factors which influence the dose equivalent rate distribution outside the site. The reliability of the radiation transport calculation code and the factors influencing the dose equivalent rate distribution were examined through analyses of the criticality accident at JCO's uranium processing plant that occurred on September 30, 1999. The radiation transport calculations, including burn-up calculations, were done by using the structural info...
International Nuclear Information System (INIS)
Takada, Tomoyuki; Yoshiyama, Hiroshi; Miyoshi, Yoshinori; Katakura, Jun-ichi
2003-01-01
Criticality safety evaluation code system JACS was developed by JAERI. Its accuracy evaluation was performed in the 1980s. Although the evaluation of JACS was performed for various critical systems, comparisons with a continuous-energy Monte Carlo code were not made, because no such code had been developed at that time. Such comparisons are presented in this paper for heterogeneous and homogeneous systems containing U+Pu nitrate solutions. (author)
Critical mass calculations for 241Am, 242mAm and 243Am
International Nuclear Information System (INIS)
Dias, Hemanth; Tancock, Nigel; Clayton, Angela
2003-01-01
Critical mass calculations are reported for 241 Am, 242m Am and 243 Am using the MONK and MCNP computer codes with the UKNDL, JEF-2.2, ENDF/B-VI and JENDL-3.2 nuclear data libraries. Results are reported for spheres of americium metal and dioxide in bare, water reflected and steel reflected systems. Comparison of results led to the identification of a serious inconsistency in the 241 Am ENDF/B-VI DICE library used by MONK - this demonstrates the importance of using different codes to verify critical mass calculations. The 241 Am critical mass estimates obtained using UKNDL and ENDF/B-VI show good agreement with experimentally inferred data, whilst both JEF-2.2 and JENDL-3.2 produce higher estimates of critical mass. The computed critical mass estimates for 242m Am obtained using ENDF/B-VI are lower than the results produced using the other nuclear data libraries - the ENDF/B-VI fission cross-section for 242m Am is significantly higher than the other evaluations in the fast region and is not supported by recent experimental data. There is wide variation in the computed 243 Am critical mass estimates suggesting that there is still considerable uncertainty in the 243 Am nuclear data. (author)
DEFF Research Database (Denmark)
Agrell, Per J.; Bogetoft, Peter
2017-01-01
Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators...
DEFF Research Database (Denmark)
Lawson, Lartey; Nielsen, Kurt
2005-01-01
We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...
International Nuclear Information System (INIS)
Koponen, B.L.; Hampel, V.E.
1982-01-01
This compilation contains 688 complete summaries of papers on nuclear criticality safety as presented at meetings of the American Nuclear Society (ANS). The selected papers contain criticality parameters for fissile materials derived from experiments and calculations, as well as criticality safety analyses for fissile material processing, transport, and storage. The compilation was developed as a component of the Nuclear Criticality Information System (NCIS) now under development at the Lawrence Livermore National Laboratory. The compilation is presented in two volumes: Volume 1 contains a directory to the ANS Transaction volume and page number where each summary was originally published, the author concordance, and the subject concordance derived from the keyphrases in titles. Volume 2 contains - in chronological order - the full-text summaries, reproduced here by permission of the American Nuclear Society from their Transactions, volumes 1-41
International Nuclear Information System (INIS)
Dumonteil, E.; Le Peillet, A.; Lee, Y. K.; Petit, O.; Jouanne, C.; Mazzolo, A.
2006-01-01
The measurement of the stationarity of Monte Carlo fission source distributions in k eff calculations plays a central role in the ability to discriminate between fake and 'true' convergence (in the case of a high dominance ratio, or in the case of loosely coupled systems). Recent theoretical developments have been made in the study of source convergence diagnostics, using Shannon entropy. We will first recall those results, and we will then generalize them using the expression of Boltzmann entropy, highlighting the gain in terms of the various physical problems that can be treated. Finally we will present the results of several OECD/NEA benchmarks using the Tripoli-4 Monte Carlo code, enhanced with this new criterion. (authors)
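The Shannon-entropy diagnostic that the abstract builds on reduces to a few lines: bin the fission source sites, form the discrete distribution, and track the entropy generation by generation until it plateaus. A minimal sketch (the binning scheme is assumed, not from the paper):

```python
import math

def shannon_entropy(source_counts):
    """Shannon entropy (in bits) of a binned fission-source distribution;
    used as a scalar convergence diagnostic in Monte Carlo k-eff runs."""
    total = sum(source_counts)
    h = 0.0
    for c in source_counts:
        if c > 0:
            p = c / total
            h -= p * math.log2(p)
    return h
```

A flat source over 8 bins gives the maximum of 3 bits, a source collapsed into one bin gives 0, and stationarity is declared when the generation-to-generation entropy stops drifting; a high dominance ratio shows up as a long slow drift before the plateau.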
Alize 3 - first critical experiment for the franco-german high flux reactor - calculations
International Nuclear Information System (INIS)
Scharmer, K.
1969-01-01
The results of experiments in the light water cooled, D 2 O reflected critical assembly ALIZE III have been compared to calculations. A diffusion model was used with 3 fast and epithermal groups and two overlapping thermal groups, which leads to good agreement of calculated and measured power maps, even in the case of strong variations of the neutron spectrum in the core. The difference between calculated and measured k eff was smaller than 0.5 per cent δk/k. Calculations were made of void and structural material coefficients, of the reactivity of 'black' rods in the reflector, of spectrum variations (Cd-ratio, Pu-U-ratio) and of the delayed photoneutron fraction in the D 2 O reflector. Measurements of the influence of beam tubes on reactivity and flux distribution in the reflector were interpreted with regard to an optimum beam tube arrangement for the Franco-German High Flux Reactor. (author) [fr
Exploring the use of a deterministic adjoint flux calculation in criticality Monte Carlo simulations
International Nuclear Information System (INIS)
Jinaphanh, A.; Miss, J.; Richet, Y.; Martin, N.; Hebert, A.
2011-01-01
The paper presents a preliminary study on the use of a deterministic adjoint flux calculation to improve source convergence issues by reducing the number of iterations needed to reach the converged distribution in criticality Monte Carlo calculations. Slow source convergence in Monte Carlo eigenvalue calculations may lead to underestimation of the effective multiplication factor or reaction rates. The convergence speed depends on the initial distribution and the dominance ratio. We propose using an adjoint flux estimate to modify the transition kernel according to the importance sampling technique. This adjoint flux is also used as the initial guess of the first-generation distribution for the Monte Carlo simulation. The calculated variance of a local current estimator is also checked. (author)
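The effect being exploited, that a good initial source guess removes most of the iterations a high dominance ratio forces, can be seen even in a toy deterministic analogue. The sketch below runs power iteration on a small fission-matrix stand-in; the matrix and the "adjoint-informed" starting vector are invented for illustration:

```python
def power_iteration(M, x0, tol=1e-8, max_it=5000):
    """Plain power iteration on a small fission-matrix analogue; returns
    the dominant eigenvalue (the k-eff analogue) and the number of
    iterations needed to converge the eigenvalue estimate."""
    x = x0[:]
    k_old = 0.0
    for it in range(1, max_it + 1):
        y = [sum(row[j] * x[j] for j in range(len(x))) for row in M]
        k = sum(y) / sum(x)          # eigenvalue estimate from neutron balance
        x = [v / k for v in y]       # renormalized source for the next generation
        if abs(k - k_old) < tol:
            return k, it
        k_old = k
    return k, max_it

# Toy two-region system whose dominance ratio is close to 1 (slow convergence).
M = [[1.00, 0.01],
     [0.01, 0.99]]
k_flat, it_flat = power_iteration(M, [1.0, 1.0])       # flat initial source
k_adj, it_adj = power_iteration(M, [1.0, 0.618])       # guess near the fundamental mode
```

For this matrix the dominance ratio is about 0.98, so the flat guess needs hundreds of iterations while the informed guess converges almost immediately; the paper's importance-sampling modification of the transition kernel goes further than this initial-guess trick.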
Criticality Safety Code Validation with LWBR’s SB Cores
Energy Technology Data Exchange (ETDEWEB)
Putman, Valerie Lee
2003-01-01
The first set of critical experiments from the Shippingport Light Water Breeder Reactor Program included eight simple-geometry critical cores built with 233UO2-ZrO2, 235UO2-ZrO2, ThO2, and ThO2-233UO2 nuclear materials. These cores are evaluated, described, and modeled to provide benchmarks and validation information for INEEL criticality safety calculation methodology. In addition to consistency with INEEL methodology, benchmark development and nuclear data are consistent with International Criticality Safety Benchmark Evaluation Project methodology. Section 1 of this report introduces the experiments and the reason they are useful for validating some INEEL criticality safety calculations. Section 2 provides detailed experiment descriptions based on currently available experiment reports. Section 3 identifies criticality safety validation requirement sources and summarizes requirements that most affect this report. Section 4 identifies relevant hand calculation and computer code calculation methodologies used in the experiment evaluation, benchmark development, and validation calculations. Section 5 provides a detailed experiment evaluation. This section identifies resolutions for currently unavailable and discrepant information. Section 5 also reports calculated experiment uncertainty effects. Section 6 describes the developed benchmarks. Section 6 includes calculated sensitivities to various benchmark features and parameters. Section 7 summarizes validation results. Appendices describe various assumptions and their bases, list experimenter calculations results for items that were independently calculated for this validation work, report other information gathered and developed by SCIENTEC personnel while evaluating these same experiments, and list benchmark sample input and miscellaneous supplementary data.
International Nuclear Information System (INIS)
Berger, H.D.
1988-01-01
Calculational results relevant to the physics characteristics of PuO 2 /UO 2 -fueled light water high converter reactors (LWHCR) are discussed in the context of three particular codes and their associated data libraries, viz. WIMS/D, KARBUS (GRUCAH) and SPEKTRA. Analysis of various critical experiments - from the PROTEUS-LWHCR Phase I and Phase II programmes in particular - has demonstrated the importance of having a representative and comprehensive integral data base for identifying specific shortcomings in the calculational tools employed. (orig./DG)
An Analytical Solution for Lateral Buckling Critical Load Calculation of Leaning-Type Arch Bridge
Directory of Open Access Journals (Sweden)
Ai-rong Liu
2014-01-01
An analytical solution for the lateral buckling critical load of leaning-type arch bridges is presented in this paper. New tangential and radial buckling models of the transverse brace between the main and stable arch ribs are established. Based on the Ritz method, the analytical solution for the lateral buckling critical load of the leaning-type arch bridge with different central angles of main arch ribs and leaning arch ribs under different boundary conditions is derived for the first time. Comparison between the analytical results and the FEM calculated results shows that the analytical solution presented in this paper is sufficiently accurate. The parametric analysis results show that the lateral buckling critical load of the arch bridge with fixed boundary conditions is about 1.14 to 1.16 times as large as that of the arch bridge with hinged boundary conditions. The lateral buckling critical load increases by approximately 31.5% to 41.2% when stable arch ribs are added, and the critical load increases as the inclined angle of the stable arch rib increases. The differences in the central angles of the main arch rib and the stable arch rib have little effect on the lateral buckling critical load.
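The Ritz procedure the authors apply to the arch ribs can be illustrated on its simplest relative: a one-term Ritz estimate of the Euler buckling load of a pinned-pinned column, where the Rayleigh quotient P = ∫ EI (w'')² dx / ∫ (w')² dx evaluated on the trial shape w = sin(πx/L) recovers π²EI/L². This sketch is only an analogue of the method, not the bridge model itself:

```python
import math

def ritz_critical_load(EI, L, n_quad=10_000):
    """One-term Ritz estimate of the buckling load of a pinned-pinned
    column using the trial shape w(x) = sin(pi*x/L):
    P_cr = integral(EI * w''^2) / integral(w'^2), by midpoint quadrature."""
    num = den = 0.0
    for i in range(n_quad):
        x = (i + 0.5) * L / n_quad
        w2 = -((math.pi / L) ** 2) * math.sin(math.pi * x / L)  # curvature w''
        w1 = (math.pi / L) * math.cos(math.pi * x / L)          # slope w'
        num += EI * w2 * w2
        den += w1 * w1
    return num / den
```

Because the trial shape here happens to be the exact mode, the quotient reproduces the Euler load π²EI/L²; for the arch problem, richer trial functions turn the same quotient into an upper-bound estimate of the lateral buckling critical load.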
Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui
2004-01-01
A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, Intel Math Kernel Library (MKL), GOTO numerical library, and AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as the Intel Fortran compiler (ifc/efc) 7.1 and the PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when utilizing the original unmodified optimization options enclosed in the software. Nevertheless, if extensive compiler tuning options are used, the speed can be further accelerated by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction sets (SSE2) are also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler has performed better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and the efficiency in the CL2 mode is further improved by 2.6% compared to that of the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resultant performance impact suggests that the IA64 and AMD64 architectures are able to deliver significantly higher throughput than IA32, which is consistent with the SpecFPrate2000 benchmarks.
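The throughput-style measurements in the abstract above boil down to timing a fixed numeric kernel under different software stacks. As a toy stand-in (pure Python instead of BLAS; sizes and names invented), the measurement pattern looks like:

```python
import time

def time_matmul(n=60, reps=3):
    """Time a naive n-by-n integer matrix multiply, keeping the best of
    `reps` runs; a pure-Python toy analogue of a BLAS-level benchmark.
    Returns (best wall time in seconds, one result element as a checksum)."""
    a = [[(i * n + j) % 7 for j in range(n)] for i in range(n)]
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        c = [[sum(a[i][k] * a[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        best = min(best, time.perf_counter() - t0)
    return best, c[0][0]
```

Running the same kernel linked against different libraries, or compiled with different flags, and comparing best-of-N wall times is the shape of comparison the abstract reports; taking the minimum over repetitions suppresses scheduler noise, and the checksum guards against the optimizer eliding the work.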
Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1
International Nuclear Information System (INIS)
Van Der Marck, S. C.
2012-01-01
Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category, and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, W. (authors)
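The per-category analysis mentioned above typically reduces each benchmark to a calculated-over-experimental (C/E) k-eff ratio and summarizes the ratios per library and per category. A minimal sketch with made-up numbers, not actual ICSBEP results:

```python
import statistics

def c_over_e_summary(calc, exp):
    """Mean and sample standard deviation of calculated-over-experimental
    k-eff ratios, the usual per-category summary in library benchmarking."""
    ratios = [c / e for c, e in zip(calc, exp)]
    return statistics.mean(ratios), statistics.stdev(ratios)

# Illustrative values only: four hypothetical benchmark cases from one category.
calc = [1.0012, 0.9987, 1.0034, 0.9995]
exp = [1.0000, 1.0000, 1.0000, 1.0000]
mean, sd = c_over_e_summary(calc, exp)
```

A per-library table of such means and spreads, broken down by ICSBEP category and by element, is what supports conclusions like the Be/C/Fe/Zr/W attribution in the abstract.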
Use of the Apollo-II multigroup transport code for criticality calculations
International Nuclear Information System (INIS)
Coste, M.; Mathonniere, G.; Sanchez, R.; Stankovski, Z.; Van der Gucht, C.; Zmijarevic, I.
1992-01-01
APOLLO-II is a new-generation multigroup transport code for assembly calculations. The code has been designed to be used as a tool for reactor design as well as for the analysis and interpretation of small nuclear facilities. As the first step in a criticality calculation, the collision probability module of the APOLLO-II code can be used to generate cell- or assembly-homogenized, reaction-rate-preserving cross sections that account for self-shielding effects as well as for the fine-energy within-cell flux spectral variations. These cross section data can then be used either directly within the APOLLO-II code in a direct discrete ordinates multigroup transport calculation of a small nuclear facility or, more generally, be formatted by a post-processing module to be used by the multigroup diffusion code CRONOS-II or by the multigroup Monte Carlo code TRIMARAN
Benchmarking and Performance Management
Directory of Open Access Journals (Sweden)
Adrian TANTAU
2010-12-01
The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance, how well the firm performs in the actual market environment given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve their firm's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, consequently determine which performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking of firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent for operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe, and then proposes a method to forecast and benchmark, performance.
Energy Technology Data Exchange (ETDEWEB)
Akasaka, Ryo [Faculty of Humanities, Kyushu Lutheran College, 3-12-16 Kurokami, Kumamoto 860-8520 (Japan)
2009-01-15
The critical point of the water + ammonia mixture was calculated directly from the Helmholtz free energy formulation. The calculation was performed according to the critical point criteria expressed in terms of the derivatives of the Helmholtz free energy with respect to mole numbers. A smooth critical locus linking the critical points of pure water and pure ammonia was obtained. The critical locus showed good agreement with the most reliable experimental data. Simple correlations for the critical temperature, pressure, and molar volume at a given composition were developed. The information obtained in this study is helpful for the design and simulation of cycles using the water + ammonia mixture as the working fluid. (author)
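The criticality criteria referred to in the abstract are commonly written, for a binary mixture in the Heidemann-Khalil form (quoted here as a standard result, not taken from the paper itself), as the simultaneous vanishing of two determinants of composition derivatives of the Helmholtz energy A(T, V, n1, n2):

```latex
L(T,V) =
\begin{vmatrix}
A_{n_1 n_1} & A_{n_1 n_2} \\
A_{n_2 n_1} & A_{n_2 n_2}
\end{vmatrix} = 0,
\qquad
M(T,V) =
\begin{vmatrix}
A_{n_1 n_1} & A_{n_1 n_2} \\
\partial L / \partial n_1 & \partial L / \partial n_2
\end{vmatrix} = 0,
\qquad
A_{n_i n_j} \equiv
\left. \frac{\partial^2 A}{\partial n_i \, \partial n_j} \right|_{T,V}.
```

Solving L = 0 and M = 0 simultaneously for (T, V) at each fixed overall composition traces out the critical locus that the correlations in the abstract then condense into simple fits.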
International Nuclear Information System (INIS)
Medrano Asensio, Gregorio.
1976-06-01
A detailed power distribution calculation in a large power reactor requires the solution of the multigroup 3D diffusion equations. Using the finite difference method, this computation is too expensive to be performed for design purposes. This work is devoted to the single-channel continuous synthesis method: the choice of the trial functions and the determination of the mixing functions are discussed in detail; 2D and 3D results are presented. The method is applied to the calculation of the IAEA ''Benchmark'' reactor and the results obtained are compared with a finite element solution and with published results [fr
International Nuclear Information System (INIS)
Manturov, G.; Semenov, M.; Seregin, A.; Lykova, L.
2004-01-01
The BFS-62 critical experiments are currently used as benchmarks for verification of IPPE codes and nuclear data, which have been used in the study of loading a significant amount of Pu in fast reactors. The BFS-62 experiments have been performed at the BFS-2 critical facility of IPPE (Obninsk). The experimental program was arranged in such a way that the effect of replacing the uranium dioxide blanket by a steel reflector, as well as the effect of replacing UOX by MOX, on the main characteristics of the reactor model was studied. A wide experimental program was fulfilled in the core, including measurements of the criticality (keff), spectral indices, radial and axial fission rate distributions, control rod mock-up worth, the sodium void reactivity effect (SVRE), and some other important nuclear physics parameters. A series of 4 BFS-62 critical assemblies was designed for studying the changes in BN-600 reactor physics from the existing state to a hybrid core. All the assemblies model the reactor state prior to refueling, i.e. with all control rod mock-ups withdrawn from the core. The following items are chosen for analysis in this report: description of the critical assembly BFS-62-3A, the 3rd assembly in the series of 4 BFS critical assemblies studying the BN-600 reactor with a MOX-UOX hybrid zone and steel reflector; development of a 3D homogeneous calculation model for the BFS-62-3A critical experiment as the mock-up of the BN-600 reactor with hybrid zone and steel reflector; evaluation of the measured nuclear physics parameters keff and SVRE; and preparation of adjusted equivalent measured values for keff and SVRE. The main series of calculations is performed using the 3D HEX-Z diffusion code TRIGEX in 26 groups, with the ABBN-93 cross-section set. In addition, precise calculations are made, in 299 groups and Ps-approximation in scattering, by the Monte Carlo code MMKKENO and the discrete ordinates code TWODANT. All calculations are based on the common system
International Nuclear Information System (INIS)
Dos Santos, Adimir; Siqueira, Paulo de Tarso D.; Andrade e Silva, Graciete Simões; Grant, Carlos; Tarazaga, Ariel E.; Barberis, Claudia
2013-01-01
In the year 2008 the Atomic Energy National Commission (CNEA) of Argentina and the Brazilian Institute of Energy and Nuclear Research (IPEN), under the frame of the Nuclear Energy Argentine Brazilian Agreement (COBEN), among many other projects, included the project “Validation and Verification of Calculation Methods used for Research and Experimental Reactors”. At that time, it was established that the validation was to be performed with models implemented in the deterministic codes HUEMUL and PUMA (cell and reactor codes) developed by CNEA and those implemented in MCNP by CNEA and IPEN. The necessary data for these validations would correspond to theoretical-experimental reference cases in the research reactor IPEN/MB-01 located in São Paulo, Brazil. The staff of the group Reactor and Nuclear Power Studies (SERC) of CNEA, on the Argentine side, performed calculations with deterministic models (HUEMUL-PUMA) and probabilistic methods (MCNP), modeling a great number of physical situations of the reactor, which previously had been studied and modeled by members of the Center of Nuclear Engineering of IPEN, whose results were extensively provided to CNEA. In this paper, comparisons of calculated and experimental results are shown for critical configurations, temperature coefficients, kinetic parameters, and spatial distributions of fission rates evaluated with probabilistic models. (author)
Static analysis of material testing reactor cores: critical core calculations
International Nuclear Information System (INIS)
Nawaz, A. A.; Khan, R. F. H.; Ahmad, N.
1999-01-01
A methodology has been described to study the effect of the number of fuel plates per fuel element on critical cores of Material Testing Reactors (MTR). When the number of fuel plates is varied in a fuel element while keeping the fuel loading per fuel element constant, the fuel density in the fuel plates varies. Due to this variation, the water channel width needs to be recalculated. For a given number of fuel plates, the water channel width was determined by optimizing k-infinity using the transport theory lattice code WIMS-D/4. The dimensions of the fuel element and control fuel element were determined using this optimized water channel width. For the calculated dimensions, the critical cores were determined for the given number of fuel plates per fuel element by using the three-dimensional diffusion theory code CITATION. The optimization of water channel width gives rise to a channel width of 2.1 mm when the number of fuel plates is 23 with 290 g 235U fuel loading, which is the same as in the case of Pakistan Research Reactor-1 (PARR-1). Although a decrease in the number of fuel plates results in an increase in the optimal water channel width, the thickness of the standard fuel element (SFE) and control fuel element (CFE) decreases, giving rise to compact critical and equilibrium cores. The criticality studies of PARR-1 are in good agreement with the predictions.
Calculation and analysis for a series of enriched uranium bare sphere critical assemblies
International Nuclear Information System (INIS)
Yang Shunhai
1994-12-01
The imported MARIA reactor fuel assembly program system was adapted to the CYBER 825 computer at the China Institute of Atomic Energy and used extensively for a series of enriched uranium bare sphere critical assemblies. The MARIA auxiliary resonance-modification program MA is designed to take account of the effects of resonance fission and absorption on the calculated results. With it, the multigroup constants in the library attached to the MARIA program are revised based on the U.S. Evaluated Nuclear Data File ENDF/B-IV, and the related nuclear data files are replaced. The reactor geometric buckling and multiplication factor are then given in the output tapes. The accuracy of the calculated results is comparable with that of Monte Carlo and Sn methods, and the agreement with experimental results is within 1%. (5 refs., 4 figs., 3 tabs.)
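The "geometry buckling and multiplication factor" output mentioned above can be connected by the textbook one-group relation. The sketch below computes a bare-sphere critical radius from k-infinity and the diffusion area; it is a one-group illustration, far cruder than the multigroup MARIA calculation, and the numbers in the usage line are invented:

```python
import math

def bare_sphere_critical_radius(k_inf, diff_area):
    """One-group diffusion estimate of the critical radius of a bare sphere:
    k_eff = k_inf / (1 + L^2 * B^2) with geometric buckling B^2 = (pi/R)^2,
    so criticality (k_eff = 1) gives R = pi * L / sqrt(k_inf - 1).
    diff_area is the diffusion area L^2 in cm^2; the extrapolation
    length is ignored."""
    if k_inf <= 1.0:
        raise ValueError("k_inf must exceed 1 for a finite critical size")
    b2 = (k_inf - 1.0) / diff_area   # material buckling, cm^-2
    return math.pi / math.sqrt(b2)   # critical radius, cm
```

For example, an illustrative k-infinity of 2 with a diffusion area of 100 cm² gives a critical radius of 10π ≈ 31.4 cm; matching material and geometric buckling in this way is the one-group skeleton of the buckling output described in the abstract.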
Calculational criticality analyses of 10- and 20-MW UF6 freezer/sublimer vessels
International Nuclear Information System (INIS)
Jordan, W.C.
1993-02-01
Calculational criticality analyses have been performed for 10- and 20-MW UF6 freezer/sublimer vessels. The freezer/sublimers have been analyzed over a range of conditions that encompass normal operation and abnormal conditions. The effects of HF moderation of the UF6 in each vessel have been considered for uranium enriched between 2 and 5 wt % 235U. The results indicate that the nuclearly safe enrichments originally established for the operation of a 10-MW freezer/sublimer, based on a hydrogen-to-uranium moderation ratio of 0.33, are acceptable. If strict moderation control can be demonstrated for hydrogen-to-uranium moderation ratios less than 0.33, then the enrichment limits for the 10-MW freezer/sublimer may be increased slightly. The calculations performed also allow safe enrichment limits to be established for a 20-MW freezer/sublimer under moderation control
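The moderation-control logic above reduces to a simple envelope check. In the sketch below, the 0.33 H/U ratio and the 2-5 wt % enrichment range are taken from the abstract, but they serve only to illustrate the check; real limits come from the criticality analysis itself:

```python
# Illustrative moderation-control envelope check for a freezer/sublimer.
# The default limits (H/U <= 0.33, enrichment <= 5 wt % 235U) are taken
# from the analysis summary and are NOT authoritative safety limits.
def within_envelope(h_to_u, enrichment_wt_pct,
                    h_to_u_limit=0.33, enrichment_limit=5.0):
    """True if the state lies inside the analyzed moderation envelope."""
    return h_to_u <= h_to_u_limit and enrichment_wt_pct <= enrichment_limit

print(within_envelope(0.30, 4.5))   # True: both limits respected
print(within_envelope(0.40, 4.5))   # False: moderation ratio exceeded
```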
Status on benchmark testing of CENDL-3
Liu Ping
2002-01-01
CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has been completed and recently distributed for benchmark analysis. The processing was carried out using the NJOY nuclear data processing code system. The benchmark calculations and analyses were done with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A, and the calculated results were compared with experimental results and with calculations based on ENDF/B-VI. In most thermal and fast uranium criticality benchmarks, the keff values calculated with CENDL-3 were in good agreement with experimental results. In the fast plutonium cores, the keff values were improved significantly with CENDL-3. This is due to the reevaluation of the fission spectrum and elastic angular distributions of 239Pu and 240Pu. Compared with other evaluated data libraries, CENDL-3 underestimated the keff values for most spherical or cylindrical assemblies of plutonium or uranium with beryllium
Critical groups vs. representative person: dose calculations due to predicted releases from USEXA
International Nuclear Information System (INIS)
Ferreira, N.L.D.; Rochedo, E.R.R.; Mazzilli, B.P.
2013-01-01
The critical group of the Centro Experimental Aramar (CEA) site was previously defined based on the effluent releases to the environment from the facilities already operational at CEA. In this work, effective doses are calculated for members of the critical group considering the predicted potential uranium releases from the Uranium Hexafluoride Production Plant (USEXA). Basically, this work studies the behavior of the resulting doses with respect to the type of habit data used in the analysis, and two distinct situations are considered: (a) the use of average values obtained from official institutions (IBGE, IEA-SP, CNEN, IAEA) and from the literature; and (b) the use of the 95th percentile of the values derived from distributions fitted to the obtained habit data. The first option corresponds to the way data were used in the definition of the CEA critical group in former assessments, while the second corresponds to the use of data in deterministic assessments, as recommended by the ICRP for estimating doses to the so-called 'representative person'. (author)
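The difference between the two habit-data options can be sketched numerically: a dose computed with the average of a survey variable versus one computed with its 95th percentile, as in the ICRP "representative person" approach. All numbers below are hypothetical, not CEA/USEXA data:

```python
# Illustrative sketch: mean vs. 95th-percentile habit data for a dose
# estimate. Survey values and the unit dose factor are invented.
def percentile(data, p):
    """Linear-interpolation percentile, p in [0, 100]."""
    s = sorted(data)
    k = (len(s) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# Hypothetical annual consumption survey (kg/a) and a constant unit dose.
consumption = [5, 8, 10, 12, 15, 18, 20, 25, 30, 60]
dose_per_kg = 2.0e-6  # Sv/kg, assumed constant for the sketch

mean_dose = sum(consumption) / len(consumption) * dose_per_kg
p95_dose = percentile(consumption, 95) * dose_per_kg
print(p95_dose > mean_dose)  # True: the percentile habit is more limiting
```

For right-skewed habit distributions the 95th percentile can exceed the mean by a large factor, which is exactly the behavior the study examines.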
International Nuclear Information System (INIS)
Hoffman, E.L.; Ammerman, D.J.
1993-01-01
A series of tests investigating dynamic pulse buckling of a cylindrical shell under axial impact is compared to several finite element simulations of the event. The purpose of the study is to compare the performance of the various analysis codes and element types with respect to a problem which is applicable to radioactive material transport packages, and ultimately to develop a benchmark problem to qualify finite element analysis codes for the transport package design industry
2016-06-10
Under the Medicare Shared Savings Program (Shared Savings Program), providers of services and suppliers that participate in an Accountable Care Organization (ACO) continue to receive traditional Medicare fee-for-service (FFS) payments under Parts A and B, but the ACO may be eligible to receive a shared savings payment if it meets specified quality and savings requirements. This final rule addresses changes to the Shared Savings Program, including: Modifications to the program's benchmarking methodology, when resetting (rebasing) the ACO's benchmark for a second or subsequent agreement period, to encourage ACOs' continued investment in care coordination and quality improvement; an alternative participation option to encourage ACOs to enter performance-based risk arrangements earlier in their participation under the program; and policies for reopening of payment determinations to make corrections after financial calculations have been performed and ACO shared savings and shared losses for a performance year have been determined.
Random geometry model in criticality calculations of solutions containing Raschig rings
International Nuclear Information System (INIS)
Teng, S.P.; Lindstrom, D.G.
1979-01-01
The criticality constants of fissile solutions containing borated Raschig rings are evaluated using the Monte Carlo code KENO IV with various geometry models. In addition to the models used by other investigators, a new model, the random geometry model, is presented to simulate a system of randomly oriented Raschig rings in solution. A technique to obtain the material thickness distribution functions of solution and rings for use in the random geometry model is also presented. Comparison between the experimental data and the results calculated with the Monte Carlo method and the various geometry models indicates that the random geometry model is a reasonable alternative to the models previously used to describe Raschig-ring-filled solutions. The random geometry model also makes it possible to describe an array of Raschig-ring-filled tanks, which is not possible with techniques using the other models
International Nuclear Information System (INIS)
Lima Barros, M. de.
1982-04-01
The multiplication factors of several low-enrichment systems (3.5% and 3.2% in the isotope 235U), aimed at the storage of fuel for ANGRA-I and ANGRA-II, were calculated by the Monte Carlo method with the KENO-IV computer code and the 16-energy-group Hansen-Roach cross-section library. The Monte Carlo method is especially suitable for calculating the multiplication factor because it is one of the most accurate solution methods and allows the description of complex three-dimensional systems. Various sensitivity tests of this method were performed in order to establish the most convenient way of working with the KENO-IV code. The criticality safety of the fissile material storage of the 'Fabrica de Elementos Combustiveis' was analyzed by the Monte Carlo method. (Author) [pt
Benchmarking & european sustainable transport policies
DEFF Research Database (Denmark)
Gudmundsson, Henrik
2003-01-01
way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the socalled Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...... to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One...
A Heterogeneous Medium Analytical Benchmark
International Nuclear Information System (INIS)
Ganapol, B.D.
1999-01-01
A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels of the coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate, benchmark-quality results (4 to 5 places of accuracy)
Grafen, M.; Delbeck, S.; Busch, H.; Heise, H. M.; Ostendorf, A.
2018-02-01
Mid-infrared spectroscopy hyphenated with micro-dialysis is an excellent method for monitoring metabolic blood parameters, as it enables the concurrent, reagent-free and precise measurement of multiple clinically relevant substances such as glucose, lactate and urea in micro-dialysates of blood or interstitial fluid. For a marketable implementation, quantum cascade lasers (QCLs) seem to represent a favourable technology due to their high degree of miniaturization and potentially low production costs. In this work, an external cavity (EC) QCL-based spectrometer and two Fourier-transform infrared (FTIR) spectrometers were benchmarked with regard to the precision, accuracy and long-term stability needed for the monitoring of critically ill patients. For the tests, ternary aqueous solutions of glucose, lactate and mannitol (the latter for dialysis recovery determination) were measured in custom-made flow-through transmission cells of different pathlengths and analyzed with Partial Least Squares calibration models. It was revealed that the wavenumber tuning speed of the QCL had a severe impact on the EC-mirror trajectory, owing to the matching of the digital-analog-converter step frequency with the mechanical resonance frequency of the mirror actuation. By selecting an appropriate tuning speed, the mirror oscillations acted as a hardware smoothing filter for the significant intensity variations caused by mode hopping. Besides the tuning speed, the effects of averaging over multiple spectra and of software smoothing parameters (Savitzky-Golay filters and FT-smoothing) were investigated. The final settings led to a performance of the QCL system that was comparable with a research FTIR spectrometer and even surpassed the performance of a small FTIR mini-spectrometer.
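The Savitzky-Golay smoothing mentioned above can be sketched with the classic 5-point quadratic filter; this is a generic illustration, not the window or polynomial order actually used in the study:

```python
# Minimal 5-point quadratic/cubic Savitzky-Golay smoother using the
# classic convolution weights (-3, 12, 17, 12, -3)/35. The two points at
# each end are left unsmoothed in this sketch.
def savgol5(y):
    """Smooth interior points of y with a 5-point Savitzky-Golay filter."""
    c = (-3, 12, 17, 12, -3)
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(cj * y[i + j - 2] for j, cj in enumerate(c)) / 35.0
    return out

# A quadratic signal passes through unchanged (the filter fits a local
# quadratic exactly), which is why the filter preserves peak shapes better
# than a plain moving average while still attenuating noise.
quadratic = [float(x * x) for x in range(8)]
assert savgol5(quadratic)[2:-2] == quadratic[2:-2]
```

Production code would normally use a library implementation (e.g., SciPy's `savgol_filter`) rather than hand-rolled weights.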
A case study and critical assessment in calculating power usage effectiveness for a data centre
International Nuclear Information System (INIS)
Brady, Gemma A.; Kapur, Nikil; Summers, Jonathan L.; Thompson, Harvey M.
2013-01-01
Highlights: • A case study PUE calculation is carried out on a data centre by using open source specifications. • The PUE metric does not drive improvements in the efficiencies of IT processes. • The PUE does not fairly represent energy use; an increase in IT load can lead to a decrease in the PUE. • Once a low PUE is achieved, power supply efficiency and IT load have the greatest impact on its value. - Abstract: Metrics commonly used to assess the energy efficiency of data centres are analysed through performing and critiquing a case study calculation of energy efficiency. Specifically, the metric Power Usage Effectiveness (PUE), which has become a de facto standard within the data centre industry, will be assessed. This is achieved by using open source specifications for a data centre in Prineville, Oregon, USA provided by the Open Compute Project launched by the social networking company Facebook. The usefulness of the PUE metric to the IT industry is critically assessed and it is found that whilst it is important for encouraging lower energy consumption in data centres, it does not represent an unambiguous measure of energy efficiency
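The ambiguity highlighted above (an increase in IT load can decrease the PUE) follows directly from the metric's definition. A minimal sketch with illustrative numbers, not the Prineville data:

```python
# PUE = total facility energy / IT equipment energy. With a fixed
# non-IT overhead, raising the IT load lowers the PUE without any
# actual efficiency improvement, which is the critique made above.
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness (dimensionless, >= 1.0 in practice)."""
    return total_facility_kwh / it_equipment_kwh

overhead = 200.0                       # cooling, lighting, losses (kWh)
print(pue(1000.0 + overhead, 1000.0))  # 1.2
print(pue(1500.0 + overhead, 1500.0))  # ~1.13: "better" PUE, same overhead
```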
Alloui, Mebarka; Belaidi, Salah; Othmani, Hasna; Jaidane, Nejm-Eddine; Hochlaf, Majdi
2018-03-01
We performed benchmark studies on the molecular geometry, electronic properties and vibrational analysis of imidazole using semi-empirical, density functional theory and post-Hartree-Fock methods. These studies validated the use of AM1 for the treatment of larger systems. We then examined the structural, physical and chemical relationships for a series of imidazole derivatives acting as angiotensin II AT1 receptor blockers using AM1. QSAR studies were carried out for these imidazole derivatives using a combination of various physicochemical descriptors. A multiple linear regression procedure was used to model the relationships between the molecular descriptors and the activity of the imidazole derivatives. The results validate the derived QSAR model.
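The multiple linear regression step of a QSAR fit can be sketched generically. The descriptor values and coefficients below are synthetic, not the imidazole data; the point is the normal-equations machinery:

```python
# Sketch of multiple linear regression via the normal equations
# (X^T X) beta = X^T y, solved with Gauss-Jordan elimination.
# Synthetic data: activity = 0.5 + 2*d1 - 1*d2 exactly.
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def mlr(X, y):
    """Least-squares fit of y = b0 + b1*x1 + ... via normal equations."""
    Xa = [[1.0] + row for row in X]       # prepend intercept column
    p = len(Xa[0])
    XtX = [[sum(r[i] * r[j] for r in Xa) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xa, y)) for i in range(p)]
    return solve(XtX, Xty)

X = [[1.0, 2.0], [2.0, 1.0], [3.0, 5.0], [4.0, 2.0], [5.0, 7.0]]
y = [0.5 + 2.0 * d1 - 1.0 * d2 for d1, d2 in X]
beta = mlr(X, y)
print([round(v, 6) for v in beta])  # [0.5, 2.0, -1.0]
```

Real QSAR work would add descriptor selection and cross-validation on top of the bare fit.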
Benchmarking of nuclear economics tools
International Nuclear Information System (INIS)
Moore, Megan; Korinny, Andriy; Shropshire, David; Sadhankar, Ramesh
2017-01-01
Highlights: • INPRO and GIF economic tools exhibited good alignment in total capital cost estimation. • Subtle discrepancies in the cost result from differences in financing and fuel cycle assumptions. • A common set of assumptions was found to reduce the discrepancies to 1% or less. • Opportunities for harmonisation of economic tools exist. - Abstract: Benchmarking of the economics methodologies developed by the Generation IV International Forum (GIF) and the International Atomic Energy Agency’s International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO) was performed for three Generation IV nuclear energy systems. The Economic Modeling Working Group of GIF developed an Excel-based spreadsheet package, G4ECONS (Generation 4 Excel-based Calculation Of Nuclear Systems), to calculate the total capital investment cost (TCIC) and the levelised unit energy cost (LUEC). G4ECONS is sufficiently generic that it can accept the types of projected input, performance and cost data expected to become available for Generation IV systems through the various development phases, and it can model both open and closed fuel cycles. The Nuclear Energy System Assessment (NESA) Economic Support Tool (NEST) was developed to enable an economic analysis using the INPRO methodology to easily calculate outputs including the TCIC, LUEC and other financial figures of merit, including internal rate of return, return on investment and net present value. NEST is also Excel based and can be used to evaluate nuclear reactor systems using the open fuel cycle, MOX (mixed oxide) fuel recycling and closed cycles. A Super Critical Water-cooled Reactor system with an open fuel cycle and two Fast Reactor systems, one with a break-even fuel cycle and another with a burner fuel cycle, were selected for the benchmarking exercise. Published data on capital and operating costs were used for economics analyses using the G4ECONS and NEST tools. Both G4ECONS and
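The levelised-cost arithmetic common to both tools can be sketched as follows. All inputs are placeholder values, not the published benchmark data, and real tools add escalation, construction-period interest, and fuel-cycle detail on top of this skeleton:

```python
# Hedged sketch of a levelised unit energy cost (LUEC) calculation of the
# kind G4ECONS and NEST perform: annualize capital with a capital recovery
# factor, add annual O&M and fuel costs, divide by annual generation.
def capital_recovery_factor(rate, years):
    """Annuity factor converting an upfront cost to equal annual payments."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def luec(capital, rate, years, annual_om, annual_fuel, annual_mwh):
    """Levelised unit energy cost in $/MWh (simplified, no escalation)."""
    annualized_capital = capital * capital_recovery_factor(rate, years)
    return (annualized_capital + annual_om + annual_fuel) / annual_mwh

# Placeholder inputs: 1000 MWe plant, 90% capacity factor, 40-year life,
# 5% discount rate, $4B overnight cost, $100M O&M, $50M fuel per year.
annual_mwh = 1000 * 0.9 * 8760          # ~7.9 million MWh/year
cost = luec(4.0e9, 0.05, 40, 1.0e8, 5.0e7, annual_mwh)
print(round(cost, 1))                   # tens of $/MWh for these inputs
```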
International Nuclear Information System (INIS)
Kim, In Young; Choi, Heui Joo; Cho, Dong Geun
2013-01-01
The primary function of any repository is to prevent the spread of dangerous materials into the surrounding environment. In the case of a high-level radioactive waste repository, radioactive material must be isolated and retarded for a sufficient decay time to minimize the radiation hazard to humans and the surrounding environment. Sub-criticality of the disposal canister and of the whole disposal system is a minimum requisite to prevent multiplication of the radiation hazard. In this study, the criticality of the disposal canister and of the deep borehole disposal (DBD) system for trans-metal waste is calculated to check compliance with sub-criticality. A preliminary calculation of the criticality of a conceptual deep borehole disposal system and its canister for trans-metal waste during the operational phase is conducted. The calculated criticalities at all temperatures are sub-critical, and the criticalities of the canister and the DBD system, accounting for temperature, are expected to be approximately 0.34932 and 0.37618, respectively. There are obvious limitations in this study. To obtain reliable data, the exact elemental composition and temperature of each system component must be specified, and proper cross sections for each component temperature must be adopted. However, many assumptions, such as simplified elemental concentrations and isothermal component temperatures, are adopted in this study. These data must be improved in future work to increase reliability. In addition, post-closure criticality analyses including geological, thermal, hydrological, mechanical, and chemical mechanisms, especially fissile material re-deposition by precipitation and sorption, must be considered as future work to ascertain the criticality safety of the DBD system
DEFF Research Database (Denmark)
Cismondi, Martin; Michelsen, Michael Locht
2007-01-01
A general strategy for global phase equilibrium calculations (GPEC) in binary mixtures is presented in this work along with specific methods for calculation of the different parts involved. A Newton procedure using composition, temperature and Volume as independent variables is used for calculati...
Automatic treatment of the variance estimation bias in TRIPOLI-4 criticality calculations
International Nuclear Information System (INIS)
Dumonteil, E.; Malvagi, F.
2012-01-01
The central limit theorem (CLT) states conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed. The use of Monte Carlo transport codes, such as Tripoli4, relies on those conditions. While these are verified in radiation protection applications (the cycles provide independent measurements of fluxes and related quantities), the hypothesis of independent estimates/cycles is broken in criticality mode: the power iteration technique used in this mode couples a generation to its progeny. Often, after so-called 'source convergence', this coupling almost disappears (the solution is close to equilibrium), but for loosely coupled systems, such as PWR or large nuclear cores, equilibrium is never found, or at least may take a long time to reach, and the variance estimate allowed by the CLT is under-evaluated. In this paper we first propose, by means of two different methods, to evaluate the typical correlation length, measured in number of cycles, and then use this information to diagnose correlation problems and to provide an improved variance estimate. The two methods are based on Fourier spectral decomposition and on the lag-k autocorrelation calculation. A theoretical model of the autocorrelation function, based on Gauss-Markov stochastic processes, is also presented. Tests are performed with Tripoli4 on a PWR pin cell. (authors)
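The lag-k autocorrelation approach can be sketched as follows. The AR(1) series stands in for correlated cycle tallies, and the correction applied is the standard variance-of-the-mean formula for a correlated series, not the specific estimator of the paper:

```python
import random

# Sketch: naive vs. autocorrelation-corrected variance of the sample mean
# for a correlated series, illustrating why independent-cycle variance
# estimates are too small in criticality mode.
def lag_autocorr(x, k):
    """Sample lag-k autocorrelation of the series x."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    ck = sum((x[i] - m) * (x[i + k] - m) for i in range(n - k)) / n
    return ck / c0

def var_of_mean(x, kmax=0):
    """Variance of the mean; kmax > 0 applies the correlation correction
    factor 1 + 2 * sum_{k<=kmax} (1 - k/n) * rho_k."""
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / (n - 1)
    corr = 1.0 + 2.0 * sum((1 - k / n) * lag_autocorr(x, k)
                           for k in range(1, kmax + 1))
    return s2 / n * corr

# AR(1) surrogate for correlated cycle tallies (phi = 0.8).
random.seed(7)
x, v = [], 0.0
for _ in range(2000):
    v = 0.8 * v + random.gauss(0.0, 1.0)
    x.append(v)

naive = var_of_mean(x)              # assumes independent cycles
corrected = var_of_mean(x, kmax=50)
print(corrected > naive)            # True: correlation inflates the variance
```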
A power spectrum approach to tally convergence in Monte Carlo criticality calculation
International Nuclear Information System (INIS)
Ueki, Taro
2017-01-01
In Monte Carlo criticality calculation, confidence interval estimation is based on the central limit theorem (CLT) for a series of tallies from generations in equilibrium. A fundamental assertion resulting from the CLT is the convergence in distribution (CID) of the interpolated standardized time series (ISTS) of tallies. In this work, spectral analysis of the ISTS has been conducted in order to assess the convergence of tallies in terms of CID. The numerical results obtained indicate that the power spectrum of the ISTS equals the theoretically predicted power spectrum of Brownian motion for tallies of the effective neutron multiplication factor; on the other hand, the power spectrum of the ISTS of a strongly correlated series of tallies from local powers fluctuates wildly while maintaining the spectral form of fractional Brownian motion. The latter result is evidence of a case where a series of tallies is away from CID, while the spectral form supports the normality assumption on the sample mean. It is also demonstrated that one can obtain an unbiased estimate of the standard deviation of the sample mean well before CID occurs. (author)
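The spectral check described can be sketched with a plain DFT periodogram: the cumulative sum of white noise (a discrete Brownian path) shows power falling off roughly as 1/f^2, i.e., strongly concentrated at low frequencies. This is a generic illustration, not the ISTS construction of the paper:

```python
import cmath
import random

# Periodogram of a discrete Brownian path: power should be concentrated
# at low frequencies (~1/f^2), as expected for Brownian motion.
def periodogram(x):
    """Plain-DFT periodogram |X(f)|^2 / n for f = 1 .. n/2 - 1."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                    for t in range(n))) ** 2 / n
            for f in range(1, n // 2)]

random.seed(3)
brownian, s = [], 0.0
for _ in range(256):
    s += random.gauss(0.0, 1.0)   # cumulative sum of white noise
    brownian.append(s)

p = periodogram(brownian)
low = sum(p[:10]) / 10    # average power at the lowest frequencies
high = sum(p[-10:]) / 10  # average power at the highest frequencies
print(low > 10 * high)    # True: low-frequency power dominates
```

A departure of the empirical spectrum from the expected 1/f^2 form is the kind of diagnostic signal the paper exploits.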
Energy Technology Data Exchange (ETDEWEB)
Chrysos, Michael, E-mail: michel.chrysos@univ-angers.fr; Rachet, Florent [LUNAM Université, Université d’Angers, CNRS UMR 6200, Laboratoire MOLTECH-Anjou, 2 Bd Lavoisier, 49045 Angers (France); Dixneuf, Sophie [Centre du Commissariat à l’Énergie Atomique de Grenoble, Laboratoire CEA-bioMérieux, Bât 40.20, 17 rue des Martyrs, 38054 Grenoble (France)
2015-07-14
This is the long-overdue answer to the discrepancies observed between theory and experiment in Ar{sub 2} regarding both the isotropic Raman spectrum and the second refractivity virial coefficient, B{sub R} [Gaye et al., Phys. Rev. A 55, 3484 (1997)]. At the origin of this progress is the advent (after 1997) of advanced computational methods for weakly interconnected neutral species at close separations. Here, we report agreement between the previously taken Raman measurements and quantum lineshapes now computed using large-scale CCSD or smartly constructed MP2 induced-polarizability data. By using these measurements as a benchmark tool, we assess the performance of various other ab initio computed data for the mean polarizability α, and we show that excellent agreement with the most recently measured value of B{sub R} is reached. We propose an even more refined model for α, which is the solution of the inverse-scattering problem and whose lineshape matches the measured spectrum exactly over the entire frequency-shift range probed.
Directory of Open Access Journals (Sweden)
Avramović Ivana
2007-01-01
Full Text Available The H5B is a concept of an accelerator-driven sub-critical research facility (ADSRF) being developed over the last couple of years at the Vinča Institute of Nuclear Sciences, Belgrade, Serbia. Using the well-known computer codes MCNPX and MCNP, this paper presents the results of a target study and neutron flux calculations in the sub-critical core. The neutron source is generated by the interaction of a proton or deuteron beam with the target placed inside the sub-critical core. Results for the total neutron flux density escaping the target and calculations of neutron yields for different target materials are also given. Neutrons escaping the target volume, with their group spectra (first step), are used to specify a neutron source for further numerical simulations of the neutron flux density in the sub-critical core (second step). Results of the calculations of the neutron effective multiplication factor keff and the neutron generation time L for the ADSRF model are also presented, as are neutron spectra calculations for an ADSRF with a uranium target (highest values of the neutron yield) for selected sub-critical core cells for both beams.
International Nuclear Information System (INIS)
Kawai, Masayoshi
1984-01-01
Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments on neutron transmission through iron blocks, performed at KFK with a Cf-252 neutron source and at ORNL with a collimated neutron beam from a reactor. The analyses were made with the shielding analysis code system RADHEAT-V4 developed at JAERI, and the calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The d-t neutron transmission measurements through a carbon sphere made at LLNL are also analyzed, preliminarily, using the revised JENDL data for fusion neutronics calculations. (author)
Radiation Detection Computational Benchmark Scenarios
Energy Technology Data Exchange (ETDEWEB)
Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.
2013-09-24
Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for
Modeling of the ORNL PCA Benchmark Using SCALE6.0 Hybrid Deterministic-Stochastic Methodology
Directory of Open Access Journals (Sweden)
Mario Matijević
2013-01-01
Full Text Available Revised guidelines with the support of computational benchmarks are needed for the regulation of the allowed neutron irradiation of reactor structures during the power plant lifetime. Currently, US NRC Regulatory Guide 1.190 is the effective guideline for reactor dosimetry calculations. The well-known international shielding database SINBAD contains a large selection of models for benchmarking neutron transport methods. In this paper the PCA benchmark has been chosen from SINBAD for qualification of our methodology for pressure vessel neutron fluence calculations, as required by Regulatory Guide 1.190. The SCALE6.0 code package, developed at Oak Ridge National Laboratory, was used for modeling of the PCA benchmark. The CSAS6 criticality sequence of the SCALE6.0 code package, which includes the KENO-VI Monte Carlo code, as well as the MAVRIC/Monaco hybrid shielding sequence, was utilized for calculation of equivalent fission fluxes. The shielding analysis was performed using the multigroup shielding library v7_200n47g derived from the general purpose ENDF/B-VII.0 library. As a source of response functions for reaction rate calculations with MAVRIC we used the international reactor dosimetry libraries (IRDF-2002 and IRDF-90.v2) and appropriate cross-sections from the transport library v7_200n47g. The comparison of calculational results and benchmark data showed good agreement of the calculated and measured equivalent fission fluxes.
Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor
Energy Technology Data Exchange (ETDEWEB)
John Darrell Bess
2009-05-01
A benchmark model of the initial fully-loaded start-up core critical of Japan’s High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. Sensitivity of the results to the graphite impurity content might imply that further evaluation of the graphite content could significantly improve calculated results. Proper characterization of graphite for future Next Generation Nuclear Power reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficient, and reaction-rate distribution measurements. Long term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.
ten Haken, Bernard; Godeke, A.; ten Kate, Herman H.J.
1994-01-01
A simple model is presented that can describe the electro-mechanical state of a multifilamentary wire. An elastic cylinder model is used to derive the strain state analytically. Axial and transverse forces cause a position-dependent critical current density in the wire. The integral critical current
International Nuclear Information System (INIS)
Bedrosian, B.; Barbela, M.; Drenick, R.F.; Tsirk, A.
1980-10-01
The critical excitation method provides a new, alternative approach to methods presently used for seismic analysis of nuclear power plant structures. The critical excitation method offers the advantages that: (1) it side-steps the assumptions regarding the probability distribution of ground motions, and (2) it does not require an artificial, and to some extent arbitrarily generated, time history of ground motion, both features to which structural integrity analyses are sensitive. Potential utility of the critical excitation method is studied from the user's viewpoint. The method is reviewed and compared with the response spectrum method used in current practice, utilizing the reactor buildings of a PWR and a BWR plant in case studies. Two types of constraints on critical excitation were considered in the study. In one case, only an intensity limit was used. In the other case, imposition of an intensity limit together with limits on the maximum acceleration and/or velocity for the critical excitation is considered
FRIB driver linac vacuum model and benchmarks
Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume
2014-01-01
The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator. (C) 2013 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Moeller, M.P.; Scherpelz, R.I.; Desrosiers, A.E.
1982-01-01
This work analyzes the sensitivity of calculated doses to critical assumptions for offsite consequences following a PWR-2 accident at a nuclear power reactor. The calculations include three radiation dose pathways: internal dose resulting from inhalation, external dose from exposure to the plume, and external dose from exposure to contaminated ground. The critical parameters are the time period of integration for the internal dose commitment and the duration of residence on contaminated ground. The data indicate that the calculated offsite whole-body dose can vary by as much as 600% depending upon the parameters assumed. When offsite radiation doses determine the size of emergency planning zones, this uncertainty has a significant effect on the resources allocated to emergency preparedness.
Static benchmarking of the NESTLE advanced nodal code
International Nuclear Information System (INIS)
Mosteller, R.D.
1997-01-01
Results from the NESTLE advanced nodal code are presented for multidimensional numerical benchmarks representing four different types of reactors, and predictions from NESTLE are compared with measured data from pressurized water reactors (PWRs). The numerical benchmarks include cases representative of PWRs, boiling water reactors (BWRs), CANDU heavy water reactors (HWRs), and high-temperature gas-cooled reactors (HTGRs). The measured PWR data include critical soluble boron concentrations and isothermal temperature coefficients of reactivity. The results demonstrate that NESTLE correctly solves the multigroup diffusion equations for both Cartesian and hexagonal geometries, that it reliably calculates k-eff and reactivity coefficients for PWRs, and that--subsequent to the incorporation of additional thermal-hydraulic models--it will be able to perform accurate calculations for the corresponding parameters in BWRs, HWRs, and HTGRs as well.
Czech Academy of Sciences Publication Activity Database
Mládek, Arnošt; Krepl, Miroslav; Svozil, Daniel; Čech, P.; Otyepka, M.; Banáš, P.; Zgarbová, M.; Jurečka, P.; Šponer, Jiří
2013-01-01
Roč. 15, č. 19 (2013), s. 7295-7310 ISSN 1463-9076 R&D Projects: GA ČR(CZ) GAP208/11/1822 Grant - others:GA MŠk(CZ) ED1.1.00/02.0068 Program:ED Institutional research plan: CEZ:AV0Z50040702 Institutional support: RVO:68081707 Keywords : GAUSSIAN-BASIS SETS * GENERALIZED GRADIENT APPROXIMATION * CORRELATED MOLECULAR CALCULATIONS Subject RIV: BO - Biophysics Impact factor: 4.198, year: 2013
ICSBEP Benchmarks For Nuclear Data Applications
Briggs, J. Blair
2005-05-01
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.
Analysis of VENUS-3 benchmark experiment
International Nuclear Information System (INIS)
Kodeli, I.; Sartori, E.
1998-01-01
The paper presents the revision and the analysis of the VENUS-3 benchmark experiment performed at CEN/SCK, Mol (Belgium). This benchmark was found to be particularly suitable for the validation of current calculational tools such as 3-D neutron transport codes, and in particular of the 3-D sensitivity and uncertainty analysis code developed within the EFF project. The compilation of the integral experiment was integrated into the SINBAD electronic database for storing and retrieving information about shielding experiments for nuclear systems. SINBAD now includes 33 reviewed benchmark descriptions and several compilations awaiting review, among them many benchmarks relevant to pressure vessel dosimetry system validation. (author)
Critical experiments on STACY homogeneous core containing 10% enriched uranyl nitrate solution
International Nuclear Information System (INIS)
Miyoshi, Yoshinori; Yamamoto, Toshihiro; Tonoike, Kotaro; Yamane, Yuichi; Watanabe, Shouichi
2003-01-01
In order to investigate the criticality properties of low-enriched uranyl nitrate solution treated in reprocessing facilities for the LWR fuel cycle, systematic and high-precision critical experiments have been performed at the Static Experiment Critical Facility (STACY) since 1995. Criticality benchmark data on 10% enriched uranyl nitrate solution for single-core and multiple-core systems have been accumulated using cylindrical and slab-type core tanks. This paper overviews the main data and related criticality calculation results obtained using a standard criticality safety calculation code system. (author)
Measuring Distribution Performance? Benchmarking Warrants Your Attention
Energy Technology Data Exchange (ETDEWEB)
Ericson, Sean J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Alvarez, Paul [The Wired Group
2018-04-13
Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.
International Nuclear Information System (INIS)
Rubin, I.E.; Dneprovskaya, N.M.
2005-01-01
A technique for the calculation of reactor lattices by means of transmission probabilities, taking into account scattering anisotropy, is generalized to the multigroup case. The errors of the calculated multiplication factors and energy release distributions do not, in practice, exceed the errors of these values obtained by the Monte Carlo method. The proposed method is most effective for determining small difference effects.
A new approach for the calculation of critical organ dose in nuclear medicine applications
International Nuclear Information System (INIS)
Yasar, Dogan; Tugrul, A. Beril
2005-01-01
The geometrical factor, which accounts for the relative positions of the radiation source and the detector, is frequently used in radiation measurement and calculation methods. In this study, the geometrical factor is used to propose a new model for estimating the absorbed dose in nuclear medicine applications. The source and target organ geometries are taken to be discs parallel to each other. A mathematical model for this geometry is proposed and a disc-disc geometry factor is calculated. Theoretical calculations were carried out with the MIRD (Medical Internal Radiation Dose) method, which is widely used for absorbed dose calculations in nuclear medicine. The absorbed radiation dose for a target organ, the testis, is calculated separately with the disc-disc geometry factor model and with the MIRD model. The results are compared, and the results of the disc-disc geometry factor model are shown to be in acceptable agreement with those of the MIRD model.
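As a purely illustrative companion to this abstract, the sketch below evaluates the classical radiative view factor between two coaxial parallel discs, which gives a feel for how a disc-disc geometry factor varies with separation. The formula and function name are assumptions chosen for illustration, not the geometry factor actually derived in the study:

```python
import math

def disc_disc_factor(r1: float, r2: float, h: float) -> float:
    """Classical view factor between two coaxial parallel discs of radii
    r1 (source) and r2 (target) separated by distance h.
    NOTE: this is the standard radiative-transfer formula, used here only
    as a stand-in for the paper's own disc-disc geometry factor."""
    R1, R2 = r1 / h, r2 / h
    S = 1.0 + (1.0 + R2**2) / R1**2
    return 0.5 * (S - math.sqrt(S**2 - 4.0 * (R2 / R1)**2))

# The factor approaches 1 as the discs close up and falls off with distance.
for h in (0.01, 1.0, 10.0):
    print(f"h = {h}: F = {disc_disc_factor(1.0, 1.0, h):.4f}")
```

For equal unit-radius discs the factor tends to 1 as the separation shrinks and to 0 as it grows, which matches the physical intuition behind any source-to-target geometry factor.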
DEFF Research Database (Denmark)
Tabatabaeipour, Mojtaba; Blanke, Mogens
2014-01-01
In safety-critical systems, the control system is composed of a core control system with a fault detection and isolation scheme together with a repair or recovery strategy. The time that it takes to detect, isolate, and recover from the fault (the fault recovery time) is a critical factor in the safety of a system: it must be guaranteed that the trajectory of a system subject to a fault remains in the region of attraction (ROA) of the post-fault system during this time. This paper proposes a new algorithm to compute the critical fault recovery time for nonlinear systems with polynomial vector fields using sum-of-squares programming. The proposed algorithm is based on computation of the ROA of the recovered system and finite-time stability of the faulty system.
Benchmark experiments to test plutonium and stainless steel cross sections
International Nuclear Information System (INIS)
Jenquin, U.P.; Bierman, S.R.
1978-06-01
Neutronics calculations of physical systems containing fissionable material in various configurations are often necessary to assess criticality safety and economic parameters. Criticality safety of the material must be assured for all configurations in the fuel fabrication, spent fuel reprocessing, and transportation processes. Integral criticality experiments are utilized to evaluate neutron cross sections, test theoretical methods, and validate calculational procedures. The Nuclear Regulatory Commission (NRC) commissioned Battelle, Pacific Northwest Laboratory (PNL) to ascertain the accuracy of the neutron cross sections for the isotopes of plutonium and the constituents of stainless steel and to determine whether improvements can be made in their application to criticality safety analysis. NRC's particular area of interest is the transportation of light-water reactor spent fuel assemblies. The project was divided into two tasks. The first task was to define a set of integral experimental measurements (benchmarks). The second task is to use these benchmarks in neutronics calculations such that the accuracy of ENDF/B-IV plutonium and stainless steel cross sections can be assessed. The results of the second task should help to identify deficiencies in the neutron cross sections. The results of the first task are given.
Energy Technology Data Exchange (ETDEWEB)
L.M. Montierth
2000-09-15
The objective of this calculation is to characterize the nuclear criticality safety concerns associated with the codisposal of the U.S. Department of Energy's (DOE) Shippingport Light Water Breeder Reactor (SP LWBR) Spent Nuclear Fuel (SNF) in a 5-Defense High-Level Waste (5-DHLW) Waste Package (WP), which is to be placed in a Monitored Geologic Repository (MGR). The scope of this calculation is limited to the determination of the effective neutron multiplication factor (k-eff) for intact- and degraded-mode internal configurations of the codisposal WP containing Shippingport LWBR seed-type assemblies. The results of this calculation will be used to evaluate criticality issues and to support the analysis that is planned to demonstrate the viability of the codisposal concept for the MGR. This calculation is associated with the waste package design and was performed in accordance with the DOE SNF Analysis Plan for FY 2000 (see Ref. 22). The document has been prepared in accordance with the Administrative Procedure AP-3.12Q, Calculations (Ref. 23).
Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity
Harbin Li; Steven G. McNulty
2007-01-01
Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL...
Benchmarking of human resources management
Directory of Open Access Journals (Sweden)
David M. Akinnusi
2008-11-01
This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.
Shen, Jun; Piecuch, Piotr
2012-04-14
We have recently suggested the CC(P;Q) methodology that can correct energies obtained in the active-space coupled-cluster (CC) or equation-of-motion (EOM) CC calculations, which recover much of the nondynamical and some dynamical electron correlation effects, for the higher-order, mostly dynamical, correlations missing in the active-space CC/EOMCC considerations. It is shown that one can greatly improve the description of biradical transition states, both in terms of the resulting energy barriers and total energies, by combining the CC approach with singles, doubles, and active-space triples, termed CCSDt, with the CC(P;Q)-style correction due to missing triple excitations defining the CC(t;3) approximation.
Criticality calculations of the HTR-10 pebble-bed reactor with SCALE6/CSAS6 and MCNP5
International Nuclear Information System (INIS)
Wang, Meng-Jen; Sheu, Rong-Jiun; Peir, Jinn-Jer; Liang, Jenq-Horng
2014-01-01
Highlights: • Comparisons of the HTR-10 criticality calculations with SCALE6/CSAS6 and MCNP5 were performed. • The DOUBLEHET unit-cell treatment provides the best k-eff estimation among the PBR criticality calculations using SCALE6. • The continuous-energy SCALE6 calculations present a non-negligible discrepancy with MCNP5 in three PBR cases. - Abstract: HTR-10 is a 10 MWt prototype pebble-bed reactor (PBR) that presents a doubly heterogeneous geometry for neutronics calculations. An appropriate unit-cell treatment for the associated fuel elements is vital for creating problem-dependent multigroup cross sections. Considering four unit-cell options for resonance self-shielding correction in SCALE6, a series of HTR-10 core models were established using the CSAS6 sequence to systematically investigate how they affected the computational accuracy and efficiency of PBR criticality calculations. Three core configurations, which ranged from simplified infinite lattices to a detailed geometry, were examined. Based on the same ENDF/B-VII.0 cross-section library, multigroup results were evaluated by comparing them with continuous-energy SCALE6/CSAS6 and MCNP5 calculations. The comparison indicated that the INFHOMMEDIUM results overestimated the effective multiplication factor (k-eff) by about 2800 pcm, whereas the LATTICECELL and MULTIREGION treatments overestimated k-eff values with similar biases at approximately 470–680 pcm. The DOUBLEHET results attained further improvement, reducing the k-eff overestimation to approximately 280 pcm. The comparison yielded two unexpected problems from using SCALE6/CSAS6 in HTR-10 criticality calculations. In particular, the continuous-energy CSAS6 calculations in this study present a non-negligible discrepancy with MCNP5, potentially causing a k-eff overestimate of approximately 680 pcm. Notably, using a cell-weighted mixture instead of an explicit model of individual TRISO particles in the pebble fuel zone does not shorten the
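The biases quoted in this abstract are in pcm (per cent mille, 1e-5 in k-eff). As a hedged sketch of the arithmetic only (the k-eff values below are invented for illustration, not taken from the paper), the two common conventions for expressing such a bias are:

```python
def dk_pcm(k_calc: float, k_ref: float) -> float:
    """Bias expressed as a plain k-eff difference, in pcm (1 pcm = 1e-5)."""
    return (k_calc - k_ref) * 1e5

def drho_pcm(k_calc: float, k_ref: float) -> float:
    """Bias expressed as a reactivity difference, in pcm; for k-eff near
    unity the two conventions nearly coincide."""
    return (1.0 / k_ref - 1.0 / k_calc) * 1e5

# Hypothetical illustration: a multigroup result 2800 pcm above a
# continuous-energy reference, as for the INFHOMMEDIUM case above.
bias = dk_pcm(1.02800, 1.00000)
```

Whether a code comparison reports Δk or Δρ matters only at the tens-of-pcm level for near-critical systems, but it is worth checking when reproducing published biases.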
Comparison of statistical evaluation of criticality calculations for reactors VENUS-F and ALFRED
Directory of Open Access Journals (Sweden)
Janczyszyn Jerzy
2017-01-01
Limitations of the correct evaluation of k-eff in Monte Carlo calculations raised in the literature, apart from nuclear data uncertainty, need to be addressed more thoroughly. These doubts concern: the proper number of discarded initial cycles, the sufficient number of neutrons in a cycle, and the recognition and handling of the k-eff bias. Calculations were performed to provide more information on these points with the use of the MCB code, solely for fast cores. We present the methods applied and the results, such as: calculation results for the stability of the variance, the relation between the standard deviation reported by MCNP and that obtained from the dispersion of multiple independent k-eff values, and second-order standard deviations obtained from different numbers of grouped results. All results obtained for numbers of discarded initial cycles from 0 to 3000 were analysed, leading to interesting conclusions.
Directory of Open Access Journals (Sweden)
Mingyang Wang
2016-01-01
For underground explosions, a layer of thin to medium thickness near the explosion cavity can be considered a theoretical shell structure. Detonation products transmit the effective energy of the explosives to this shell, which can expand, thus leading to irreversible deformation of the surrounding medium. Based on mass conservation, incompressibility conditions, and boundary conditions, the possible kinematic velocity fields in the plastic zone are established. Based on limit equilibrium theory, this work builds equations of material resistance corresponding to the different possible kinematic velocity fields. Combined with initial and boundary conditions, the equations of motion and material resistance are solved, respectively. It is found that the critical depth of burial is positively related to a dimensionless impact factor, which reflects the characteristics of the explosives and the surrounding medium. Finally, an example is given, which suggests that this method is capable of calculating the critical depth of burial and that the calculated results are consistent with empirical results.
International Nuclear Information System (INIS)
2013-04-01
The IAEA has facilitated an extensive programme that addresses the technical development of advanced gas cooled reactor technology. Included in this programme is the coordinated research project (CRP) on Evaluation of High Temperature Gas Cooled Reactor (HTGR) Performance, which is the focus of this TECDOC. This CRP was established to foster the sharing of research and associated technical information among participating Member States in the ongoing development of the HTGR as a future source of nuclear energy. Within it, computer codes and models were verified through actual test results from operating reactor facilities. The work carried out in the CRP involved both computational and experimental analysis at various facilities in IAEA Member States with a view to verifying computer codes and methods in particular, and to evaluating the performance of HTGRs in general. The IAEA is grateful to China, the Russian Federation and South Africa for providing their facilities and benchmark programmes in support of this CRP.
The comparison of MCNP perturbation technique with MCNP difference method in critical calculation
International Nuclear Information System (INIS)
Liu Bin; Lv Xuefeng; Zhao Wei; Wang Kai; Tu Jing; Ouyang Xiaoping
2010-01-01
For a nuclear fission system, we calculated Δk-eff arising from changes in system material composition by two different approaches: the MCNP perturbation technique and the MCNP difference method. For every material composition change, we made four different runs, each with a different number of cycles or a different number of neutrons per cycle, and then compared the two Δk-eff values obtained by the two approaches. When a material composition change in any particular cell is small compared to the material composition of the whole nuclear fission system, i.e., when the change can be treated as a small perturbation, the Δk-eff results obtained from the MCNP perturbation technique are much quicker, much more efficient, and more reliable than the results from the MCNP difference method. When a material composition change in any particular cell is significant compared to the material composition of the whole system, both the MCNP perturbation technique and the MCNP difference method can give satisfactory results; however, for runs with the same number of cycles and neutrons per cycle, the results obtained from the MCNP perturbation technique are systematically lower than those obtained from the MCNP difference method. To further confirm our results from MCNP4C, we ran the exact same input file in MCNP5; the MCNP5 results matched those from MCNP4C. Caution is needed when using the MCNP perturbation technique to calculate Δk-eff for material composition changes that are large compared to the material composition of the whole system, even when the composition changes of any particular cell still meet the criteria of the MCNP perturbation technique.
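The difference method referred to above is simply the subtraction of two independent eigenvalue runs, and its weakness for small perturbations is statistical: the combined uncertainty can swamp Δk-eff. A hedged sketch of that arithmetic (the k-eff values and sigmas are invented, not from the paper):

```python
import math

def delta_keff(k_ref: float, s_ref: float, k_pert: float, s_pert: float):
    """Difference method: Δk-eff between a perturbed and a reference run,
    with the combined one-sigma statistical uncertainty of two
    independent Monte Carlo results."""
    dk = k_pert - k_ref
    sigma = math.sqrt(s_ref**2 + s_pert**2)
    return dk, sigma

# Hypothetical values: a small composition change whose Δk-eff is of the
# same order as the run-to-run statistics -- exactly the regime where the
# abstract finds the perturbation technique more reliable.
dk, sigma = delta_keff(1.00015, 0.00025, 1.00052, 0.00025)
significant = abs(dk) > 2.0 * sigma   # simple two-sigma test
```

Here the 37 pcm difference fails a two-sigma test against the roughly 35 pcm combined uncertainty, illustrating why a perturbation estimator, which avoids differencing two noisy eigenvalues, is preferred for small changes.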
U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...
Reactor fuel depletion benchmark of TINDER
International Nuclear Information System (INIS)
Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.
2014-01-01
Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work.
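The predictor–corrector feature mentioned in this abstract can be illustrated on the simplest possible depletion problem. This is a minimal sketch under stated assumptions (a single nuclide, a constant one-group reaction rate, invented numbers); real burnup codes apply the same idea to full transmutation chains with a flux recalculation between the predictor and corrector steps:

```python
import math

def deplete_pc(n0: float, sigma_phi: float, dt: float, steps: int) -> float:
    """Predictor-corrector depletion of one nuclide with a constant
    one-group reaction rate: dN/dt = -sigma_phi * N (Heun's method)."""
    n = n0
    for _ in range(steps):
        rate0 = -sigma_phi * n               # predictor: rate at start of step
        n_pred = n + rate0 * dt              # predicted end-of-step density
        rate1 = -sigma_phi * n_pred          # corrector: rate at predicted end
        n = n + 0.5 * (rate0 + rate1) * dt   # advance with the averaged rate
    return n

n_num = deplete_pc(1.0, 0.05, 0.1, 100)  # 100 steps covering 10 time units
n_ref = math.exp(-0.05 * 10.0)           # analytic solution for comparison
```

Averaging the start-of-step and predicted end-of-step rates makes the scheme second-order accurate, which is why a coarse depletion mesh is usually tolerable in predictor–corrector burnup calculations.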
Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity
International Nuclear Information System (INIS)
Li Harbin; McNulty, Steven G.
2007-01-01
Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BCw; 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BCw base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL. - A comprehensive uncertainty analysis, with advanced techniques and the full list and full value ranges of all individual parameters, was used to examine a simple mass balance model and address questions of error partitioning and uncertainty reduction in critical acid load estimates that were not fully answered by previous studies.
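The kind of uncertainty propagation this abstract describes can be sketched with a toy Monte Carlo. The equation form, the parameter ranges, and the units below are illustrative assumptions only; they are not the SMBE or the 17-parameter ranges used in the study:

```python
import random
import statistics

random.seed(42)  # deterministic for reproducibility

def critical_acid_load(bc_w: float, anc_crit: float, dep_bc: float) -> float:
    """Toy simple-mass-balance form: CAL = base-cation weathering
    + base-cation deposition - critical ANC leaching. Illustrative only."""
    return bc_w + dep_bc - anc_crit

samples = []
for _ in range(10_000):
    bc_w = random.uniform(0.2, 2.0)   # keq/ha/yr, assumed range
    anc = random.uniform(0.1, 1.0)    # assumed range
    dep = random.uniform(0.1, 0.5)    # assumed range
    samples.append(critical_acid_load(bc_w, anc, dep))

mean_cal = statistics.fmean(samples)
sd_cal = statistics.stdev(samples)
```

Repeating the sampling while holding one parameter fixed at its mean, and seeing how much the output variance drops, is a simple way to obtain the percentage contributions (e.g. the 49% for BCw) that the abstract reports.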
DEFF Research Database (Denmark)
Borri, Paola; Scaffetti, Stefano; Mørk, Jesper
1999-01-01
The nonlinear gain response of InGaAsP bulk optical amplifiers under ultrafast optical excitation at 1.53 μm is investigated. In particular, the dependence of the gain saturation energy on the pulse duration is measured in the range of pulse durations from 150 fs to 11 ps, for different bias currents and lengths of the amplifier. By comparison with a theoretical model, a critical pulsewidth is inferred below which nonlinear carrier dynamics such as carrier heating and spectral hole burning dominate the gain saturation.
Critical analysis of prostate-specific antigen doubling time calculation methodology.
Svatek, Robert S; Shulman, Michael; Choudhary, Pankaj K; Benaim, Elie
2006-03-01
Prostate-specific antigen (PSA) doubling time (PSADT) has emerged as an important surrogate marker of disease progression and survival in men with prostate carcinoma. The literature is replete with different methods for calculating PSADT. The objective of the current study was to identify the method that best described PSA growth over time and predicted disease-specific survival in men with androgen-independent prostate carcinoma. PSADT was calculated for 122 patients with androgen-independent prostate carcinoma using 2 commonly used methods: best-line fit (BLF) and first and last observations (FLO). PSADT was then also calculated using both a random coefficient linear (RCL) model and a random coefficient quadratic (RCQ) model. Statistical analysis was used to compare the ability of the methods to fit the patients' PSA profiles and to predict disease-specific survival. The RCQ model provided the best fit of the patients' PSA profiles, as determined from the significance of the added parameters in the RCQ equation. PSADT estimates from the FLO method, the RCL model, and the RCQ model were highly significant predictors of disease-specific survival, whereas estimates from the BLF method were not (P = 0.66). PSADT estimates from the RCQ and RCL models provided an improved correlation with disease-specific survival (both R^2 = 0.55) compared with the FLO (R^2 = 0.11) and BLF (R^2 = 0.003) methods. Random coefficient methods provided a more reliable fit of PSA profiles than the other models and were superior to the other available models for predicting disease-specific survival in patients with androgen-independent prostate carcinoma. The authors concluded that consideration should be given to applying the RCL or RCQ models in future assessments of PSADT as a predictive parameter.
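The two baseline methods named in this abstract are straightforward to state. As a hedged sketch (function names and sample data are invented for illustration; the random coefficient models require a mixed-effects fit and are not reproduced here), both reduce to estimating the slope of ln(PSA) versus time:

```python
import math

def psadt_best_line_fit(times, psa):
    """Best-line-fit PSADT: least-squares slope of ln(PSA) vs time over
    all observations; doubling time = ln(2) / slope."""
    n = len(times)
    y = [math.log(p) for p in psa]
    t_mean = sum(times) / n
    y_mean = sum(y) / n
    slope = (sum((t - t_mean) * (yi - y_mean) for t, yi in zip(times, y))
             / sum((t - t_mean) ** 2 for t in times))
    return math.log(2) / slope

def psadt_first_last(times, psa):
    """First-and-last-observations PSADT: uses only the two endpoints."""
    slope = (math.log(psa[-1]) - math.log(psa[0])) / (times[-1] - times[0])
    return math.log(2) / slope

# For perfectly exponential PSA growth the two estimators agree exactly.
t = [0, 3, 6, 9, 12]                          # months (invented)
psa_exp = [2.0 * 2 ** (ti / 6.0) for ti in t] # true doubling time: 6 months
dt_blf = psadt_best_line_fit(t, psa_exp)
dt_flo = psadt_first_last(t, psa_exp)
```

On noisy clinical data the two diverge, which is the motivation for the random coefficient models the study favors.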
Benchmarking semantic web technology
García-Castro, R
2009-01-01
This book addresses the problem of benchmarking Semantic Web technologies: first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:
Verification of the RRC KI code package for neutronic calculations of WWER cores with Gd
International Nuclear Information System (INIS)
Aleshin, S.S.; Bolshagin, S.N.; Lazarenko, A.P.; Markov, A.V.; Pavlov, V.I.; Pavlovitchev, A.M.; Sidorenko, V.D.; Tsvetkov, V.M.
2001-01-01
The report presented is concerned with the verification results of the TVS-M/PERMAK-A/BIPR-7A code package for WWER neutronic calculations as applied to systems containing U-Gd pins. The verification is based on corresponding benchmark calculations, on data from critical experiments, and on operation data obtained from WWER units with Gd. The comparison results are discussed. (Authors)
Geothermal Heat Pump Benchmarking Report
Energy Technology Data Exchange (ETDEWEB)
None
1997-01-17
A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump (GHP) programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.
Jacob, D; Palacios, J J
2011-01-28
We study the performance of two different electrode models in quantum transport calculations based on density functional theory: parametrized Bethe lattices and quasi-one-dimensional wires or nanowires. A detailed account of implementation details in both cases is given. From the systematic study of nanocontacts made of representative metallic elements, we can conclude that the parametrized electrode models represent an excellent compromise between computational cost and electronic structure definition as long as the aim is to compare with experiments where the precise atomic structure of the electrodes is not relevant or defined with precision. The results obtained using parametrized Bethe lattices are essentially similar to the ones obtained with quasi-one-dimensional electrodes for large enough cross-sections of these, adding a natural smearing to the transmission curves that mimics the true nature of polycrystalline electrodes. The latter are more demanding from the computational point of view, but present the advantage of expanding the range of applicability of transport calculations to situations where the electrodes have a well-defined atomic structure, as is the case for carbon nanotubes, graphene nanoribbons, or semiconducting nanowires. All the analysis is done with the help of codes developed by the authors which can be found in the quantum transport toolbox ALACANT and are publicly available.
VVER-1000 MOX core computational benchmark
International Nuclear Information System (INIS)
2006-01-01
The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the
Benchmarking in University Toolbox
Directory of Open Access Journals (Sweden)
Katarzyna Kuźmicz
2015-06-01
Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in a higher education setting. The study was performed on the basis of published reports from benchmarking projects, scientific literature and the experience of the author from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.
Systematic approach to establishing criticality biases
International Nuclear Information System (INIS)
Larson, S.L.
1995-09-01
A systematic approach has been developed to determine benchmark biases and apply those biases to code results to meet the requirements of DOE Order 5480.24 regarding documenting criticality safety margins. Previously, validation of the code against experimental benchmarks to prove reasonable agreement was sufficient. However, DOE Order 5480.24 requires contractors to adhere to the requirements of ANSI/ANS-8.1 and establish subcritical margins. A method was developed to incorporate biases and uncertainties from benchmark calculations into a k_eff value with quantifiable uncertainty. The method produces a 95% confidence level in both the k_eff value of the scenario modeled and the distribution of the k_eff values calculated by the Monte Carlo code. Application of the method to a group of benchmarks modeled using the KENO-Va code and the SCALE 27-group cross sections is also presented
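The bias-and-uncertainty approach described above can be illustrated with a minimal sketch: benchmark criticals are expected to give k_eff = 1, so the mean deviation is the bias, the benchmark scatter and the Monte Carlo statistical uncertainty combine in quadrature, and a confidence factor scales the result. This is only an illustration under simplified assumptions (the confidence factor, the quadrature combination, and the no-credit-for-positive-bias rule are stand-ins, not the cited KENO-Va/SCALE procedure):

```python
import math
import statistics

def biased_keff(k_calc, sigma_calc, benchmark_keffs, confidence_factor=1.96):
    """Apply a benchmark-derived bias and combined uncertainty to a
    calculated k_eff.  Simplified sketch: bias = mean(benchmark) - 1,
    benchmark scatter and Monte Carlo sigma combine in quadrature."""
    bias = statistics.mean(benchmark_keffs) - 1.0     # negative -> code underpredicts
    spread = statistics.stdev(benchmark_keffs)        # benchmark-to-benchmark scatter
    sigma = math.sqrt(spread**2 + sigma_calc**2)      # combined 1-sigma uncertainty
    # Conservative upper estimate: correct for a negative bias only
    # (take no credit if the code overpredicts), then add scaled uncertainty.
    k_upper = k_calc - min(bias, 0.0) + confidence_factor * sigma
    return bias, sigma, k_upper

# Hypothetical benchmark results for illustration
bias, sigma, k_upper = biased_keff(0.93, 0.001, [0.995, 1.002, 0.998, 1.001])
print(f"bias={bias:+.4f}, sigma={sigma:.4f}, k_upper={k_upper:.4f}")
```

The conservative direction matters here: an upper-bound k_eff, not a best estimate, is what gets compared against the subcritical margin.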
Liu, Nehemiah T; Cancio, Leopoldo C; Salinas, Jose; Batchinsky, Andriy I
2014-04-01
Heart-rate complexity (HRC) has been proposed as a new vital sign for critical care medicine. The purpose of this research was to develop a reliable method for determining HRC continuously in real time in critically ill patients using multiple waveform channels that also compensates for noisy and unreliable data. Using simultaneously acquired electrocardiogram (Leads I, II, V) and arterial blood pressure waveforms sampled at 360 Hz from 250 patients (over 375 h of patient data), we evaluated a new data fusion framework for computing HRC in real time. The framework employs two algorithms as well as signal quality indices. HRC was calculated (via the method of sample entropy), and equivalence tests were then performed. Bland-Altman plots and box plots of differences between mean HRC values were also obtained. Finally, HRC differences were analyzed by paired t tests. The gold standard for obtaining true means was manual verification of R waves and subsequent entropy calculations. Equivalence tests between mean HRC values derived from manually verified sequences and those derived from automatically detected peaks showed that the "Fusion" values were the least statistically different from the gold standard. Furthermore, the fusion of waveform sources produced better error density distributions than those derived from individual waveforms. The data fusion framework was shown to provide in real-time a reliable continuously streamed HRC value, derived from multiple waveforms in the presence of noise and artifacts. This approach will be validated and tested for assessment of HRC in critically ill patients.
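The complexity measure used above, sample entropy, can be computed from an R-R interval sequence as follows. This is a plain textbook-style implementation, not the authors' multi-waveform fusion framework; the embedding dimension m = 2 and tolerance r = 0.2·SD are the commonly used defaults, assumed here:

```python
import math

def sample_entropy(series, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r, N) of a 1-D series.
    Counts template pairs of length m and m+1 matching within tolerance r
    (Chebyshev distance, self-matches excluded), returns -ln(A/B)."""
    n = len(series)
    mean = sum(series) / n
    sd = (sum((x - mean) ** 2 for x in series) / (n - 1)) ** 0.5
    r = r_factor * sd

    def count_matches(length):
        templates = [series[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return float('inf')   # undefined for too-short or never-matching series
    return -math.log(a / b)
```

A perfectly periodic series yields SampEn near zero, while irregular series yield higher values; in this framing, reduced heart-rate complexity in critically ill patients corresponds to lower entropy.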
International Nuclear Information System (INIS)
Converse, W.E.; Bierman, S.R.
1979-11-01
Calculations have been performed on water mixtures of oxides and nitrates of 233U, 235U, and 239Pu with chemically similar thorium compounds to determine critical dimensions for simple geometries (sphere, cylinder, and slab). Uranium enrichments calculated were 100%, 20%, 10%, and 5%; plutonium calculations assumed 100% 239Pu. Thorium to uranium or plutonium weight ratios (Th:U or Th:Pu) calculated were 0, 1, 4, and 8. Both bare and full water reflection conditions were calculated. The results of the calculations are plotted showing a critical dimension versus the uranium or plutonium concentration. Plots of k-infinity and material buckling for each material type are also shown
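The link between the material buckling plots mentioned above and a critical dimension can be sketched in one-group theory: for a bare sphere, criticality requires the geometric buckling (π/R)² to equal the material buckling B_m². A minimal illustration (the numerical buckling value is invented; the extrapolation distance is neglected):

```python
import math

def critical_sphere_radius(material_buckling_cm2):
    """Bare critical sphere radius R (cm) from material buckling Bm^2 (cm^-2),
    using the one-group geometric buckling of a sphere, Bg^2 = (pi/R)^2.
    The extrapolation distance is neglected in this sketch."""
    if material_buckling_cm2 <= 0:
        raise ValueError("system cannot be made critical as a bare sphere")
    return math.pi / math.sqrt(material_buckling_cm2)

# Illustrative value: Bm^2 = 0.01 cm^-2  ->  R = pi / 0.1, about 31.4 cm
print(f"{critical_sphere_radius(0.01):.1f} cm")
```

Reflected configurations, as in the full-water-reflection cases above, reduce the critical dimension and require more than this one-line relation.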
Main factors affecting strong ground motion calculations: Critical review and assessment
International Nuclear Information System (INIS)
Mohammadioun, B.; Pecker, A.
1990-01-01
In the interests of guarding lives and property against the effects of earthquakes, building codes are frequently enforced when erecting conventional structures, usually calling for simple, static calculations. Where more vulnerable facilities are involved, the failure of which, or of parts of which, could subject the environment to harmful substances, more sophisticated methods are used to compute or verify their design, often accompanied by safety margins intended to compensate for uncertainties encountered at various stages of the analysis that begins with input seismic data and culminates with an effective anti-seismic design. The forthcoming discussion will deal with what is known of the characteristics of strong ground motion, highly variable according to context, without entering into the problems raised by seismotectonic studies, which actually constitute the first aspect that must be addressed when performing such an analysis. Our conclusion will be devoted to cogent R and D work in this area
VVER-1000 burnup credit benchmark (CB5). New results evaluation
International Nuclear Information System (INIS)
Manolova, M.; Mihaylov, N.; Prodanova, R.
2008-01-01
The validation of depletion codes is an important task in spent fuel management, especially for burnup credit application in criticality safety analysis of spent fuel facilities. Because of the lack of well-documented experimental data for VVER-1000, the validation can be made on the basis of code intercomparisons using numerical benchmark problems. Some years ago a VVER-1000 burnup credit benchmark (CB5) was proposed to the AER research community and preliminary results from three depletion codes were compared. In the paper, new results for the isotopic concentrations of twelve actinides and fifteen fission products calculated by the depletion codes SCALE5.1, WIMS9, SCALE4.4 and NESSEL-NUKO are compared and evaluated. (authors)
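A code intercomparison of this kind typically reports, for each nuclide, the ratio of each code's calculated concentration to a chosen reference code. A minimal sketch of that bookkeeping (the nuclide names and concentration values below are invented for illustration, not CB5 results):

```python
def intercompare(reference, others):
    """Per-nuclide ratio of each code's concentration to the reference code.
    reference: {nuclide: concentration}; others: {code: {nuclide: conc}}."""
    table = {}
    for code, conc in others.items():
        table[code] = {nuc: conc[nuc] / reference[nuc]
                       for nuc in reference if nuc in conc}
    return table

# Hypothetical end-of-life concentrations (arbitrary units)
ref = {"U-235": 8.1e-3, "Pu-239": 5.9e-3, "Cs-137": 1.2e-3}
ratios = intercompare(ref, {"codeB": {"U-235": 8.3e-3, "Pu-239": 5.7e-3}})
print(ratios["codeB"])
```

Ratios close to 1.0 across codes indicate consistency; systematic deviations for particular nuclides point at cross-section or depletion-chain differences.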
Criticality and Its Uncertainty Analysis of Spent Fuel Storage Rack for Research Reactor
International Nuclear Information System (INIS)
Han, Tae Young; Park, Chang Je; Lee, Byung Chul
2011-01-01
For evaluating the criticality safety of a spent fuel storage rack in an open-pool-type research reactor, a permissible upper limit of criticality should be determined. It can be estimated from the criticality upper limit presented by the regulatory guide and the uncertainty of the criticality calculation. In this paper, criticality calculations for a spent fuel storage rack are carried out at various conditions. The calculation uncertainty of the MCNP system is evaluated from the calculation results for the benchmark experiments. Then, the upper limit of criticality is determined from the uncertainties, and the calculated criticality of the spent fuel storage rack is evaluated
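The permissible upper limit described here is commonly formed by reducing a regulatory limit by the validation bias and its uncertainty. A schematic sketch (the 0.95 starting value, the no-credit rule for positive bias, and the specific combination are illustrative assumptions, not the cited regulatory guide):

```python
def criticality_upper_limit(regulatory_limit, bias, bias_uncertainty, margin=0.0):
    """Upper subcritical limit: start from the regulatory limit, credit no
    positive bias, subtract the bias uncertainty and any extra margin."""
    credited_bias = min(bias, 0.0)    # never take credit for code overprediction
    return regulatory_limit + credited_bias - bias_uncertainty - margin

# Hypothetical validation results: code underpredicts by 0.005 +/- 0.008
usl = criticality_upper_limit(0.95, bias=-0.005, bias_uncertainty=0.008)
print(f"{usl:.3f}")   # 0.937
```

Any calculated k_eff for the rack must then fall below this value, not below the raw regulatory limit.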
International Nuclear Information System (INIS)
Carluccio, Thiago; Rossi, Pedro Carlos Russo; Maiorino, Jose Rubens
2011-01-01
The YALINA-Booster is an experimental zero-power accelerator driven system (ADS), which consists of a sub-critical assembly driven by external neutron sources. It has a fast-spectrum booster zone in the center, surrounded by a thermal one. Several experiments have been proposed in the framework of the IAEA Coordinated Research Project (CRP) on ADS. This work shows results obtained by IPEN from modelling and simulating the experiments proposed in the CRP, using the MCNP code. The comparison among our results, the experimental ones and the results obtained by other participants is being done by the CRP coordinators. This collaborative work has an important role in the qualification and improvement of calculational methodologies.
Energy Technology Data Exchange (ETDEWEB)
Balanin, A. L.; Boyarinov, V. F.; Glushkov, E. S.; Zimin, A. A.; Kompaniets, G. V.; Nevinitsa, V. A., E-mail: Neviniza-VA@nrcki.ru; Moroz, N. P.; Fomichenko, P. A.; Timoshinov, A. V. [National Research Center Kurchatov Institute (Russian Federation); Volkov, Yu. N. [National Research Nuclear University MEPhI (Russian Federation)
2016-12-15
The application of experimental information on measured axial distributions of fission reaction rates to the development of 3D numerical models of the ASTRA critical facility, taking into account the azimuthal asymmetry of the assembly simulating an HTGR with an annular core, is substantiated. Owing to the presence of the bottom reflector and the absence of the top reflector, the application of 2D models based on experimentally determined buckling is impossible for calculation of critical assemblies of the ASTRA facility; therefore, an alternative approach based on the application of the extrapolated assembly height is proposed. This approach is exemplified by the numerical analysis of experiments on measurement of the efficiency of control and protection system (CPS) rod mockups.
The influence of calculation method on estimates of cerebral critical closing pressure.
Panerai, R B; Salinet, A S M; Brodie, F G; Robinson, T G
2011-04-01
The critical closing pressure (CrCP) of the cerebral circulation is normally estimated by extrapolation of instantaneous velocity-pressure curves. Different methods of estimation were analysed to assess their robustness and reproducibility in both static and dynamic applications. In ten healthy subjects (mean ± SD age 37.5 ± 9.2 years), continuous recordings of arterial blood pressure (BP, Finapres) and bilateral cerebral blood flow velocity (transcranial Doppler ultrasound, middle cerebral arteries) were obtained at rest. Each session consisted of three separate 5 min recordings. A total of four recording sessions for each subject took place over a 2-week period. A total of 117 recordings contained 34 014 cardiac cycles. For each cardiac cycle, CrCP and resistance-area product (RAP) were estimated using linear regression (LR), principal component analysis (PCA), first harmonic fitting (H1), 2-point systolic/diastolic values (2Ps) and 2-point mean/diastolic values (2Pm). LR and PCA were also applied using only the diastolic phase (LRd, PCAd). The mean values of CrCP and RAP for the entire 5 min recording ('static' condition) were not significantly different for LRd, PCAd, H1 and 2Pm, as opposed to the other methods. The same four methods provided the best results regarding the absence of negative values of CrCP and the coefficient of variation (CV) of the intra-subject standard error of the mean (SEM). On the other hand, 'dynamic' applications, such as the transfer function between mean BP and RAP (coherence and RAP step response), led to a different ranking of methods, but without significant differences in the CV of the SEM of coherence. For the CV of the RAP step response though, LRd and PCAd performed badly. These results suggest that H1 or 2Pm perform better than LR analysis and should be used for the estimation of CrCP and RAP in both static and dynamic applications.
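The linear-regression variant (LR) compared above reduces to fitting velocity against pressure within a cardiac cycle and extrapolating to the pressure at which velocity reaches zero; the inverse slope is the resistance-area product. A self-contained sketch with synthetic, noise-free samples (the numbers are invented, and real cycles need the robustness safeguards the paper discusses):

```python
def crcp_linear_regression(pressure, velocity):
    """Estimate critical closing pressure (CrCP) and resistance-area
    product (RAP) by least-squares fit of V = (P - CrCP) / RAP.
    Returns (CrCP, RAP)."""
    n = len(pressure)
    mp = sum(pressure) / n
    mv = sum(velocity) / n
    sxy = sum((p - mp) * (v - mv) for p, v in zip(pressure, velocity))
    sxx = sum((p - mp) ** 2 for p in pressure)
    slope = sxy / sxx              # = 1 / RAP
    intercept = mv - slope * mp    # velocity at P = 0
    crcp = -intercept / slope      # pressure at which V extrapolates to zero
    return crcp, 1.0 / slope

# Synthetic cycle generated with CrCP = 30 mmHg, RAP = 1.25 mmHg/(cm/s)
p = [60, 70, 80, 90, 100]
v = [(x - 30) / 1.25 for x in p]
print(crcp_linear_regression(p, v))   # recovers roughly (30.0, 1.25)
```

The paper's point is precisely that this naive full-cycle fit can misbehave (e.g. negative CrCP), which motivates the diastolic-only and harmonic-fitting variants.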
Directory of Open Access Journals (Sweden)
R. Fabík
2009-10-01
Full Text Available This paper presents a new model for calculation of the critical strain for initiation of dynamic recrystallization (DRX). The new model reflects the history of forming in the deformation zone during rolling. In this region of restricted deformation, the strain rate curve for the surface of the strip exhibits two peaks. These are the two reasons why the onset of DRX near the surface of the rolled part occurs later during strip rolling than theory predicts. The present model has been used in a program for simulation of forming processes with the aid of FEM, and a comparison between the physical experiment and the mathematical model has been drawn.
International Nuclear Information System (INIS)
Barranco R, F.
2015-01-01
In this thesis, criticality and shielding calculations were made to evaluate the design of a container for dry storage of spent nuclear fuel generated in research reactors. The design of this container was originally proposed by Argentina and Brazil, and the Instituto Nacional de Investigaciones Nucleares (ININ) of Mexico. Additionally, it is proposed to modify the design of this container to store the 120 spent fuel elements that are currently in the pool of the TRIGA Mark III reactor at the Nuclear Center of Mexico, and calculations and analyses are made to verify that the arrangement of these fuel elements remains within subcritical limits and that dose rates to workers and the general public are not exceeded. These calculations are part of the design criteria for safety protection systems in a dry storage system (DSS) specified by the Nuclear Regulatory Commission (NRC) of the United States. To carry out these calculations, the Monte Carlo particle transport codes MCNPX and MCNP5 were used. The initial design (design 1) was intended to store 78 spent fuel elements, with a maximum of 115. The ININ has 120 spent fuel elements and 3 control rods (currently stored in the reactor pool). This would require two containers of the original design, but for economic reasons it was decided to modify the design (design 2) to store them in a single container. Criticality calculations are performed for 78, 115, and 124 fresh fuel elements within the container, for the two arrangements described in Chapter 4, modeling the three-dimensional geometry under both normal operating and accident conditions. These calculations are focused on demonstrating that the container will remain subcritical, that is, that the effective multiplication factor is less than 1, and in particular not greater than 0.95 (as specified by the NRC). Shielding calculations, for both gamma and neutron radiation, were simulated for only two cases: 78 and 124 spent fuel elements within the container. First actinides and fission products generated
Abel, Robert; Wang, Lingle; Mobley, David L; Friesner, Richard A
2017-01-01
Protein-ligand binding is among the most fundamental phenomena underlying all molecular biology, and a greater ability to more accurately and robustly predict the binding free energy of a small molecule ligand for its cognate protein is expected to have vast consequences for improving the efficiency of pharmaceutical drug discovery. We briefly reviewed a number of scientific and technical advances that have enabled alchemical free energy calculations to recently emerge as a preferred approach, and critically considered proper validation and effective use of these techniques. In particular, we characterized a selection bias effect which may be important in prospective free energy calculations, and introduced a strategy to improve the accuracy of the free energy predictions.
International Nuclear Information System (INIS)
Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.
1987-01-01
This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described
Reactor physics tests and benchmark analyses of STACY
International Nuclear Information System (INIS)
Miyoshi, Yoshinori; Umano, Takuya
1996-01-01
The Static Experiment Critical Facility (STACY) in the Nuclear Fuel Cycle Safety Engineering Research Facility (NUCEF) is a solution-type critical facility built to accumulate fundamental criticality data on uranyl nitrate solution, plutonium nitrate solution and their mixture. A series of critical experiments has been performed for 10 wt% enriched uranyl nitrate solution using a cylindrical core tank. In these experiments, systematic data on the critical height, differential reactivity of the fuel solution, kinetic parameters and reactor power were measured while changing the uranium concentration of the fuel solution from 313 gU/l to 225 gU/l. Critical data from the first series of experiments for the basic core are reported in this paper for evaluating the accuracy of criticality safety calculation codes. Benchmark calculations of the neutron multiplication factor k_eff for the critical condition were made using the neutron transport code TWOTRAN in the SRAC system and the continuous-energy Monte Carlo code MCNP 4A with the Japanese evaluated nuclear data library JENDL 3.2. (J.P.N.)
RISKIND verification and benchmark comparisons
Energy Technology Data Exchange (ETDEWEB)
Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.
1997-08-01
This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.
DEFF Research Database (Denmark)
Seabrooke, Leonard; Wigan, Duncan
2015-01-01
Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as important ...
MCNP5 modeling of the IPR-R1 TRIGA reactor for criticality calculation and reactivity determination
International Nuclear Information System (INIS)
Silva, Clarysson A.M. da; Pereira, Claubia; Guerra, Bruno T.; Veloso, Maria Auxiliadora F.; Costa, Antonella L.; Dalle, Hugo M.
2011-01-01
Highlights: ► Two models of the IPR-R1 TRIGA using the MCNP5 code were simulated. ► k_eff values were obtained in several different situations of reactor operation. ► The first model analyzes the criticality and the neutronic flux over the reactor. ► The second model includes the radial and axial neutron flux evaluation with different operation conditions. ► The results present good agreement with respect to the experimental data. - Abstract: The IPR-R1 TRIGA is a research nuclear reactor managed by and located at the Nuclear Technology Development Center (CDTN), a research institute of the Brazilian Nuclear Energy Commission (CNEN). It is mainly used for radioisotope production, scientific experiments, training of nuclear engineers for research and nuclear power plant reactor operation, experiments with materials and minerals, and neutron activation analysis. In this work, criticality calculations and reactivity changes are presented and discussed using two models of the IPR-R1 TRIGA in the MCNP5 code. The first model (Model 1) analyzes the criticality over the reactor. On the other hand, the second model (Model 2) includes the possibility of radial and axial neutron flux evaluation with different operation conditions. The calculated results are compared with experimental data in different situations. For the two models, the standard deviation and relative error presented values of around 4.9 × 10⁻⁴. Both models present good agreement with respect to the experimental data. The goal is to validate models that could be used to determine the neutron flux profiles to optimize the irradiation conditions, as well as to study reactivity insertion experiments and to evaluate the fuel composition.
Energy Technology Data Exchange (ETDEWEB)
Suter, G.W. II [Oak Ridge National Lab., TN (United States); Tsao, C.L. [Duke Univ., Durham, NC (United States). School of the Environment
1996-06-01
This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. Also included are updates of benchmark values where appropriate, new benchmark values, replacement of secondary sources by primary sources, and more complete documentation of the sources and derivation of all values.
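Screening with such benchmarks amounts to comparing a measured water concentration against each alternative benchmark and flagging exceedances via a hazard quotient. A minimal sketch (benchmark names and numerical values below are placeholders, not the report's derived benchmarks):

```python
def screen(concentration_ug_per_l, benchmarks_ug_per_l):
    """Return {benchmark_name: hazard_quotient}.  HQ >= 1 flags the
    chemical as a contaminant of potential concern under that benchmark."""
    return {name: concentration_ug_per_l / value
            for name, value in benchmarks_ug_per_l.items()}

# Hypothetical detection of 12 ug/L screened against two benchmarks
hq = screen(12.0, {"chronic benchmark": 11.0, "secondary chronic value": 5.0})
flagged = [name for name, q in hq.items() if q >= 1.0]
print(flagged)
```

The relative conservatism discussed in the report shows up directly here: the lower the benchmark value, the larger the hazard quotient for the same detection.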
DEFF Research Database (Denmark)
Bogetoft, Peter; Nielsen, Kurt
2005-01-01
We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore alternative improvement strategies.
Benchmark risk analysis models
Ale BJM; Golbach GAM; Goos D; Ham K; Janssen LAM; Shield SR; LSO
2002-01-01
A so-called benchmark exercise was initiated in which the results of five sets of tools available in the Netherlands would be compared. In the benchmark exercise a quantified risk analysis was performed on a (hypothetical) non-existent hazardous establishment located at a randomly chosen location in the Netherlands.
P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel
1998-01-01
Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark, defined here, provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications.
Bogetoft, Peter; Nielsen, Kurt
2002-01-01
We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as non-parametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore alternative improvement strategies. Implementations of both a parametric and a non-parametric model are presented.
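The nonparametric side of such a tool can be illustrated with a free-disposal-hull (FDH) variant, a simpler relative of the DEA models mentioned: a unit's benchmark is the best peer that produces at least its output with less input, and the improvement potential is the implied input reduction. A pure-Python sketch (the data are invented; a full DEA model would instead solve a linear program per unit):

```python
def fdh_input_efficiency(units):
    """units: {name: (input, output)}.  For each unit, efficiency is the
    smallest peer-input / own-input among peers whose output is at least
    as large; a score of 1.0 means the unit is on the frontier."""
    scores = {}
    for name, (x, y) in units.items():
        feasible = [px for pn, (px, py) in units.items() if py >= y]
        scores[name] = min(feasible) / x
    return scores

# Invented data: unit B uses 4 input units where peer A achieves the
# same output with 2, so B's improvement potential is 50%.
data = {"A": (2.0, 10.0), "B": (4.0, 10.0), "C": (5.0, 14.0)}
print(fdh_input_efficiency(data))
```

An interactive front end of the kind described would let the user re-run this against different peer subsets, i.e. search different frontiers.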
Aksenov, Andrey; Malysheva, Anna
2018-03-01
An exact calculation of the heat exchange of evaporative surfaces is possible only if the physical processes of the hydrodynamics of two-phase flows are considered in detail. This task is especially relevant for the design of refrigeration supply systems for high-rise buildings, where powerful refrigeration equipment and branched networks of refrigerants are used. On the basis of experimental studies and a developed mathematical model of the asymmetric dispersed-annular regime of steam-water flow in horizontal steam-generating pipes, a calculation formula has been obtained for determining the boundaries of the zone of improved heat transfer and the critical value of the heat flux density. A new theoretical approach to the problem of the flow structure of a two-phase flow is proposed. The applied method of dissipative characteristics of a two-phase flow in pipes and the principle of a minimum rate of entropy increase in stabilized flows made it possible to obtain formulas that directly reflect the influence of the viscous characteristics of the gas and liquid media on their distribution in the flow. The study showed a significant effect of gravitational forces on the phase distribution over the cross section of the evaporative tubes. At a mass velocity of the two-phase flow below 700 kg/(m²·s), the volume content of the liquid phase near the upper generating line of the tube is almost an order of magnitude lower than near the lower one. A calculation of the heat transfer crisis in horizontal evaporative tubes is obtained. The calculated dependence is in good agreement with the experimental data of the author and a number of foreign researchers. The formula generalizes the experimental data for pipes with diameters of 6-40 mm in the pressure range of 2-7 MPa.
Resolution for the Loviisa benchmark problem
International Nuclear Information System (INIS)
Garcia, C.R.; Quintero, R.; Milian, D.
1992-01-01
In the present paper, the Loviisa benchmark problem for cycles 11 and 8 of reactor blocks 1 and 2 of the Loviisa NPP is calculated. This problem uses low-leakage reload patterns and was posed at the second thematic group of the TIC meeting held in Rheinsberg, GDR, in March 1989. The SPPS-1 coarse-mesh code has been used for the calculations
DEFF Research Database (Denmark)
Bigoni, Daniele; Engsig-Karup, Allan Peter; True, Hans
2012-01-01
This paper describes the results of the application of Uncertainty Quantification methods to a railway vehicle dynamics example. Uncertainty Quantification methods take into account the probability distribution of the system parameters that stems from the parameter tolerances. In this paper the methods are applied to a low-dimensional vehicle dynamical model composed of a two-axle bogie, which is connected to a car body by a lateral linear spring, a lateral damper and a torsional spring. Their characteristics are not deterministically defined, but are defined by probability distributions. The model - but with deterministically defined parameters - was studied in [1], and this article focuses on the calculation of the critical speed of the model when the distribution of the parameters is taken into account. Results of the application of the traditional Monte Carlo sampling ...
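Monte Carlo sampling as used here propagates the parameter distributions through the model and reads off the distribution of the critical speed. A toy sketch in which a made-up closed-form "critical speed" stands in for the bogie model (the formula, the Gaussian distributions and all numbers are stand-ins, not the paper's model):

```python
import random
import statistics

def critical_speed(spring_stiffness, damping):
    """Hypothetical closed-form stand-in for the vehicle model:
    a critical speed that grows with stiffness and damping."""
    return 50.0 + 0.002 * spring_stiffness + 0.5 * damping

def monte_carlo_uq(n=10_000, seed=1):
    """Sample the uncertain parameters, propagate them through the model,
    and return the mean and the empirical 5% quantile of the output."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        k = rng.gauss(40_000.0, 2_000.0)   # lateral spring stiffness (assumed)
        c = rng.gauss(30.0, 3.0)           # lateral damping (assumed)
        samples.append(critical_speed(k, c))
    samples.sort()
    return statistics.mean(samples), samples[int(0.05 * n)]

mean_v, v05 = monte_carlo_uq()
print(f"mean {mean_v:.1f}, 5th percentile {v05:.1f}")
```

For safety assessment, a low quantile of the critical speed distribution is the quantity of interest rather than the mean, which is why the sketch reports both.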
International Nuclear Information System (INIS)
Silva, M
2006-01-01
The Atucha I Nuclear Power Plant (CNA-I) has enough room to store its spent fuel (SF) under water in its two pool houses until the middle of 2015. Before that date there is a need for an interim dry storage system for spent fuel that would make it possible to empty at least one of the pools, whether to keep the plant operating if its useful life is extended, or to be able to empty the reactor core in case of decommissioning. Nucleoelectrica Argentina S.A. (NA-SA) and the Comision Nacional de Energia Atomica (CNEA), due to their joint responsibility in the management of the SF, have proposed interim dry storage systems. These systems have to be evaluated in order to choose one of them by the end of 2006. In this work the Monte Carlo code MCNP was used to perform the criticality and shielding calculations corresponding to the model proposed by CNEA. This model proposes the storage of sealed containers holding 36 or 37 SF in concrete modules. Each container is filled in the pool houses and transported to the module in a transfer cask with lead walls. The results of the criticality calculations indicate that the proposed SF storage solutions widely fulfill the requirements of subcriticality, even in postulated extreme accident situations. Regarding the transfer cask, the SF dose rate estimations provide feedback for the design, aiming at geometry and shielding improvements. Regarding the storage modules, ranges of concrete wall thicknesses are suggested in order to fulfill the dose requirements stated by the Autoridad Regulatoria Nuclear Argentina
Preliminary analysis of the proposed BN-600 benchmark core
International Nuclear Information System (INIS)
John, T.M.
2000-01-01
The Indira Gandhi Centre for Atomic Research is actively involved in the design of fast power reactors in India. The core physics calculations are performed by computer codes that are developed in-house or obtained from other laboratories and suitably modified to meet the computational requirements. The basic philosophy of the core physics calculations is to use diffusion theory codes with 25-group nuclear cross sections. Parameters that are very sensitive to core leakage, like the power distribution at the core-blanket interface, are calculated using transport theory codes under the DSN approximation. All these codes use the finite difference approximation to treat the spatial variation of the neutron flux. Criticality problems with geometries too irregular to be represented by the conventional codes are solved using Monte Carlo methods. These codes and methods have been validated by the analysis of various critical assemblies and calculational benchmarks. The reactor core design procedure at IGCAR consists of: two- and three-dimensional diffusion theory calculations (codes ALCIALMI and 3DB); auxiliary calculations (neutron balance, power distributions, etc.) done by codes developed in-house; transport theory corrections from two-dimensional transport calculations (DOT); irregular geometries treated by the Monte Carlo method (KENO); the cross section data library used is CV2M (25 group)
Benchmark thermal-hydraulic analysis with the Agathe Hex 37-rod bundle
International Nuclear Information System (INIS)
Barroyer, P.; Hudina, M.; Huggenberger, M.
1981-09-01
The prediction performance of different computer codes is compared on the basis of the AGATHE HEX 37-rod bundle experimental results. The compilation of all available calculation results allows a critical assessment of the codes. For the time being, conclusions are drawn as to which codes are best suited for gas-cooled fuel element design purposes. Based on the positive aspects of these cooperative benchmark exercises, an attempt is made to define a computer code verification procedure. (Auth.)
Workshop: Monte Carlo computational performance benchmark - Contributions
International Nuclear Information System (INIS)
Hoogenboom, J.E.; Petrovic, B.; Martin, W.R.; Sutton, T.; Leppaenen, J.; Forget, B.; Romano, P.; Siegel, A.; Hoogenboom, E.; Wang, K.; Li, Z.; She, D.; Liang, J.; Xu, Q.; Qiu, Y.; Yu, J.; Sun, J.; Fan, X.; Yu, G.; Bernard, F.; Cochet, B.; Jinaphanh, A.; Jacquet, O.; Van der Marck, S.; Tramm, J.; Felker, K.; Smith, K.; Horelik, N.; Capellan, N.; Herman, B.
2013-01-01
This series of slides is divided into 3 parts. The first part is dedicated to the presentation of the Monte-Carlo computational performance benchmark (aims, specifications and results). This benchmark aims at performing a full-size Monte Carlo simulation of a PWR core with axial and pin-power distribution. Many different Monte Carlo codes have been used and their results have been compared in terms of computed values and processing speeds. It appears that local power values mostly agree quite well. The first part also includes the presentations of about 10 participants in which they detail their calculations. In the second part, an extension of the benchmark is proposed in order to simulate a more realistic reactor core (for instance non-uniform temperature) and to assess feedback coefficients due to change of some parameters. The third part deals with another benchmark, the BEAVRS benchmark (Benchmark for Evaluation And Validation of Reactor Simulations). BEAVRS is also a full-core PWR benchmark for Monte Carlo simulations
International Nuclear Information System (INIS)
Jinaphanh, A.
2012-01-01
Monte Carlo criticality calculation allows one to estimate the effective multiplication factor as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (high burnup profile, complete reactor core, ...) may induce biased estimations for k_eff or reaction rates. In order to improve the robustness of the iterative Monte Carlo methods, a coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo: the initial guess is automated, the sampling of fission sites is modified, and the random walk of neutrons is altered using splitting and Russian roulette strategies. An automated convergence detection method has been developed. It locates and suppresses the transient due to the initialization in an output series, applied here to k_eff and Shannon entropy. It relies on modeling stationary series by a first-order autoregressive process and applying statistical tests based on a Student bridge statistic. This method can easily be extended to every output of an iterative Monte Carlo. Methods developed in this thesis are tested on different test cases. (author)
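The transient-suppression idea can be illustrated with a simplified sketch. This is not the thesis' exact Student-bridge test: here, for each candidate truncation point, the two halves of the remaining series are compared with a z-test whose sample sizes are deflated by the AR(1) effective-sample-size factor. The series, the test form, and all constants are illustrative assumptions.

```python
import numpy as np

def lag1_autocorr(x):
    """Least-squares estimate of the lag-1 autocorrelation (AR(1) rho)."""
    xc = x - x.mean()
    return float(np.dot(xc[1:], xc[:-1]) / np.dot(xc[:-1], xc[:-1]))

def transient_end(series, zcrit=1.96):
    """Return the first cycle index from which `series` looks stationary.

    Simplified stand-in for the thesis method: for each candidate
    truncation point m, the two halves of the remaining series are
    compared with a two-sample z-test whose sample sizes are deflated
    by the AR(1) effective-sample-size factor (1 - rho) / (1 + rho),
    with rho estimated on the (assumed stationary) second half.
    """
    x = np.asarray(series, dtype=float)
    n = len(x)
    for m in range(n - 20):
        tail = x[m:]
        h = len(tail) // 2
        a, b = tail[:h], tail[h:]
        rho = np.clip(lag1_autocorr(b), -0.99, 0.99)
        deflate = (1.0 - rho) / (1.0 + rho)
        na = max(2.0, h * deflate)
        nb = max(2.0, (len(tail) - h) * deflate)
        se = np.sqrt(a.var(ddof=1) / na + b.var(ddof=1) / nb)
        if abs(a.mean() - b.mean()) / se < zcrit:
            return m
    return n

# synthetic k_eff-per-cycle series: decaying initialization transient + noise
rng = np.random.default_rng(1)
t = np.arange(300)
keff = 1.0 + 0.5 * np.exp(-t / 10.0) + rng.normal(0.0, 0.01, t.size)
m = transient_end(keff)   # cycles [0, m) would be discarded before averaging
```

The same scan can be applied unchanged to a Shannon-entropy-per-cycle series, which is the usual diagnostic for fission-source convergence.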
Toxicological Benchmarks for Wildlife
Energy Technology Data Exchange (ETDEWEB)
Sample, B.E. Opresko, D.M. Suter, G.W.
1993-01-01
Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
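The tier-1 screening rule described above reduces to a simple comparison: retain a chemical as a COPC only when its measured concentration exceeds its benchmark. The sketch below uses hypothetical benchmark and measured concentrations, not values from the report.

```python
# Tier-1 screening sketch: compare measured media concentrations against
# NOAEL-based benchmarks and retain exceedances as COPCs.
# All concentrations below are hypothetical illustration values [mg/L].
benchmarks = {"cadmium": 0.01, "lead": 0.05, "mercury": 0.002}
measured   = {"cadmium": 0.004, "lead": 0.12, "mercury": 0.002}

# strict exceedance: a concentration equal to its benchmark is screened out
copcs = [chem for chem, conc in measured.items() if conc > benchmarks[chem]]
# here only lead exceeds its benchmark and is carried to the baseline tier
```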
International Nuclear Information System (INIS)
Glazunov, V.O.; Rusyaev, R.V.
1989-01-01
The problem of determination of radioactivity critical level in a sample by means of gamma spectrometer with semiconductor detector is studied theoretically. The formula for critical level, which shows that it is necessary to know the background pulse counting rate in order to determine the minimum gamma photon pulse counting rates, is derived. Calculations of critical level for the Chernobyl' conditions in time period from October 1986 till July 1987 are made. 8 refs.; 7 figs.; 17 tabs
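For counting measurements of this kind, the classical decision threshold is Currie's critical level, which depends only on the background count, as the abstract notes. The sketch below shows the standard Currie formula as a generic stand-in; it is not necessarily the exact expression derived in the paper.

```python
import math

def critical_level(background_counts, k_alpha=1.645):
    """Currie-style critical level (decision threshold), in counts, for a
    gamma-spectrometric peak when the background is estimated from a
    measurement of equal duration (paired blank): L_C = k_alpha*sqrt(2B).

    A net count exceeding L_C is declared 'detected' at false-positive
    risk alpha (k_alpha = 1.645 corresponds to alpha = 5%). Generic
    textbook formula, used here for illustration only.
    """
    return k_alpha * math.sqrt(2.0 * background_counts)

# example: 10_000 background counts under the peak region of interest
lc = critical_level(10_000)
```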
Energy Technology Data Exchange (ETDEWEB)
Hannstein, Volker; Sommer, Fabian
2017-05-15
The report summarizes the studies performed and the results obtained in the frame of the phase II benchmarks of the Expert Group on Used Nuclear Fuel (EGUNF) of the Working Party on Nuclear Criticality Safety (WPNCS) of the Nuclear Energy Agency (NEA) of the Organisation for Economic Co-operation and Development (OECD). The studies specified within the benchmarks have been realized to the full extent. The scope of the benchmarks was the comparison of a generic BWR fuel element with gadolinium-containing fuel rods as calculated with several computer codes and cross section libraries by different international working groups and institutions. The computational model used allows the evaluation of the accuracy of fuel rod inventory calculations and of their influence on BWR burnup credit calculations.
Validation of the WIMSD4M cross-section generation code with benchmark results
Energy Technology Data Exchange (ETDEWEB)
Deen, J.R.; Woodruff, W.L. [Argonne National Lab., IL (United States); Leal, L.E. [Oak Ridge National Lab., TN (United States)
1995-01-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
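Validation exercises of this kind are typically summarized as calculated-to-experimental (C/E) ratios of k_eff across the benchmark suite, from which a mean calculational bias and its scatter are derived. A generic sketch with purely illustrative numbers (not results from the paper):

```python
import statistics

# benchmark k_eff results: calculated (C) vs experimental (E) values
# (illustrative numbers only, not taken from the WIMSD4M validation)
cases = {
    "ORNL HEU sphere":   (0.9987, 1.0000),
    "TRX-1 LEU lattice": (0.9962, 1.0000),
    "TRX-2 LEU lattice": (1.0015, 1.0000),
    "LA HEU D2O system": (0.9941, 1.0000),
}

ce = {name: c / e for name, (c, e) in cases.items()}
bias = statistics.mean(ce.values()) - 1.0    # mean calculational bias
spread = statistics.stdev(ce.values())       # scatter across benchmarks

for name, r in ce.items():
    print(f"{name:18s} C/E = {r:.4f}")
print(f"mean bias = {bias * 1e5:+.0f} pcm, std dev = {spread * 1e5:.0f} pcm")
```

The bias and its spread are the inputs a criticality safety analyst would carry into an upper-subcritical-limit determination.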
International Nuclear Information System (INIS)
Antunes, Alberi
2008-01-01
This work presents the physics of source driven systems (ADS). It shows some static and kinetic reactor physics parameters of the subcritical reactor that are important in the evaluation and definition of these systems. The objective is to demonstrate that these parameters differ from those of a critical reactor. Moreover, the work shows the differences observed in the parameters for different calculation models. Two calculation methodologies are presented in this dissertation, that of Gandini and Salvatores and that of Dulla, and some parameters are calculated. The ANISN deterministic transport code is used in the calculations in order to compare these parameters. Some parameters are calculated for a subcritical configuration of the IPEN/MB-01 reactor driven by an external source. The conclusions about the calculations are presented at the end of the work. (author)
Financial Integrity Benchmarks
City of Jackson, Mississippi — This dataset compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings…
Han, Rui; Lu, Xiaoyi
2014-01-01
Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...
Cantinotti, Massimiliano; Giordano, Raffaele; Paterni, Marco; Saura, Daniel; Scalese, Marco; Franchi, Eliana; Assanta, Nadia; Koestenberg, Martin; Dulgheru, Raluca; Sugimoto, Tadafumi; Bernard, Anne; Caballero, Luis; Lancellotti, Patrizio
2017-12-01
There is growing interest in normal adult echocardiographic values, and the introduction of new deformation imaging and 3D parameters poses the issue of normative data. A multitude of nomograms has recently been published; however, the data are often fragmentary and difficult to find, and their strengths/limitations have never been evaluated. The aims of this study were: (I) to provide a review of current echocardiographic nomograms; (II) to generate a tool for easy and fast access to these data. A literature search was conducted accessing the National Library of Medicine using the keywords: 2D/3D echocardiography, strain, left/right ventricle, atrial, mitral/tricuspid valve, aorta, reference values/nomograms/normal values. Adding the following keywords, the results were further refined: range/intervals, myocardial velocity, strain rate and speckle tracking. Forty-one published studies were included. Our study reveals that for several 2D/3D parameters sufficient normative data exist; however, a few limitations still persist. For some basic parameters (i.e., mitral/tricuspid/pulmonary valves, great vessels) and for 3D valves, data are scarce. There is a lack of studies evaluating ethnic differences. Data have generally been expressed as mean values normalised for gender and age instead of computing models incorporating different variables (age/gender/body size) to calculate z scores. To summarize the results, a software tool (Echocardio-Normal Values) that automatically calculates the range of normality for a broad range of echocardiographic measurements according to age/gender/weight/height has been generated. We provide an up-to-date and critical review of the strengths/limitations of current adult echocardiographic nomograms. Furthermore, we generated a software tool for automatic, easy and fast access to multiple echocardiographic normative data.
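The z-score computation the abstract advocates, normalising a measurement against a model of body size rather than a raw mean, can be sketched as follows. The allometric model form and all coefficients here are hypothetical placeholders; real nomograms publish fitted coefficients and a residual (SD) model per measurement and population.

```python
def predicted_dimension(bsa, a=2.0, b=0.5):
    """Allometric nomogram model: predicted size = a * BSA**b [cm].
    a and b are placeholder coefficients for illustration only."""
    return a * bsa ** b

def z_score(measured, predicted, sd):
    """Standardised deviation of a measurement from its nomogram mean."""
    return (measured - predicted) / sd

# example: aortic annulus measured at 2.3 cm in an adult with BSA 1.8 m^2;
# with the placeholder model the predicted value is 2.0 * 1.8**0.5 ~ 2.68 cm
z = z_score(2.3, predicted_dimension(1.8), sd=0.2)
```

A |z| above roughly 2 would flag the measurement as outside the normal range under the assumed model.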
Benchmarking the Netherlands. Benchmarking for growth
International Nuclear Information System (INIS)
2003-01-01
This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy; in other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout
A framework for benchmarking land models
Directory of Open Access Journals (Sweden)
Y. Q. Luo
2012-10-01
Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure the performance of models against a set of defined standards. This paper proposes a benchmarking framework for the evaluation of land model performance and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate the exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and the land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on the development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
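The proposed scoring system, combining data-model mismatches for various processes into one figure of merit, can be sketched as below. The threshold-normalised RMSE score, the variables, and the weights are all illustrative assumptions, not the paper's metric.

```python
import numpy as np

def variable_score(model, obs, threshold_rmse):
    """Score in [0, 1] for one benchmarked variable: 1 at a perfect match,
    0 when the RMSE reaches the a-priori acceptability threshold."""
    rmse = np.sqrt(np.mean((np.asarray(model, float) - np.asarray(obs, float)) ** 2))
    return float(max(0.0, 1.0 - rmse / threshold_rmse))

def overall_score(scores, weights):
    """Weighted combination of per-variable scores across processes/scales."""
    w = np.asarray(weights, float)
    return float(np.dot(scores, w / w.sum()))

# toy example: monthly GPP [gC/m^2/day] and annual runoff [mm/yr] benchmarks
gpp_score = variable_score([2.1, 3.0, 2.7], [2.0, 3.2, 2.5], threshold_rmse=1.0)
run_score = variable_score([410.0], [500.0], threshold_rmse=200.0)
total = overall_score([gpp_score, run_score], weights=[0.6, 0.4])
```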
The impact and applicability of critical experiment evaluations
International Nuclear Information System (INIS)
Brewer, R.
1997-01-01
This paper very briefly describes a project to evaluate previously performed critical experiments. The evaluation is intended for use by criticality safety engineers to verify calculations, and may also be used to identify data which need further investigation. The evaluation process is briefly outlined; the accepted benchmark critical experiments will be used as a standard for verification and validation. The end result of the project will be a comprehensive reference document
Vries, de W.; Schütze, G.; Lofts, S.; Tipping, E.; Meili, M.; Römkens, P.F.A.M.; Groenenberg, J.E.
2005-01-01
This report on heavy metals provides up-to-date methodologies to derive critical loads for the heavy metals cadmium (Cd), lead (Pb) and mercury (Hg) for both terrestrial and aquatic ecosystems. It presents background information to a Manual on Critical Loads for those metals. Focus is given to the
Fast critical experiment data for space reactors
International Nuclear Information System (INIS)
Collins, P.J.; McFarlane, H.F.; Olsen, D.N.; Atkinson, C.A.; Ross, J.R.
1987-01-01
Data from a number of previous critical experiments exist that are relevant to the design concepts being considered for SP-100 and MMW space reactors. Although substantial improvements in experiment techniques have since made some of the measured quantities somewhat suspect, the basic criticality data are still useful in most cases. However, the old experiments require recalculation with modern computational methods and nuclear cross section data before they can be applied to today's designs. Recently, we have calculated about 20 fast benchmark critical experiments with the latest ENDF/B data and modern transport codes. These calculations were undertaken as a part of the planning process for a new series of benchmark experiments aimed at supporting preliminary designs of SP-100 and MMW space reactors
Energy Technology Data Exchange (ETDEWEB)
Behler, Matthias
2016-05-15
Besides radiochemical analysis of irradiated fuel and critical experiments, which have become a well-established basis for the validation of depletion codes and criticality codes respectively, the results of oscillation experiments and the operating conditions of power and research reactors can also provide useful information for the validation of the above-mentioned codes. Based on a literature review, the potential of utilizing oscillation experiment measurements for the validation of criticality codes is estimated. It is found that the reactivity measurements for actinides and fission products within the CERES program on the reactors DIMPLE (Winfrith, UK) and MINERVE (Cadarache, France) can be a valuable addition to the commonly used critical experiments for criticality code validation. However, while there are approaches, there is as yet no generally satisfactory solution for integrating the reactivity measurements into a quantitative bias determination for the neutron multiplication factor of typical application cases, including irradiated spent fuel outside reactor cores, calculated using common criticality codes.
PNNL Information Technology Benchmarking
Energy Technology Data Exchange (ETDEWEB)
DD Hostetler
1999-09-08
Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.
International Nuclear Information System (INIS)
Fabry, A.; McElroy, W.N.; Kellogg, L.S.; Lippincott, E.P.; Grundl, J.A.; Gilliam, D.M.; Hansen, G.E.
1976-01-01
This paper is intended to review and critically discuss microscopic integral cross section measurement and calculation data for fundamental reactor dosimetry benchmark neutron fields. Specifically the review covers the following fundamental benchmarks: the spontaneous californium-252 fission neutron spectrum standard field; the thermal-neutron induced uranium-235 fission neutron spectrum standard field; the (secondary) intermediate-energy standard neutron field at the center of the Mol-ΣΣ, NISUS, and ITN-ΣΣ facilities; the reference neutron field at the center of the Coupled Fast Reactor Measurement Facility; the reference neutron field at the center of the 10% enriched uranium metal, cylindrical, fast critical; the (primary) Intermediate-Energy Standard Neutron Field
Benchmarking the UAF Tsunami Code
Nicolsky, D.; Suleimani, E.; West, D.; Hansen, R.
2008-12-01
We have developed a robust numerical model to simulate propagation and run-up of tsunami waves in the framework of non-linear shallow water theory. A temporal position of the shoreline is calculated using the free-surface moving boundary condition. The numerical code adopts a staggered leapfrog finite-difference scheme to solve the shallow water equations formulated for depth-averaged water fluxes in spherical coordinates. To increase spatial resolution, we construct a series of telescoping embedded grids that focus on areas of interest. For large-scale problems, a parallel version of the algorithm is developed by employing a domain decomposition technique. The developed numerical model is benchmarked in an exhaustive series of tests suggested by NOAA. We conducted analytical and laboratory benchmarking for the cases of solitary wave runup on simple and composite beaches, run-up of a solitary wave on a conical island, and the extreme runup in the Monai Valley, Okushiri Island, Japan, during the 1993 Hokkaido-Nansei-Oki tsunami. Additionally, we field-tested the developed model to simulate the November 15, 2006 Kuril Islands tsunami, and compared the simulated water height to observations at several DART buoys. In all conducted tests we calculated a numerical solution with an accuracy recommended by NOAA standards. In this work we summarize results of numerical benchmarking of the code, its strengths and limits with regards to reproduction of fundamental features of coastal inundation, and also illustrate some possible improvements. We applied the developed model to simulate potential inundation of the city of Seward located in Resurrection Bay, Alaska. To calculate the areal extent of potential inundation, we take into account available near-shore bathymetry and inland topography on a grid of 15 meter resolution. By choosing several scenarios of potential earthquakes, we calculated the maximal areal extent of Seward inundation. As a test to validate our model, we
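The staggered leapfrog idea at the core of such codes can be illustrated with a minimal 1D linearized sketch: surface elevation lives on cell centres, fluxes on cell faces, and the two are updated alternately. The UAF code itself solves the non-linear equations in spherical coordinates with a moving shoreline; none of that is attempted here, and the grid and depth values are arbitrary.

```python
import numpy as np

g = 9.81
n, dx = 200, 1000.0          # number of cells and grid spacing [m]
depth = 50.0                 # uniform still-water depth [m]
c = np.sqrt(g * depth)       # long-wave speed, ~22 m/s
dt = 0.5 * dx / c            # CFL-stable time step (Courant number 0.5)

xi = np.arange(n)
eta = np.exp(-((xi - n / 2.0) ** 2) / 50.0)   # initial sea-surface hump [m]
M = np.zeros(n + 1)          # depth-averaged volume fluxes on staggered faces

mass0 = eta.sum() * dx       # initial displaced volume per unit width
for step in range(200):
    # momentum equation on interior faces (closed boundaries: M = 0)
    M[1:-1] -= g * depth * (dt / dx) * (eta[1:] - eta[:-1])
    # continuity equation on cell centres
    eta -= (dt / dx) * (M[1:] - M[:-1])
```

With closed boundaries the update telescopes, so the displaced volume is conserved to round-off; checking that conservation is a first sanity test before any NOAA-style benchmark.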
DEFF Research Database (Denmark)
Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela
This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that deviations from the benchmark (especially negative ones) can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be close to irreversible…
Helium generation reaction rates for 6Li and 10B in benchmark facilities
International Nuclear Information System (INIS)
Farrar, Harry IV; Oliver, B.M.; Lippincott, E.P.
1980-01-01
The helium generation rates for 10B and 6Li have been measured in two benchmark reactor facilities having neutron spectra similar to those found in a breeder reactor. The irradiations took place in the Coupled Fast Reactivity Measurements Facility (CFRMF) and in the 10% enriched 235U critical assembly, BIG-10. The helium reaction rates were obtained by precise high-sensitivity gas mass spectrometric analyses of the helium content of numerous small samples. Comparison of these reaction rates with other reaction rates measured in the same facilities, and with rates calculated from published cross sections and from best estimates of the neutron spectral shapes, indicates significant discrepancies in the calculated values. Additional irradiations in other benchmark facilities have been undertaken to better determine the energy ranges where the discrepancies lie.
Benchmark experiments to test plutonium and stainless steel cross sections. Topical report
International Nuclear Information System (INIS)
Jenquin, U.P.; Bierman, S.R.
1978-06-01
The Nuclear Regulatory Commission (NRC) commissioned Battelle, Pacific Northwest Laboratory (PNL) to ascertain the accuracy of the neutron cross sections for the isotopes of plutonium and the constituents of stainless steel and determine if improvements can be made in application to criticality safety analysis. NRC's particular area of interest is in the transportation of light-water reactor spent fuel assemblies. The project was divided into two tasks. The first task was to define a set of integral experimental measurements (benchmarks). The second task is to use these benchmarks in neutronics calculations such that the accuracy of ENDF/B-IV plutonium and stainless steel cross sections can be assessed. The results of the first task are given in this report. A set of integral experiments most pertinent to testing the cross sections has been identified and the code input data for calculating each experiment has been developed
Computational benchmark problem for deep penetration in iron
International Nuclear Information System (INIS)
Hendricks, J.S.; Carter, L.L.
1980-01-01
A calculational benchmark problem which is simple to model and easy to interpret is described. The benchmark consists of monoenergetic 2-, 4-, or 40-MeV neutrons normally incident upon a 3-m-thick pure iron slab. Currents, fluxes, and radiation doses are tabulated throughout the slab
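For orientation, the uncollided component of such a normally incident monoenergetic beam follows a simple exponential attenuation law; a minimal sketch (the cross-section value below is an illustrative assumption, not taken from the benchmark specification):

```python
import math

def uncollided_fraction(sigma_t_per_cm, depth_cm):
    """Uncollided fraction of a normally incident monoenergetic beam
    after depth_cm of material: exp(-Sigma_t * x)."""
    return math.exp(-sigma_t_per_cm * depth_cm)

# Illustrative macroscopic total cross section (assumed value, not
# the benchmark's iron data) at 10 cm into the slab.
frac = uncollided_fraction(0.2, 10.0)
assert 0.13 < frac < 0.14  # exp(-2) ~ 0.135
```

The collided (scattered) component, which dominates deep in the slab, is what the transport codes being benchmarked must compute.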
Benchmarking for Best Practice
Zairi, Mohamed
1998-01-01
Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area ...
HPCG Benchmark Technical Specification
Energy Technology Data Exchange (ETDEWEB)
Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)
2013-10-01
The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
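The solver structure that HPCG ranks (conjugate gradient preconditioned by a symmetric Gauss-Seidel sweep) can be sketched as follows. This is a minimal dense-matrix illustration of the algorithm, not the benchmark's reference implementation; the test matrix, tolerance, and iteration cap are arbitrary choices:

```python
import numpy as np

def sym_gauss_seidel(A, r):
    """One symmetric Gauss-Seidel sweep (forward, then backward),
    used as the preconditioner solve M z ~= r, starting from z = 0."""
    n = len(r)
    z = np.zeros(n)
    for i in range(n):            # forward sweep
        z[i] = (r[i] - A[i, :i] @ z[:i] - A[i, i+1:] @ z[i+1:]) / A[i, i]
    for i in reversed(range(n)):  # backward sweep
        z[i] = (r[i] - A[i, :i] @ z[:i] - A[i, i+1:] @ z[i+1:]) / A[i, i]
    return z

def pcg(A, b, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient with the HPCG-style structure."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = sym_gauss_seidel(A, r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = sym_gauss_seidel(A, r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system (1D Laplacian-like tridiagonal matrix).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
assert np.linalg.norm(A @ x - b) < 1e-8
```

The actual benchmark applies this structure to a sparse 27-point-stencil system distributed over MPI ranks, which is what makes it memory-bandwidth- and latency-bound rather than compute-bound like HPL.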
Energy Technology Data Exchange (ETDEWEB)
Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana; Hill, Ian; Gulliford, Jim
2017-02-01
In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, and constitute significantly valuable resources of data supporting past, current, and future research activities. Those valuable assets form the basis for recording, developing, and validating our nuclear methods and integral nuclear data [1]. The loss of these experimental data, which has occurred all too often in recent years, is tragic. The high cost of repeating many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed, under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), to address the challenges not just of data preservation but of evaluation of the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort in a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications [3]. Annually, contributors from around the world continue to collaborate in the evaluation and review of select benchmark experiments for preservation and dissemination. The extensively peer-reviewed integral benchmark data can then be utilized to support nuclear design and safety analysts in validating the analytical tools, methods, and data needed for next
EBR-II Reactor Physics Benchmark Evaluation Report
Energy Technology Data Exchange (ETDEWEB)
Pope, Chad L. [Idaho State Univ., Pocatello, ID (United States); Lum, Edward S [Idaho State Univ., Pocatello, ID (United States); Stewart, Ryan [Idaho State Univ., Pocatello, ID (United States); Byambadorj, Bilguun [Idaho State Univ., Pocatello, ID (United States); Beaulieu, Quinton [Idaho State Univ., Pocatello, ID (United States)
2017-12-28
This report provides a reactor physics benchmark evaluation with associated uncertainty quantification for the critical configuration of the April 1986 Experimental Breeder Reactor II Run 138B core configuration.
Critical assembly of uranium enriched to 10% in uranium-235
International Nuclear Information System (INIS)
Hansen, G.E.; Paxton, H.E.
1979-01-01
Big Ten is described in the detail appropriate for a benchmark critical assembly. Characteristics provided are spectral indexes and a detailed neutron flux spectrum, Rossi-α on a reactivity scale established by positive periods, and reactivity coefficients of a variety of isotopes, including the fissionable materials. The observed characteristics are compared with values calculated with ENDF/B-IV cross sections
International Nuclear Information System (INIS)
Pelloni, S.; Grimm, P.; Mathews, D.; Paratte, J.M.
1989-06-01
In this report the capability of various code systems widely used at PSI (such as WIMS-D, BOXER, and the AARE modules TRAMIX and MICROX-2 in connection with the one-dimensional transport code ONEDANT) and JEF-1 based nuclear data libraries to compute LWR lattices is analysed by comparing results from thermal reactor benchmarks TRX and BAPL with experiment and with previously published values. It is shown that with the JEF-1 evaluation eigenvalues are generally well predicted within 8 mk (1 mk = 0.001) or less by all code systems, and that all methods give reasonable results for the measured reaction rate within or not too far from the experimental uncertainty. This is consistent with previous similar studies. (author) 7 tabs., 36 refs
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius
2006-01-01
… takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As proof of concepts of the benchmark, the paper covers the application …
Benchmarking of workplace performance
van der Voordt, Theo; Jensen, Per Anker
2017-01-01
This paper aims to present a process model of value adding corporate real estate and facilities management and to discuss which indicators can be used to measure and benchmark workplace performance.
In order to add value to the organisation, the work environment has to provide value for
Energy Technology Data Exchange (ETDEWEB)
Pena-Monferrer, C.; Miquel veyrat, A.; Munoz-Cobo, J. L.; Chiva Vicent, S.
2016-08-01
In recent years, owing among other factors to the slowdown of the nuclear industry, investment in the development and validation of CFD codes applied specifically to the problems of the nuclear industry has been seriously hampered. The International Benchmark Exercises (IBE) sponsored by the OECD/NEA have therefore been fundamental to analysing the use of CFD codes in the nuclear industry, because although these codes are mature in many fields, doubts still exist about them in critical aspects of thermohydraulic calculations, even in single-phase scenarios. The Polytechnic University of Valencia (UPV) and the Universitat Jaume I (UJI), sponsored by the Nuclear Safety Council (CSN), have actively participated in all the benchmarks proposed by the NEA, as well as in the expert meetings. In this paper, a summary of that participation in the various IBEs is presented, describing each benchmark itself, the CFD model created for it, and the main conclusions. (Author)
Reevaluation of the Jezebel Benchmark
Energy Technology Data Exchange (ETDEWEB)
Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-03-10
Every nuclear engineering student is familiar with Jezebel, the homogeneous bare sphere of plutonium first assembled at Los Alamos in 1954-1955. The actual Jezebel assembly was neither homogeneous, nor bare, nor spherical; nor was it singular – there were hundreds of Jezebel configurations assembled. The Jezebel benchmark has been reevaluated for the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Logbooks, original drawings, mass accountability statements, internal reports, and published reports have been used to model four actual three-dimensional Jezebel assemblies with high fidelity. Because the documentation available today is often inconsistent, three major assumptions were made regarding plutonium part masses and dimensions. The first was that the assembly masses given in Los Alamos report LA-4208 (1969) were correct, and the second was that the original drawing dimension for the polar height of a certain major part was correct. The third assumption was that a change notice indicated on the original drawing was not actually implemented. This talk will describe these assumptions, the alternatives, and the implications. Since the publication of the 2013 ICSBEP Handbook, the actual masses of the major components have turned up. Our assumption regarding the assembly masses was proven correct, but we had the mass distribution incorrect. Work to incorporate the new information is ongoing, and this talk will describe the latest assessment.
Full sphere hydrodynamic and dynamo benchmarks
Marti, P.
2014-01-26
Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating, and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no-slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element, and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.
Higashi, Hidenori; Oda, Tsuyoshi; Iwai, Yoshio; Arai, Yasuhiko
2004-01-01
A non-equilibrium molecular dynamics simulation was adopted to calculate the diffusion coefficients for a pseudo-binary system of carbon dioxide and for a carbon dioxide + solute system at 308.2 and 318.2 K. The calculated results were compared with the self- and tracer diffusion coefficients calculated by an equilibrium molecular dynamics simulation. The simulated results for the pseudo-binary system of carbon dioxide by the non-equilibrium molecular dynamics simulation are in good agreement ...
Neutronic computational modeling of the ASTRA critical facility using MCNPX
International Nuclear Information System (INIS)
Rodriguez, L. P.; Garcia, C. R.; Milian, D.; Milian, E. E.; Brayner, C.
2015-01-01
The Pebble Bed Very High Temperature Reactor is considered a prominent candidate among Generation IV nuclear energy systems. Nevertheless, the Pebble Bed Very High Temperature Reactor faces an important challenge due to the insufficient validation of the computer codes currently available for use in its design and safety analysis. In this paper, a detailed IAEA computational benchmark, announced in IAEA-TECDOC-1694 in the framework of the Coordinated Research Project 'Evaluation of High Temperature Gas Cooled Reactor (HTGR) Performance', was solved in support of the Generation IV computer code validation effort using the MCNPX ver. 2.6e computational code. IAEA-TECDOC-1694 summarized a set of four calculational benchmark problems performed at the ASTRA critical facility. The benchmark problems include criticality experiments, control rod worth measurements, and reactivity measurements. The ASTRA critical facility at the Kurchatov Institute in Moscow was used to simulate the neutronic behavior of nuclear pebble bed reactors. (Author)
Performance Targets and External Benchmarking
DEFF Research Database (Denmark)
Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.
Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...
Storage-Intensive Supercomputing Benchmark Study
Energy Technology Data Exchange (ETDEWEB)
Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A
2007-10-30
Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared software-only performance with GPU-accelerated performance. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB parallel NAND Flash disk array (Fusion-io). The Fusion system specs are as follows
Energy Technology Data Exchange (ETDEWEB)
DeHart, Mark D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mausolff, Zander [Univ. of Florida, Gainesville, FL (United States); Weems, Zach [Univ. of Florida, Gainesville, FL (United States); Popp, Dustin [Univ. of Florida, Gainesville, FL (United States); Smith, Kristin [Univ. of Florida, Gainesville, FL (United States); Shriver, Forrest [Univ. of Florida, Gainesville, FL (United States); Goluoglu, Sedat [Univ. of Florida, Gainesville, FL (United States); Prince, Zachary [Texas A & M Univ., College Station, TX (United States); Ragusa, Jean [Texas A & M Univ., College Station, TX (United States)
2016-08-01
One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data have shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validation of the transient solution of Rattlesnake using other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considered both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.
International Nuclear Information System (INIS)
Koponen, B.L.; Hampel, V.E.
1982-01-01
This compilation contains 688 complete summaries of papers on nuclear criticality safety as presented at meetings of the American Nuclear Society (ANS). The selected papers contain criticality parameters for fissile materials derived from experiments and calculations, as well as criticality safety analyses for fissile material processing, transport, and storage. The compilation was developed as a component of the Nuclear Criticality Information System (NCIS) now under development at the Lawrence Livermore National Laboratory. The compilation is presented in two volumes: Volume 1 contains a directory to the ANS Transaction volume and page number where each summary was originally published, the author concordance, and the subject concordance derived from the keyphrases in titles. Volume 2 contains-in chronological order-the full-text summaries, reproduced here by permission of the American Nuclear Society from their Transactions, volumes 1-41
Energy Technology Data Exchange (ETDEWEB)
2017-08-01
AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the BoomerAMG solver in the hypre library, a large linear solvers library being developed in the Center for Applied Scientific Computing (CASC) at LLNL, and is very similar to the AMG2013 benchmark with additional optimizations. The driver provided in the benchmark can build various test problems. The default problem is a Laplace-type problem with a 27-point stencil, which can be scaled up and is designed to solve a very large problem. A second problem simulates a time-dependent problem, in which successively various smaller systems are solved.
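A Laplace-type 27-point stencil of the kind the driver builds can be sketched as follows; the centre/neighbour weights (26 and -1) and the zero Dirichlet boundaries are illustrative assumptions, not the benchmark's actual coefficients:

```python
import numpy as np

def apply_27pt(u):
    """Apply a 27-point Laplace-type stencil on a cubic grid:
    centre weight 26, weight -1 for each of the 26 neighbours
    (an assumed, common choice), with zero Dirichlet boundaries
    enforced via a zero-padded copy of the grid."""
    n = u.shape[0]
    up = np.zeros((n + 2, n + 2, n + 2))
    up[1:-1, 1:-1, 1:-1] = u
    v = 26.0 * u
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if (dx, dy, dz) == (0, 0, 0):
                    continue
                v = v - up[1+dx:n+1+dx, 1+dy:n+1+dy, 1+dz:n+1+dz]
    return v

u = np.ones((5, 5, 5))
v = apply_27pt(u)
assert v[2, 2, 2] == 0.0   # interior point: 26 minus 26 unit neighbours
assert v[0, 0, 0] == 19.0  # corner point: only 7 in-domain neighbours
```

The row sum is zero in the interior and positive at the boundary, which is the structure a multigrid solver such as AMG exploits.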
2007-10-01
… See Figure 22 for a comparison of measured waves, linear waves, and non-linear Stokes waves. As shown in Figure 23 for the benchmark data set, the relation of obtained frequency versus desired frequency is almost completely linear; the slight variation at …
KENO-VI Primer: A Primer for Criticality Calculations with SCALE/KENO-VI Using GeeWiz
Energy Technology Data Exchange (ETDEWEB)
Bowman, Stephen M [ORNL
2008-09-01
The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer software system developed at Oak Ridge National Laboratory is widely used and accepted around the world for criticality safety analyses. The well-known KENO-VI three-dimensional Monte Carlo criticality computer code is one of the primary criticality safety analysis tools in SCALE. The KENO-VI primer is designed to help a new user understand and use the SCALE/KENO-VI Monte Carlo code for nuclear criticality safety analyses. It assumes that the user has a college education in a technical field. There is no assumption of familiarity with Monte Carlo codes in general or with SCALE/KENO-VI in particular. The primer is designed to teach by example, with each example illustrating two or three features of SCALE/KENO-VI that are useful in criticality analyses. The primer is based on SCALE 6, which includes the Graphically Enhanced Editing Wizard (GeeWiz) Windows user interface. Each example uses GeeWiz to provide the framework for preparing input data and viewing output results. Starting with a Quickstart section, the primer gives an overview of the basic requirements for SCALE/KENO-VI input and allows the user to quickly run a simple criticality problem with SCALE/KENO-VI. The sections that follow Quickstart include a list of basic objectives at the beginning that identifies the goal of the section and the individual SCALE/KENO-VI features that are covered in detail in the sample problems in that section. Upon completion of the primer, a new user should be comfortable using GeeWiz to set up criticality problems in SCALE/KENO-VI. The primer provides a starting point for the criticality safety analyst who uses SCALE/KENO-VI. Complete descriptions are provided in the SCALE/KENO-VI manual. Although the primer is self-contained, it is intended as a companion volume to the SCALE/KENO-VI documentation. (The SCALE manual is provided on the SCALE installation DVD.) The primer provides specific examples of
Benchmarking Cloud Resources for HEP
Alef, M.; Cordeiro, C.; De Salvo, A.; Di Girolamo, A.; Field, L.; Giordano, D.; Guerri, M.; Schiavi, F. C.; Wiebalck, A.
2017-10-01
In a commercial cloud environment, exhaustive resource profiling is beneficial to cope with the intrinsic variability of the virtualised environment, allowing prompt identification of performance degradation. In the context of its commercial cloud initiatives, CERN has acquired extensive experience in benchmarking commercial cloud resources. Ultimately, this activity provides information on the actual delivered performance of invoiced resources. In this report we discuss the experience acquired and the results collected using several fast benchmark applications adopted by the HEP community. These benchmarks range from open-source benchmarks to specific user applications and synthetic benchmarks. The workflow put in place to collect and analyse performance metrics is also described.
International Nuclear Information System (INIS)
Schenter, R.E.; Oliver, B.M.; Farrar, H. IV
1987-01-01
Spectrum integrated cross sections for 6Li and 10B from five benchmark fast reactor neutron fields are compared with calculated values obtained using the ENDF/B-V cross section files. The benchmark fields include the Coupled Fast Reactivity Measurements Facility (CFRMF) at the Idaho National Engineering Laboratory, the 10% enriched U-235 critical assembly (BIG-10) at Los Alamos National Laboratory, the Sigma Sigma and Fission Cavity fields of the BR-1 reactor at CEN/SCK, and the Intermediate-Energy Standard Neutron Field (ISNF) at the National Bureau of Standards. Results from least-squares analyses using the FERRET computer code to obtain adjusted cross section values and their uncertainties are presented. Input to these calculations includes the above five benchmark data sets. These analyses indicate a need for revision in the ENDF/B-V files for the 10B cross section for energies above 50 keV.
International Nuclear Information System (INIS)
Schenter, R.E.; Oliver, B.M.; Farrar, H. IV.
1986-06-01
Spectrum integrated cross sections for 6Li and 10B from five benchmark fast reactor neutron fields are compared with calculated values obtained using the ENDF/B-V cross section files. The benchmark fields include the Coupled Fast Reactivity Measurements Facility (CFRMF) at the Idaho National Engineering Laboratory, the 10% enriched U-235 critical assembly (BIG-10) at Los Alamos National Laboratory, the Sigma-Sigma and Fission Cavity fields of the BR-1 reactor at CEN/SCK, and the Intermediate-Energy Standard Neutron Field (ISNF) at the National Bureau of Standards. Results from least-squares analyses using the FERRET computer code to obtain adjusted cross section values and their uncertainties are presented. Input to these calculations includes the above five benchmark data sets. These analyses indicate a need for revision in the ENDF/B-V files for the 10B and 6Li cross sections for energies above 50 keV.
Torabi, Korosh; Corti, David S
2013-10-17
In the present paper, we develop a method to calculate the rate of homogeneous bubble nucleation within a superheated L-J liquid based on the (n,v) equilibrium embryo free energy surface introduced in the first paper (DOI: 10.1021/jp404149n). We express the nucleation rate as the product of the concentration of critical nuclei within the metastable liquid phase and the relevant forward rate coefficient. We calculate the forward rate coefficient of the critical nuclei from their average lifetime as determined from MD simulations of a large number of embryo trajectories initiated from the transitional region of the metastable liquid configuration space. Therefore, the proposed rate coefficient does not rely on any predefined reaction coordinate. In our model, the critical nuclei belong to the region of the configuration space where the committor probability is about one-half, guaranteeing the dynamical relevance of the proposed embryos. One novel characteristic of our approach is that we define a limit for the configuration space of the equilibrium metastable phase and do not include the configurations that have zero committor probability in the nucleation free energy surface. Furthermore, in order to take into account the transitional degrees of freedom of the critical nuclei, we develop a simulation-based approach for rigorously mapping the free energy of the (n,v) equilibrium embryos to the concentration of the critical nuclei within the bulk metastable liquid phase.
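In symbols, the factorisation described above can be written as follows (the notation is assumed for illustration and is not taken verbatim from the paper):

```latex
J = n_c \, k_f , \qquad n_c \approx \rho_l \, e^{-\Delta G^{*}/k_B T}
```

where J is the nucleation rate per unit volume, n_c the concentration of critical nuclei in the metastable liquid, k_f the forward rate coefficient estimated from the average lifetime of critical nuclei in the MD trajectories, rho_l the number density of the metastable liquid, and Delta G* the free energy of forming a critical (n,v) embryo on the equilibrium free energy surface.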
A Benchmarking System for Domestic Water Use
Directory of Open Access Journals (Sweden)
Dexter V. L. Hunt
2014-05-01
The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. Set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the roles of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water-use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of the water-use benchmarks is investigated by making changes to user behaviour and technology. The impacts of adopting localised supplies (i.e., rainwater harvesting, RWH, and grey water, GW) and of including "external" gardening demands are investigated. This includes the impacts (in isolation and in combination) of the following: occupancy rate (1 to 4), roof size (12.5 m² to 100 m²), garden size (25 m² to 100 m²) and geographical location (North West, Midlands and South East, UK), with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are noted throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.
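As an illustration of how such a band-rating benchmark could be computed, the sketch below maps per-capita daily consumption to a band. The band thresholds, function names, and the worked household are hypothetical assumptions for illustration, not values taken from the paper or from the CSH.

```python
# Illustrative band-rating sketch; thresholds (litres/person/day) are
# hypothetical, not taken from the paper or the Code for Sustainable Homes.
BANDS = [
    (80.0, "A"),   # at or below 80 L/p/d -> band A
    (100.0, "B"),
    (120.0, "C"),
    (140.0, "D"),
]

def water_band(litres_per_person_per_day: float) -> str:
    """Map per-capita consumption to a benchmark band; worst band is 'E'."""
    for upper_bound, band in BANDS:
        if litres_per_person_per_day <= upper_bound:
            return band
    return "E"

def household_per_capita(total_litres_per_day: float, occupancy: int) -> float:
    """Per-capita demand for a household; an RWH or GW offset would be
    subtracted from the total demand before calling this."""
    return total_litres_per_day / occupancy

# A 4-person household using 380 L/day is 95 L/p/d:
print(water_band(household_per_capita(380.0, 4)))  # -> B
```

Any demand-reducing measure (behavioural or technological) lowers the per-capita figure and can therefore move the household up a band, which is the property the paper's benchmarking system relies on.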
Validation of AMPX-KENO code for criticality analysis under various moderator density condition
International Nuclear Information System (INIS)
Ahn, Joon Gi; Hwang, Hae Ryang; Kim, Hyeong Heon; Lee, Seong Hee
1992-01-01
Nuclear criticality safety analyses shall be performed for storage and handling facilities for fissionable materials, and the calculational method used to determine the effective multiplication factor shall be validated by comparison with appropriate experimental data. Benchmark calculations were performed for the criticality analysis of a new-fuel storage facility using the AMPX-KENO computer code system. The references for the benchmark calculations are critical experiments performed by the Nuclear Safety Department of the French Atomic Energy Commission to study the problems raised by the accidental sprinkling of a mist into a fuel storage facility. The bias and statistical uncertainties of the calculational method that will be applied in the criticality analysis of the new-fuel storage facility were also evaluated.
Energy Technology Data Exchange (ETDEWEB)
Garcia, T.; Angeles, A.; Flores C, J., E-mail: teodoro.garcia@inin.gob.mx [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)
2013-10-15
In this work the nuclear safety conditions of the nuclear fuel warehouse of the TRIGA Mark III reactor of the Instituto Nacional de Investigaciones Nucleares (ININ) were determined, both under normal conditions and in the event of an accident. The warehouse contains standard LEU 8.5/20 fuel elements, a control rod with a follower of the standard LEU 8.5/20 fuel type, LEU 30/20 fuel elements, and the Sur-100 reactor fuel. To verify the subcritical state of the warehouse, the effective multiplication factor (keff) was calculated. The keff calculation was carried out with the code MCNPX. (Author)
Benchmarking Cloud Storage Systems
Wang, Xing
2014-01-01
With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...
Benchmarking Danish Industries
DEFF Research Database (Denmark)
Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette
2003-01-01
This report is based on the survey "Industrial Companies in Denmark - Today and Tomorrow", section IV: Supply Chain Management - Practices and Performance, question number 4.9 on performance assessment. To our knowledge, this survey is unique, as we have not been able to find results from any compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...
International Nuclear Information System (INIS)
Cong Haoxi; Li Qingmin; Xing Jinyuan; Li Jinsong; Chen Qiang
2015-01-01
The prompt extinction of the secondary arc is critical to the single-phase reclosing of AC transmission lines, including half-wavelength power transmission lines. In this paper, a low-voltage physical experimental platform was established and the motion of the secondary arc was recorded by a high-speed camera. It was found that the arcing time of the secondary arc shows a close relationship with its arc length. Through an input and output power-energy analysis of the secondary arc, a new critical-length criterion for the arcing time was proposed. The arc chain model was then adopted to calculate the arcing time with both the traditional and the proposed critical-length criteria, and the simulation results were compared with the experimental data. The study showed that the arcing time calculated from the new critical-length criterion gave more accurate results, which can provide a reliable criterion, in terms of arcing time, for modeling and simulation of the secondary arc in power transmission lines. (paper)
SINBAD: Shielding integral benchmark archive and database
International Nuclear Information System (INIS)
Hunter, H.T.; Ingersoll, D.T.; Roussin, R.W.
1996-01-01
SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity
Directory of Open Access Journals (Sweden)
H. Groessing
2015-02-01
A benchmark study for permeability measurement is presented. Past studies by other research groups, which focused on the reproducibility of 1D permeability measurements, showed high standard deviations of the obtained permeability values (25%), even though a defined test rig with required specifications was used. Within this study, the reproducibility of measurements from a capacitive in-plane permeability testing system was benchmarked by comparing the results of two research sites using this technology. The reproducibility was compared using a glass-fibre woven textile and a carbon-fibre non-crimp fabric (NCF). These two material types were considered because of the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. In order to determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents, with five repetitions each. It was found that the stability and reproducibility of the presented in-plane permeability measurement system is very good in the case of the glass-fibre woven textiles. This is true both for the comparison of the repeated measurements and for the comparison between the two different permeameters. These positive results were confirmed by comparison with permeability values for the same textile obtained with an older-generation permeameter applying the same measurement technology. It was also shown that a correct determination of the grammage and the material density is crucial for a correct correlation of measured permeability values and fibre volume contents.
Benchmark physics tests in the metallic-fuelled assembly ZPPR-15
International Nuclear Information System (INIS)
McFarlane, H.F.; Brumbach, S.B.; Carpenter, S.G.; Collins, P.J.
1987-01-01
Results of the first benchmark physics tests of a metallic-fueled, demonstration-size, liquid metal reactor are reported. A simple, two-zone, conventional cylindrical assembly was built with three distinctly different compositions to represent the stages of the Integral Fast Reactor fuel cycle. Experiments included criticality, control, power distribution, reaction rate ratios, reactivity coefficients, shielding, kinetics and spectrum. Analysis was done with 3-D nodal diffusion calculations and ENDF/B-V.2 cross sections. Predictions of the ZPPR-15 reactor physics parameters agreed sufficiently well with the measured values to justify confidence in design analyses for metallic-fueled LMRs.
International Nuclear Information System (INIS)
Emptaz, A.; Prevot, N.; Dubois, F.; Mahul, P.; Mariat, G.; Jospe, R.; Auboyer, C.; Cuilleron, M.
2005-01-01
Introduction: Acute acalculous cholecystitis (AAC) is a serious disease that is difficult to diagnose in critically ill patients. The aim of the study was to evaluate the diagnostic performance of abdominal ultrasonography (US) and morphine-augmented cholescintigraphy (MC) and to improve the diagnostic strategy in intensive care unit (ICU) patients with suspected AAC. Methods: We retrospectively studied 82 consecutive ICU patients with suspected AAC. US was positive if the triad of gallbladder distension, gallbladder wall thickening and sludge was found. MC was positive if the gallbladder remained non-visualized after morphine injection. In a second step, other scintigraphic interpretation criteria were tested, according to whether the gallbladder was visualized before or after morphine administration. Treatment was decided on the basis of clinical, laboratory and imaging data. Results: The diagnosis of AAC was retained in 34 patients. For the diagnosis of AAC, US and MC had sensitivities of 20.6% and 70.6% and specificities of 95.8% and 100%, respectively. Interpreting the MC as positive if the gallbladder remains non-visualized after morphine, as negative if it appears before, and as non-conclusive if visualized only after, makes it possible to define patients with high (100%), low (7.5%) or intermediate (39%) probability of AAC. Conclusions: MC is better than US for diagnosing AAC in critically ill patients, having in particular an excellent specificity with the classical interpretation criteria. MC should thus be performed in patients at risk for AAC, identified from clinical, laboratory and possibly ultrasound findings. To decrease the false-negative rate of MC, a categorical probability classification is proposed to improve patient care. (author)
Computational shielding benchmarks
International Nuclear Information System (INIS)
The American Nuclear Society Standards Committee 6.2.1 is engaged in the documentation of radiation transport problems and their solutions. The primary objective of this effort is to test computational methods used within the international shielding community. Dissemination of benchmarks will, it is hoped, accomplish several goals: (1) Focus attention on problems whose solutions represent state-of-the-art methodology for representative transport problems of generic interest; (2) Specification of standard problems makes comparisons of alternate computational methods, including use of approximate vs. ''exact'' computer codes, more meaningful; (3) Comparison with experimental data may suggest improvements in computer codes and/or associated data sets; (4) Test reliability of new methods as they are introduced for the solution of specific problems; (5) Verify user ability to apply a given computational method; and (6) Verify status of a computer program being converted for use on a different computer (e.g., CDC vs IBM) or facility
Benchmarking foreign electronics technologies
Energy Technology Data Exchange (ETDEWEB)
Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.
1994-12-01
This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.
ENDF/B-V, LIB-V, and the CSEWG benchmarks
International Nuclear Information System (INIS)
Kidman, R.B.
1981-08-01
A 70-group library, LIB-V, generated with the NJOY processing code from ENDF/B-V, is tested on most of the Cross Section Evaluation Working Group (CSEWG) fast reactor benchmarks. Every experimental measurement reported in the benchmark specifications is compared to both diffusion theory and transport theory calculations. Several comparisons with prior benchmark calculations attempt to assess the effects of data and code improvements
H.B. Robinson-2 pressure vessel benchmark
Energy Technology Data Exchange (ETDEWEB)
Remec, I.; Kam, F.B.K.
1998-02-01
The H. B. Robinson Unit 2 Pressure Vessel Benchmark (HBR-2 benchmark) is described and analyzed in this report. Analysis of the HBR-2 benchmark can be used as partial fulfillment of the requirements for the qualification of the methodology for calculating neutron fluence in pressure vessels, as required by the U.S. Nuclear Regulatory Commission Regulatory Guide DG-1053, Calculational and Dosimetry Methods for Determining Pressure Vessel Neutron Fluence. Section 1 of this report describes the HBR-2 benchmark and provides all the dimensions, material compositions, and neutron source data necessary for the analysis. The measured quantities, to be compared with the calculated values, are the specific activities at the end of fuel cycle 9. The characteristic feature of the HBR-2 benchmark is that it provides measurements on both sides of the pressure vessel: in the surveillance capsule attached to the thermal shield and in the reactor cavity. In section 2, the analysis of the HBR-2 benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed with three multigroup libraries based on ENDF/B-VI: BUGLE-93, SAILOR-95 and BUGLE-96. The average ratio of the calculated-to-measured specific activities (C/M) for the six dosimeters in the surveillance capsule was 0.90 ± 0.04 for all three libraries. The average C/Ms for the cavity dosimeters (without the neptunium dosimeter) were 0.89 ± 0.10, 0.91 ± 0.10, and 0.90 ± 0.09 for the BUGLE-93, SAILOR-95 and BUGLE-96 libraries, respectively. It is expected that agreement of the calculations with the measurements, similar to the agreement obtained in this research, should typically be observed when the discrete-ordinates method and ENDF/B-VI libraries are used for the HBR-2 benchmark analysis.
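The calculated-to-measured (C/M) averaging quoted above (e.g., 0.90 ± 0.04 for the surveillance-capsule dosimeters) can be sketched as below. The dosimeter activity values are invented placeholders, since the report's raw data are not reproduced in this abstract; only the form of the statistic is illustrated.

```python
import statistics

# Hypothetical specific activities for six dosimeters (placeholders;
# the abstract quotes only the averaged C/M ratios, not raw values).
calculated = [1.02e5, 8.7e4, 5.1e4, 3.3e4, 2.9e4, 1.6e4]
measured   = [1.13e5, 9.6e4, 5.7e4, 3.7e4, 3.2e4, 1.8e4]

# C/M ratio per dosimeter, then the mean and sample standard deviation
cm_ratios = [c / m for c, m in zip(calculated, measured)]
mean_cm = statistics.mean(cm_ratios)
std_cm = statistics.stdev(cm_ratios)
print(f"C/M = {mean_cm:.2f} +/- {std_cm:.2f}")
```

A C/M near 1.0 with a small spread indicates that the transport calculation reproduces the dosimetry; a systematic offset (like the ~0.90 reported) points to a bias in the fluence calculation or the libraries.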
Stoneburner, Samuel J.; Shen, Jun; Ajala, Adeayo O.; Piecuch, Piotr; Truhlar, Donald G.; Gagliardi, Laura
2017-10-01
Singlet-triplet gaps in diradical organic π-systems are of interest in many applications. In this study, we calculate them in a series of molecules, including cyclobutadiene and its derivatives and cyclopentadienyl cation, by using correlated participating orbitals within the complete active space (CAS) and restricted active space (RAS) self-consistent field frameworks, followed by second-order perturbation theory (CASPT2 and RASPT2). These calculations are evaluated by comparison with the results of doubly electron-attached (DEA) equation-of-motion (EOM) coupled-cluster (CC) calculations with up to 4-particle-2-hole (4p-2h) excitations. We find active spaces that can accurately reproduce the DEA-EOMCC(4p-2h) data while being small enough to be applicable to larger organic diradicals.
International Nuclear Information System (INIS)
Poullot, G.; Dumont, V.; Anno, J.; Cousinou, P.; Grivot, P.; Girault, E.; Fouillaud, P.; Barbry, F.
2003-01-01
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) group aims to provide the international community with criticality benchmark experiments of certified quality, used to guarantee the qualification of criticality calculation codes. The following have been defined: a structure for classifying experiments, a standard presentation format, and a working structure with evaluation, internal and external checks, and presentation in plenary session. After a favourable opinion from the working group, the synthesis document, called an evaluation, is integrated into the general ICSBEP report. (N.C.)
Ma, Zhichao; Zhao, Hongwei; Du, Xijie; Zhou, Mingxing; Ma, Xiaoxi; Liu, Changyi; Ren, Luquan
2018-03-01
This paper proposes a correction method to accurately evaluate the nanoindentation load-depth (P-h) curves of MEMS double-clamped micro-bridge structures. The critical elastic and plastic deflections of the bent bridge are extracted from the overall elastic-plastic deflection. By subtracting the elastic-plastic deflection of the micro bridge from the total displacement of the Berkovich indenter tip, the effect of the constraint condition (double clamped) on the P-h curve of the micro bridge is corrected. Nanoindentation P-h curves of routine and micro-bridge C11000 Cu specimens are obtained and compared through both finite element analysis and experiments. Meanwhile, cross-sectional profiles along the symmetry axis of the local indentation locations, obtained from the nodal deformations and scanned images of routine and micro-bridge specimens, are also compared and explained. Furthermore, a theoretical model is proposed to analyze the effect of the equivalent flow area induced by the elastic-plastic deflection on the maximum indentation depth; the corrected values of Young's modulus and the maximum and residual depths of the micro-bridge specimens are essentially in agreement with those of the routine fixed specimens.
NASA Software Engineering Benchmarking Study
Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.
2013-01-01
was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 
5
A PWR Thorium Pin Cell Burnup Benchmark
Energy Technology Data Exchange (ETDEWEB)
Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.
2000-05-01
As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2), and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalue and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code vs code differences are analyzed and discussed.
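The code-to-code eigenvalue comparison described above (maximum difference within 2%, average absolute difference under 1%) can be sketched as follows; the burnup-point k-values are illustrative placeholders, not the MOCUP or CASMO-4 benchmark results.

```python
# Illustrative eigenvalue comparison between two codes over a burnup
# history; the k-values are made up for the sketch.
mocup  = [1.152, 1.118, 1.074, 1.031, 0.992]
casmo4 = [1.148, 1.120, 1.069, 1.035, 0.989]

# Absolute percent difference at each burnup point, taking one code as reference
diffs_pct = [100.0 * abs(a - b) / b for a, b in zip(mocup, casmo4)]
max_diff = max(diffs_pct)
avg_diff = sum(diffs_pct) / len(diffs_pct)
print(f"max {max_diff:.2f}%, avg {avg_diff:.2f}%")
```

With the placeholder values the differences stay well inside the 2% maximum and 1% average bounds quoted for the benchmark.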
Burn-up TRIGA Mark II benchmark experiment
International Nuclear Information System (INIS)
Persic, A.; Ravnik, M.; Zagar, T.
1998-01-01
Different reactor codes are used for calculations of reactor parameters. The accuracy of the programs is tested through comparison of the calculated values with experimental results, for which well-defined and accurately measured benchmarks are required. The experimental results of reactivity measurements, fuel element reactivity worth distribution and burn-up measurements are presented in this paper. The experiments were performed with a partly burnt reactor core. The experimental conditions were well defined, so that the results can be used as a burn-up benchmark test case for TRIGA Mark II reactor calculations. (author)
International Nuclear Information System (INIS)
Kikuchi, Yasuyuki; Hasegawa, Akira; Takano, Hideki; Kamei, Takanobu; Hojuyama, Takeshi; Sasaki, Makoto; Seki, Yuji; Zukeran, Atsushi; Otake, Iwao.
1982-02-01
Various benchmark tests were made on JENDL-1. At the first stage, various core-center characteristics were tested for many critical assemblies with a one-dimensional model. At the second stage, the applicability of JENDL-1 was further tested on more sophisticated problems for the MOZART and ZPPR-3 assemblies with a two-dimensional model. It was proved that JENDL-1 predicted various quantities of fast reactors satisfactorily as a whole. However, the following problems were pointed out: 1) There exists a discrepancy of 0.9% in the keff values between the Pu and U cores. 2) The fission rate ratio of 239Pu to 235U is underestimated by 3%. 3) The Doppler reactivity coefficients are overestimated by about 10%. 4) The control rod worths are underestimated by 4%. 5) The fission rates of 235U and 239Pu are underestimated considerably in the outer core and radial blanket regions. 6) The negative sodium void reactivities are overestimated when the sodium is removed from the outer core. As a whole, most of the problems of JENDL-1 seem to be related to the neutron leakage and the neutron spectrum. It was found through further study that most of these problems came from too-small diffusion coefficients and too-large elastic removal cross sections above 100 keV, which might be caused by overestimation of the total and elastic scattering cross sections for structural materials in the unresolved resonance region up to several MeV. (author)
Staff Association
2017-01-01
On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...
EPA's Benchmark Dose Modeling Software
The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods to EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...
Energy Technology Data Exchange (ETDEWEB)
Goluoglu, S.
2001-01-11
Transportation of low-enriched uranium (LEU) and mixed-oxide (MOX) assemblies to and within the VVER-1000-type Balakovo Nuclear Power Plant is investigated. Effective multiplication factors for fresh fuel assemblies on the railroad platform, fresh fuel assemblies in the fuel transportation vehicle, and fresh fuel assemblies in the spent fuel storage pool are calculated. If there is no absorber between the units, the configurations with all MOX assemblies result in higher effective multiplication factors than the configurations with all LEU assemblies when the system is dry. When the system is flooded, the configurations with all LEU assemblies result in higher effective multiplication factors. For normal operating conditions, effective multiplication factors for all configurations are below the presumed upper subcritical limit of 0.95. For an accident condition of a fully loaded fuel transportation vehicle that is flooded with low-density water (possibly from a fire suppression system), the presumed upper subcritical limit is exceeded by configurations containing LEU assemblies.
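The acceptance test implied above (comparing calculated keff values against a presumed upper subcritical limit of 0.95) can be sketched as follows. The 0.95 limit comes from the abstract; the keff values, their uncertainties, and the 2-sigma margin convention are illustrative assumptions.

```python
# Minimal sketch of an upper-subcritical-limit (USL) check; the USL of
# 0.95 is from the abstract, the keff/sigma inputs below are placeholders.
USL = 0.95

def acceptable(keff: float, sigma: float, n_sigma: float = 2.0) -> bool:
    """A configuration passes if keff plus an n-sigma statistical margin
    stays below the USL."""
    return keff + n_sigma * sigma < USL

# Normal operating condition (well below the limit):
print(acceptable(0.921, 0.001))  # -> True
# Flooded low-density-water accident case exceeding the limit:
print(acceptable(0.953, 0.001))  # -> False
```

This is the shape of the conclusion in the abstract: all normal-condition configurations pass, while the low-density-water accident configurations with LEU assemblies do not.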
Benchmarking and Sustainable Transport Policy
DEFF Research Database (Denmark)
Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy
2004-01-01
... in order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for sustainable transport. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all, it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, sustainable transport ... is generally not advised. Several other ways in which benchmarking and policy can support one another are identified in the analysis. This leads to a range of recommended initiatives to exploit the benefits of benchmarking in transport while avoiding some of the lurking pitfalls and dead ends.
Present Status and Extensions of the Monte Carlo Performance Benchmark
Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.
2014-06-01
The NEA Monte Carlo Performance benchmark was started in 2011 with the aim of monitoring, over the years, the ability to perform a full-size Monte Carlo reactor core calculation with detailed power production for each fuel pin, including its axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common compute nodes. On true supercomputers, however, the speedup of parallel calculations keeps increasing up to large numbers of processor cores. More experience is needed with calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark are well suited to further investigations of full-core Monte Carlo calculations, and a need is felt to test issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: evaluating fission-source convergence for a system with a high dominance ratio, coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition are discussed.
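The quoted requirement (about 100 billion histories for roughly 1% statistical accuracy in the small fuel zones) follows from the 1/√N scaling of Monte Carlo statistical uncertainty. A minimal sketch of that extrapolation, with the function name and reference point as stated assumptions:

```python
# 1/sqrt(N) scaling of Monte Carlo statistical accuracy: halving the
# relative standard deviation requires 4x the histories. The reference
# point (1e11 histories for 1% in the small zones) is the benchmark
# observation quoted in the abstract; the helper itself is illustrative.
def histories_needed(target_rel_sd: float,
                     ref_rel_sd: float,
                     ref_histories: float) -> float:
    """Extrapolate required histories assuming rel_sd ~ 1/sqrt(N)."""
    return ref_histories * (ref_rel_sd / target_rel_sd) ** 2

# If 1e11 histories give 1% relative sd, then 0.5% needs 4e11:
print(f"{histories_needed(0.005, 0.01, 1e11):.1e}")
```

The quadratic cost of tighter tallies is why parallel scalability on large processor counts, rather than single-node speed, dominates the benchmark's conclusions.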
BENCHMARK EVALUATION OF THE INITIAL ISOTHERMAL PHYSICS MEASUREMENTS AT THE FAST FLUX TEST FACILITY
Energy Technology Data Exchange (ETDEWEB)
John Darrell Bess
2010-05-01
The benchmark evaluation of the initial isothermal physics tests performed at the Fast Flux Test Facility, in support of Fuel Cycle Research and Development and Generation-IV activities at the Idaho National Laboratory, has been completed. The evaluation was performed using MCNP5 with ENDF/B-VII.0 nuclear data libraries and according to guidelines provided for inclusion in the International Reactor Physics Experiment Evaluation Project Handbook. Results provided include evaluation of the initial fully-loaded core critical, two neutron spectra measurements near the axial core center, 32 reactivity effects measurements (21 control rod worths, two control rod bank worths, six differential control rod worths, two shutdown margins, and one excess reactivity), isothermal temperature coefficient, and low-energy electron and gamma spectra measurements at the core center. All measurements were performed at 400 °F. There was good agreement between the calculated and benchmark values for the fully-loaded core critical eigenvalue, reactivity effects measurements, and isothermal temperature coefficient. General agreement between benchmark experiment measurements and calculated spectra for neutrons and low-energy gammas at the core midplane exists, but calculations of the neutron spectra below the core and the low-energy gamma spectra at core midplane did not agree well. Homogenization of core components may have had a significant impact upon computational assessment of these effects. Future work includes development of a fully-heterogeneous model for comprehensive evaluation. The reactor physics measurement data can be used in nuclear data adjustment and validation of computational methods for advanced fuel cycle and nuclear reactor systems using Liquid Metal Fast Reactor technology.
Detailed Burnup Calculations for Testing Nuclear Data
Leszczynski, F.
2005-05-01
Cross-section data for burnup calculations were tested using some of the main available evaluated nuclear data files (ENDF/B-VI Release 8, JEFF-3.0, JENDL-3.3), on an isotope-by-isotope basis as far as possible. The selected experimental burnup benchmarks are reference cases for LWR and HWR reactors, with analysis of isotopic composition as a function of burnup. For LWRs (H2O-moderated uranium oxide lattices) four benchmarks are included: the ATM-104 NEA burnup credit criticality benchmark, Yankee-Rowe Core V, H.B. Robinson Unit 2, and Turkey Point Unit 3. For HWRs (D2O-moderated uranium oxide cluster lattices), three benchmarks were selected: NPD 19-rod fuel clusters, Pickering 28-rod fuel clusters, and Bruce 37-rod fuel clusters. The isotopes with experimental concentration data included in these benchmarks are Se-79, Sr-90, Tc-99, Ru-106, Sn-126, Sb-125, I-129, Cs-133 to Cs-137, Nd-143, Nd-145, Sm-149, Sm-150, Sm-152, Eu-153 to Eu-155, U-234, U-235, U-238, Np-237, Pu-238 to Pu-242, Am-241 to Am-243, and Cm-242 to Cm-248. Results and analysis of differences between calculated and measured absolute and/or relative concentrations of these isotopes for the seven benchmarks are included in this work.
International Nuclear Information System (INIS)
Dos Santos, Adimir; Siqueira, Paulo de Tarso D.; Andrade e Silva, Graciete Simões; Grant, Carlos; Tarazaga, Ariel E.; Barberis, Claudia
2013-01-01
In 2008 the Atomic Energy National Commission (CNEA) of Argentina and the Brazilian Institute of Energetic and Nuclear Research (IPEN), within the framework of the Argentine-Brazilian Nuclear Energy Agreement (COBEN), included, among many others, the project “Validation and Verification of Calculation Methods Used for Research and Experimental Reactors”. At that time, it was established that the validation was to be performed with models implemented in the deterministic codes HUEMUL and PUMA (cell and reactor codes) developed by CNEA, and with those implemented in MCNP by CNEA and IPEN. The data needed for these validations would correspond to theoretical-experimental reference cases in the IPEN/MB-01 research reactor located in São Paulo, Brazil. On the Argentine side, the staff of the Reactor and Nuclear Power Studies group (SERC) of CNEA performed calculations with deterministic models (HUEMUL-PUMA) and probabilistic methods (MCNP), modeling a great number of physical situations of the reactor that had previously been studied and modeled by members of the Center of Nuclear Engineering of IPEN, whose results were provided in full to CNEA. In this paper, comparisons of calculated and experimental results for temperature coefficients, kinetic parameters, and spatial distributions of fission rates are shown. (author)
Benchmarking biofuels; Biobrandstoffen benchmarken
Energy Technology Data Exchange (ETDEWEB)
Croezen, H.; Kampman, B.; Bergsma, G.
2012-03-15
A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.
Virtual machine performance benchmarking.
Langer, Steve G; French, Todd
2011-10-01
The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
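The kind of probe behind such metrics can be sketched in a few lines. This is an illustrative memory-copy timer, not the authors' actual benchmark suite:

```python
import time

def copy_bandwidth_mb_s(size_mb=64, repeats=3):
    """Crude memory-bandwidth probe: best-of-N time to copy a large
    buffer. Run identically on bare metal and inside the VM, then
    compare the two results to estimate virtualization overhead."""
    buf = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        _ = bytes(buf)                    # one full read + write pass
        best = min(best, time.perf_counter() - t0)
    return size_mb / best                 # MB copied per second

bare = copy_bandwidth_mb_s()
# virtual = copy_bandwidth_mb_s()        # same call, run in the guest
# overhead = 1.0 - virtual / bare
print(f"{bare:.0f} MB/s")
```

The same best-of-N pattern extends to the other metrics (disk, network, integer and floating point): measure on bare metal, repeat in the virtual machine, and report the ratio.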
International Nuclear Information System (INIS)
Bowman, S.M.; Suto, T.
1996-10-01
ANSI/ANS 8.1 requires that calculational methods for away-from-reactor (AFR) criticality safety analyses be validated against experiment. This report summarizes part of the ongoing effort to benchmark AFR criticality analysis methods using selected critical configurations from commercial PWRs. Codes and data in the SCALE-4 code system were used. This volume documents the SCALE system analysis of one reactor critical configuration for North Anna Unit 1 Cycle 5. The KENO V.a criticality calculations for the North Anna 1 Cycle 5 beginning-of-cycle model yielded a k-eff value of 1.0040 ± 0.0005
INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP
Energy Technology Data Exchange (ETDEWEB)
J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori
2007-04-01
The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported to the nuclear data community at the International Conference on Nuclear Data for Science and Technology (ND-2004) in Santa Fe, New Mexico. Since that time, the number and type of integral benchmarks have increased significantly. The ICSBEP Handbook now includes criticality-alarm/shielding and fundamental physics benchmarks in addition to the traditional critical/subcritical benchmark data. Since ND-2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. The IRPhEP is patterned after the ICSBEP but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous measurements in addition to the critical configuration. The status of these two projects is discussed and selected benchmarks are highlighted in this paper.
Benchmarking of energy time series
Energy Technology Data Exchange (ETDEWEB)
Williamson, M.A.
1990-04-01
Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
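As an illustration of the simplest such adjustment, pro-rata distribution (the report's recommended first-differencing approach for coal production is more involved), consider forcing a monthly series to agree with an annual benchmark total:

```python
def prorate_to_benchmark(series, benchmark_total):
    """Pro-rata benchmarking: scale sub-annual values so they sum to
    the annual benchmark, preserving the within-year pattern."""
    scale = benchmark_total / sum(series)
    return [x * scale for x in series]

# Hypothetical monthly survey values (they sum to 1260) benchmarked
# against a more accurate annual total of 1300.
monthly = [80, 90, 100, 110, 120, 130, 130, 120, 110, 100, 90, 80]
adjusted = prorate_to_benchmark(monthly, 1300.0)
print(round(sum(adjusted), 6))   # 1300.0 -- now agrees with the benchmark
```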
Directory of Open Access Journals (Sweden)
Rocchi Federico
2017-01-01
Gadolinium odd-isotope cross sections are crucial in assessing the neutronic performance and safety features of a light water reactor (LWR) core. Accurate evaluations of the neutron capture behavior of gadolinium burnable poisons are necessary for a precise estimation of the economic gain due to the extension of fuel life, the residual reactivity penalty at end of life, and the reactivity peak for partially spent fuel in the criticality safety analysis of spent fuel pools. Nevertheless, the present gadolinium odd-isotope neutron cross sections are somewhat dated, are poorly investigated in the high-sensitivity thermal energy region, and are available with an uncertainty that is too high in comparison with present-day industrial standards and needs. This article shows how the most recent gadolinium cross-section evaluations appear inadequate to provide accurate criticality calculations for a system with gadolinium fuel pins. A sensitivity and uncertainty (S/U) analysis has been performed to investigate the effect of gadolinium odd-isotope nuclear cross-section data on the multiplication factor of some LWR fuel assemblies. The results show the importance of the gadolinium odd isotopes in the criticality evaluation and confirm the need for a re-evaluation of the neutron capture cross sections by means of new experimental measurements to be carried out at the n_TOF facility at CERN.
California commercial building energy benchmarking
Energy Technology Data Exchange (ETDEWEB)
Kinney, Satkartar; Piette, Mary Ann
2003-07-01
Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with ''typical'' and ''best-practice'' benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
HELIOS2: Benchmarking against experiments for hexagonal and square lattices
International Nuclear Information System (INIS)
Simeonov, T.
2009-01-01
HELIOS2 is a 2D transport theory program for fuel burnup and gamma-flux calculation. It solves the neutron and gamma transport equations in a general two-dimensional geometry bounded by a polygon of straight lines. The transport solver may be chosen between the method of collision probabilities (CP) and the method of characteristics (MoC). The former is well known for its successful application in the preparation of cross-section data banks for 3D simulators for all lattice types of WWER, PWR, BWR, AGR, RBMK and CANDU reactors. The latter, MoC, helps in areas where the computational requirements of CP become too large for practical application. The application of HELIOS2 and the method of characteristics to some computationally large benchmarks is presented in this paper. The analysis combines comparisons with measured data from the Hungarian ZR-6 reactor and the JAERI Tank-type Critical Assembly (TCA) to verify and validate HELIOS2 and MoC for WWER assembly imitators; configurations with different absorber types (ZrB2, B4C, Eu2O3 and Gd2O3); and critical configurations with stainless steel in the reflector. Core eigenvalues and reaction rates are compared. With account taken of the uncertainties, the results are generally excellent. Particular attention is given to the effect of an iron radial reflector: comparisons with measurements from TIC and TCA for stainless-steel- and iron-reflected cores are presented, and the reactivity effect calculated by HELIOS2 is in very good agreement with the measurements. (author)
International Nuclear Information System (INIS)
Cruzate, J.A.; Carelli, J.L.
2011-01-01
This work presents a theoretical re-evaluation of a set of original experiments included in the 2009 issue of the International Handbook of Evaluated Criticality Safety Benchmark Experiments as “Concrete Reflected Cylinders of Highly Enriched Solutions of Uranyl Nitrate” (identification number HEU-SOL-THERM-002) [4]. The present evaluation has been made according to the benchmark specifications [4], with added data taken from the original published report [3], but applying a different approach, resulting in a more realistic calculation model. In addition, calculations have been made using the latest version of the MCNPX Monte Carlo code combined with an updated set of cross-section data, the continuous-energy ENDF/B-VI library. This has resulted in a comprehensive model for the given experimental situation. An uncertainty analysis has been made based on the evaluation of experimental data presented in the HEU-SOL-THERM-002 report. Calculations with the present improved physical model have been able to reproduce the criticality of the configurations within 0.5%, in good agreement with experimental data. Results obtained in the analysis of uncertainties are in general agreement with those in the HEU-SOL-THERM-002 benchmark document. Qualitative results from the analyses made in the present work can be extended to similar fissile systems: well-moderated units of 235U solutions reflected with concrete from all directions. The results have confirmed that neutron absorbers, even as impurities, must be taken into account in calculations if at least approximate proportions are known. (authors)
Water Level Superseded Benchmark Sheets
National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....
Benchmark simulation models, quo vadis?
DEFF Research Database (Denmark)
Jeppsson, U.; Alex, J.; Batstone, D. J.
2013-01-01
As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work...
Benchmarking Complications Associated with Esophagectomy
Low, Donald E.; Kuppusamy, Madhan Kumar; Alderson, Derek; Cecconello, Ivan; Chang, Andrew C.; Darling, Gail; Davies, Andrew; D'journo, Xavier Benoit; Gisbertz, Suzanne S.; Griffin, S. Michael; Hardwick, Richard; Hoelscher, Arnulf; Hofstetter, Wayne; Jobe, Blair; Kitagawa, Yuko; Law, Simon; Mariette, Christophe; Maynard, Nick; Morse, Christopher R.; Nafteux, Philippe; Pera, Manuel; Pramesh, C. S.; Puig, Sonia; Reynolds, John V.; Schroeder, Wolfgang; Smithers, Mark; Wijnhoven, B. P. L.
2017-01-01
Utilizing a standardized dataset with specific definitions to prospectively collect international data to provide a benchmark for complications and outcomes associated with esophagectomy. Outcome reporting in oncologic surgery has suffered from the lack of a standardized system for reporting
Integral Benchmark Data for Nuclear Data Testing Through the ICSBEP & IRPhEP
Briggs, J. B.; Bess, J. D.; Gulliford, J.
2014-04-01
The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the nuclear data community at ND2007. Since ND2007, integral benchmark data that are available for nuclear data testing have increased significantly. The status of the ICSBEP and the IRPhEP is discussed and selected benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2007 are highlighted.
Integral Benchmark Data for Nuclear Data Testing Through the ICSBEP & IRPhEP
Energy Technology Data Exchange (ETDEWEB)
J. Blair Briggs; John D. Bess; Jim Gulliford; Ian Hill
2013-10-01
The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the nuclear data community at ND2007. Since ND2007, integral benchmark data that are available for nuclear data testing have increased significantly. The status of the ICSBEP and the IRPhEP is discussed and selected benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2007 are highlighted.
Experiments for IFR fuel criticality in ZPPR-21
International Nuclear Information System (INIS)
Olsen, D.N.; Collins, P.J.; Carpenter, S.G.
1991-01-01
A series of benchmark measurements was made in ZPPR-21 to validate criticality calculations for fuel processing operations for Argonne's Integral Fast Reactor program. Six different mixtures of Pu/U/Zr fuel with a graphite reflector were built and criticality was determined by period measurements. The assemblies were isolated from room return neutrons by a lithium hydride shield. Analysis was done using a fully-detailed model with the VIM Monte Carlo code and ENDF/B-V.2 data. Sensitivity analysis was used to validate the measurements against other benchmark data. A simple RZ model was defined and used with the KENO code. Corrections to the RZ model were provided by the VIM calculations with low statistical uncertainty. (Author)
SPOC Benchmark Case: SNRE Model
Energy Technology Data Exchange (ETDEWEB)
Vishal Patel; Michael Eades; Claude Russel Joyner II
2016-02-01
The Small Nuclear Rocket Engine (SNRE) was modeled in the Center for Space Nuclear Research’s (CSNR) Space Propulsion Optimization Code (SPOC). SPOC aims to create nuclear thermal propulsion (NTP) geometries quickly to perform parametric studies on design spaces of historic and new NTP designs. The SNRE geometry was modeled in SPOC and a critical core with a reasonable amount of criticality margin was found. The fuel, tie-tubes, reflector, and control drum masses were predicted rather well. These are all very important for neutronics calculations so the active reactor geometries created with SPOC can continue to be trusted. Thermal calculations of the average and hot fuel channels agreed very well. The specific impulse calculations used historically and in SPOC disagree so mass flow rates and impulses differed. Modeling peripheral and power balance components that do not affect nuclear characteristics of the core is not a feature of SPOC and as such, these components should continue to be designed using other tools. A full paper detailing the available SNRE data and comparisons with SPOC outputs will be submitted as a follow-up to this abstract.
Analysis of Benchmark 2 results
International Nuclear Information System (INIS)
Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.
1994-01-01
The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performance through our results, especially in the thick-target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of the physical models used in our codes. Thereafter, a scheme for a radioactive-waste-burning system is studied. (authors). 4 refs., 7 figs., 1 tab
Current Reactor Physics Benchmark Activities at the Idaho National Laboratory
Energy Technology Data Exchange (ETDEWEB)
John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark
2011-11-01
The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.
Research on computer systems benchmarking
Smith, Alan Jay (Principal Investigator)
1996-01-01
This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
Neil, Amanda; Pfeffer, Sally; Burnett, Leslie
2013-01-01
This paper details the development of a new type of pathology laboratory productivity unit, the benchmarking complexity unit (BCU). The BCU provides a comparative index of laboratory efficiency, regardless of test mix. It also enables estimation of a measure of how much complex pathology a laboratory performs, and the identification of peer organisations for the purposes of comparison and benchmarking. The BCU is based on the theory that wage rates reflect productivity at the margin. A weighting factor for the ratio of medical to technical staff time was dynamically calculated based on actual participant site data. Given this weighting, a complexity value for each test, at each site, was calculated. The median complexity value (number of BCUs) for that test across all participating sites was taken as its complexity value for the Benchmarking in Pathology Program. The BCU allowed implementation of an unbiased comparison unit and test listing that was found to be a robust indicator of the relative complexity for each test. Employing the BCU data, a number of Key Performance Indicators (KPIs) were developed, including three that address comparative organisational complexity, analytical depth and performance efficiency, respectively. Peer groups were also established using the BCU combined with simple organisational and environmental metrics. The BCU has enabled productivity statistics to be compared between organisations. The BCU corrects for differences in test mix and workload complexity of different organisations and also allows for objective stratification into peer groups.
Benchmark tests for fast and thermal reactor applications
International Nuclear Information System (INIS)
Seki, Yuji
1984-01-01
Integral tests of the JENDL-2 library for fast and thermal reactor applications are reviewed, including relevant analyses of the JUPITER experiments. Criticality and core-center characteristics were tested with one-dimensional models for a total of 27 fast critical assemblies. More sophisticated problems such as reaction rate distributions, control rod worths and sodium void reactivities were tested using two-dimensional models for the MOZART and ZPPR-3 assemblies. The main observations from the fast core benchmark tests are as follows. 1) The criticality is well predicted; the average C/E value is 0.999±0.008 for uranium cores and 0.997±0.005 for plutonium cores. 2) The calculation underpredicts the reaction rate ratio 239Pu(fis)/235U(fis) by 3% and overpredicts 238U(cap)/239Pu(fis) by 6%. These results are consistent with those of the JUPITER analyses. 3) The reaction rate distributions in cores of prototype size are well predicted, within ±3%. In the larger JUPITER cores, however, the C/E value increases with radial distance from the core center, up to 6% at the outer core edge. 4) The prediction of control rod worths is satisfactory; C/E values lie in the range 0.92 to 0.97 with no apparent dependence on 10B enrichment or the number of control rods inserted. A spatial dependence of C/E is also observed in the JUPITER cores. 5) The sodium void reactivity is overpredicted by 30% to 50% toward the positive side. The main observations from the thermal core benchmark tests are as follows. 1) The criticality is well predicted, as in the fast core tests; the average C/E is 0.997±0.003. 2) The calculation overpredicts 238U(fis)/235U(fis) by 3% to 6%, showing the same tendency as in the small and medium size fast assemblies. The 238U(cap)/235U(fis) ratio is well predicted in the thermal cores. The calculated reaction rate ratios of 232Th deviate from the measurements by 10% to 15%. (author)
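Summary statistics of the "0.999±0.008" kind quoted above are simply the mean and standard deviation of calculated-to-experimental ratios over a set of assemblies. A sketch with invented C/E values (not the actual JENDL-2 test data):

```python
from statistics import mean, stdev

# Hypothetical calculated-to-experimental (C/E) eigenvalue ratios for a
# set of critical assemblies; integral tests of a library report such
# sets as mean +/- standard deviation.
ce = [0.991, 1.004, 0.998, 1.006, 0.989, 1.007, 1.000]
print(f"average C/E = {mean(ce):.3f} +/- {stdev(ce):.3f}")
```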
Benchmarking of SIMULATE-3 on engineering workstations
International Nuclear Information System (INIS)
Karlson, C.F.; Reed, M.L.; Webb, J.R.; Elzea, J.D.
1990-01-01
The nuclear fuel management department of Arizona Public Service Company (APS) has evaluated various computer platforms for a departmental engineering and business workstation local area network (LAN). Historically, centralized mainframe computer systems have been utilized for engineering calculations. Increasing usage and the resulting longer response times on the company mainframe system, together with the relative cost differential between a mainframe upgrade and workstation technology, justified the examination of current workstations. A primary concern was the time necessary to turn around routine reactor physics reload and analysis calculations. Computers ranging from a Definicon 68020 processing board in an AT-compatible personal computer up to an IBM 3090 mainframe were benchmarked. The SIMULATE-3 advanced nodal code was selected for benchmarking based on its extensive use in nuclear fuel management. SIMULATE-3 is used at APS for reload scoping, design verification, core follow, and predictions of reactor behavior under nominal conditions and planned reactor maneuvering, such as axial shape control during start-up and shutdown.
Energy Technology Data Exchange (ETDEWEB)
John D. Bess; J. Blair Briggs; Jim Gulliford; Ian Hill
2014-10-01
The International Reactor Physics Experiment Evaluation Project (IRPhEP) is a widely recognized, world-class program. The work of the IRPhEP is documented in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Integral data from the IRPhEP Handbook are used by reactor safety and design, nuclear data, criticality safety, and analytical methods development specialists worldwide to perform necessary validations of their calculational techniques. The IRPhEP Handbook is among the most frequently cited references in the nuclear industry and is expected to be a valuable resource for future decades.
Benchmarking of Electricity Distribution Licensees Operating in Sri Lanka
Directory of Open Access Journals (Sweden)
K. T. M. U. Hemapala
2016-01-01
Electricity sector regulators practice benchmarking of distribution companies to regulate allowed revenue, mainly on the basis of the relative efficiency scores produced by frontier benchmarking techniques. Some of these techniques, for example the Corrected Ordinary Least Squares method and Stochastic Frontier Analysis, use an econometric approach to estimate efficiency scores, while a method like Data Envelopment Analysis uses linear programming. The relative efficiency scores are then used to calculate the efficiency factor (X-factor), a component of the revenue control formula. In the electricity distribution industry in Sri Lanka, the allowed revenue for a particular distribution licensee is calculated according to the revenue control formula specified in the tariff methodology of the Public Utilities Commission of Sri Lanka. This control formula contains the X-factor as well, but its effect has not yet been considered; it has simply been kept at zero, since the utility regulators have carried out no relative benchmarking studies to decide the actual value of the X-factor. This paper focuses on producing a suitable benchmarking methodology by studying prominent benchmarking techniques used in international regulatory regimes and by analyzing their applicability to the Sri Lankan context, where only five Distribution Licensees are operating at present.
Energy Technology Data Exchange (ETDEWEB)
Flach, G.P. (ed.)
1990-12-01
FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss Of Coolant Accident (LOCA). A description of the code is given by Flach et al. (1990). This report provides benchmarking results for the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit (Smith et al., 1990a; 1990b). Individual constitutive relations are benchmarked in Sections 2 through 5 while in Sections 6 and 7 integral code benchmarking results are presented. An overall assessment of FLOWTRAN-TF for its intended use in computing the ECS power limit completes the document.
Spherical harmonic results for the 3D Kobayashi Benchmark suite
International Nuclear Information System (INIS)
Brown, P N; Chang, B; Hanebutte, U R
1999-01-01
Spherical harmonic solutions are presented for the Kobayashi benchmark suite. The results were obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL
Orifici, Adrian C.; Krueger, Ronald
2010-01-01
With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc™ and MD Nastran™. Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach developed previously, which compares results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementation in Marc™ and MD Nastran™ was capable of accurately replicating the benchmark delamination growth results, and that the use of numerical benchmarks offers advantages over benchmarking against experimental and analytical results.
Benchmark experiment on vanadium assembly with D-T neutrons. In-situ measurement
Energy Technology Data Exchange (ETDEWEB)
Maekawa, Fujio; Kasugai, Yoshimi; Konno, Chikara; Wada, Masayuki; Oyama, Yukio; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Murata, Isao; Kokooo; Takahashi, Akito
1998-03-01
Fusion neutronics benchmark experimental data on vanadium were obtained for neutrons over almost the entire energy range, as well as for secondary gamma-rays. Benchmark calculations for the experiment were performed to investigate the validity of recent nuclear data files, i.e., JENDL Fusion File, FENDL/E-1.0 and EFF-3. (author)
Homogeneous fast reactor benchmark testing of CENDL-2 and ENDF/B-6
International Nuclear Information System (INIS)
Liu Guisheng
1995-11-01
How to choose a correct weighting spectrum for producing multigroup constants for fast reactor benchmark calculations has been studied. A correct weighting option yields satisfactory results for k-eff and central reaction rate ratios in nine fast reactor benchmark tests of CENDL-2 and ENDF/B-6. (author). 8 refs, 2 figs, 4 tabs
Homogeneous fast reactor benchmark testing of CENDL-2 and ENDF/B-6
International Nuclear Information System (INIS)
Liu Guisheng
1995-01-01
How to choose a correct weighting spectrum for producing multigroup constants for fast reactor benchmark calculations has been studied. A correct weighting option yields satisfactory results for k-eff and central reaction rate ratios in nine fast reactor benchmark tests of CENDL-2 and ENDF/B-6. (4 tabs., 2 figs.)
Benchmark of a Cubieboard cluster
Schnepf, M. J.; Gudu, D.; Rische, B.; Fischer, M.; Jung, C.; Hardt, M.
2015-12-01
We built a cluster of ARM-based Cubieboard2 boards, each of which has a SATA interface to connect a hard drive. This cluster was set up as a storage system using Ceph and as a compute cluster for high energy physics analyses. To study the performance in these applications, we ran two benchmarks on this cluster. We also checked the energy efficiency of the cluster using the same benchmarks. The performance and energy efficiency of our cluster were compared with a network-attached storage (NAS) and with a desktop PC.
Benchmark Analysis of Subcritical Noise Measurements on a Nickel-Reflected Plutonium Metal Sphere
Energy Technology Data Exchange (ETDEWEB)
John D. Bess; Jesson Hutchinson
2009-09-01
Subcritical experiments using californium source-driven noise analysis (CSDNA) and Feynman variance-to-mean methods were performed with an alpha-phase plutonium sphere reflected by nickel shells, up to a maximum thickness of 7.62 cm. Both methods provide means of determining the subcritical multiplication of a system containing nuclear material. A benchmark analysis of the experiments was performed for inclusion in the 2010 edition of the International Handbook of Evaluated Criticality Safety Benchmark Experiments. Benchmark models have been developed that represent these subcritical experiments. An analysis of the computed eigenvalues and the uncertainty in the experiment and methods was performed. The eigenvalues computed using the CSDNA method were very close to those calculated using MCNP5; however, computed eigenvalues are used in the analysis of the CSDNA method. Independent calculations using KENO-VI provided similar eigenvalues to those determined using the CSDNA method and MCNP5. A slight trend with increasing nickel-reflector thickness was seen when comparing MCNP5 and KENO-VI results. For the 1.27-cm-thick configuration the MCNP eigenvalue was approximately 300 pcm greater. The calculated KENO eigenvalue was about 300 pcm greater for the 7.62-cm-thick configuration. The calculated results were approximately the same for a 5-cm-thick shell. The eigenvalues determined using the Feynman method are up to approximately 2.5% lower than those determined using either the CSDNA method or the Monte Carlo codes. The uncertainty in the results from either method was not large enough to account for the bias between the two experimental methods. An ongoing investigation is being performed to assess what potential uncertainties and/or biases exist that have yet to be properly accounted for. The dominant uncertainty in the CSDNA analysis was the uncertainty in selecting a neutron cross-section library for performing the analysis of the data. The uncertainty in the
Calculations on Noncovalent Interactions and Databases of Benchmark Interaction Energies
Czech Academy of Sciences Publication Activity Database
Hobza, Pavel
2012-01-01
Roč. 45, č. 4 (2012), s. 663-672 ISSN 0001-4842 R&D Projects: GA ČR GBP208/12/G016 Grant - others:European Social Fund(XE) CZ.1.05/2.1.00/03.0058 Institutional research plan: CEZ:AV0Z40550506 Keywords : non-covalent interactions * covalent interactions * quantum chemical approach Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 20.833, year: 2012
Calculation of benchmarks with a shear beam model
Hendriks, M.A.N.; Boer, A.; Rots, J.G.; Ferreira, D.
2015-01-01
Fiber models for beam and shell elements allow for relatively rapid finite element analysis of concrete structures and structural elements. This project aims at the development of the formulation of such elements and a pilot implementation. Standard nonlinear fiber beam formulations do not account
IRPhEP-handbook, International Handbook of Evaluated Reactor Physics Benchmark Experiments
International Nuclear Information System (INIS)
Sartori, Enrico; Blair Briggs, J.
2008-01-01
1 - Description: The purpose of the International Reactor Physics Experiment Evaluation Project (IRPhEP) is to provide an extensively peer-reviewed set of reactor physics-related integral data that can be used by reactor designers and safety analysts to validate the analytical tools used to design next-generation reactors and establish the safety basis for operation of these reactors. This work of the IRPhEP is formally documented in the 'International Handbook of Evaluated Reactor Physics Benchmark Experiments,' a single source of verified and extensively peer-reviewed reactor physics benchmark measurement data. The IRPhE Handbook is available on DVD. You may request a DVD by completing the DVD Request Form available at: http://irphep.inl.gov/handbook/hbrequest.shtml The evaluation process entails the following steps: 1. Identify a comprehensive set of reactor physics experimental measurement data, 2. Evaluate the data and quantify overall uncertainties through various types of sensitivity analysis to the extent possible, verify the data by reviewing original and subsequently revised documentation, and by talking with the experimenters or individuals who are familiar with the experimental facility, 3. Compile the data into a standardized format, 4. Perform calculations of each experiment with standard reactor physics codes where it would add information, 5. Formally document the work into a single source of verified and peer-reviewed reactor physics benchmark measurement data. The International Handbook of Evaluated Reactor Physics Benchmark Experiments contains reactor physics benchmark specifications that have been derived from experiments that were performed at various nuclear experimental facilities around the world. The benchmark specifications are intended for use by reactor physics personnel to validate calculational techniques. The 2008 Edition of the International Handbook of Evaluated Reactor Physics Experiments contains data from 25 different
Cowdery, E.; Dietze, M.
2017-12-01
As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentration are highly variable and contain a considerable amount of uncertainty. Benchmarking model predictions against data is necessary to assess their ability to replicate observed patterns, but also to identify and evaluate the assumptions causing inter-model differences. We have implemented a novel benchmarking workflow as part of the Predictive Ecosystem Analyzer (PEcAn) that is automated, repeatable, and generalized to incorporate different sites and ecological models. Building on the recent Free-Air CO2 Enrichment Model Data Synthesis (FACE-MDS) project, we used observational data from the FACE experiments to test this flexible, extensible benchmarking approach aimed at providing repeatable tests of model process representation that can be performed quickly and frequently. Model performance assessments are often limited to traditional residual error analysis; however, this can result in a loss of critical information. Models that fail tests of relative measures of fit may still perform well under measures of absolute fit and mathematical similarity. This implies that models that are discounted as poor predictors of ecological productivity may still be capturing important patterns. Conversely, models that have been found to be good predictors of productivity may be hiding error in their sub-processes that results in the right answers for the wrong reasons. Our suite of tests has not only highlighted process-based sources of uncertainty in model productivity calculations, it has also quantified the patterns and scale of this error. Combining these findings with PEcAn's model sensitivity analysis and variance decomposition strengthens our ability to identify which processes
Application of an integrated PC-based neutronics code system to criticality safety
International Nuclear Information System (INIS)
Briggs, J.B.; Nigg, D.W.
1991-01-01
An integrated system of neutronics and radiation transport software suitable for operation in an IBM PC-class environment has been under development at the Idaho National Engineering Laboratory (INEL) for the past four years. Four modules within the system are particularly useful for criticality safety applications. Using the neutronics portion of the integrated code system, effective neutron multiplication values (k-eff values) have been calculated for a variety of benchmark critical experiments for metal systems (plutonium and uranium), aqueous systems (plutonium and uranium), and LWR fuel rod arrays. A description of the codes and methods used in the analysis and the results of the benchmark critical experiments are presented in this paper. In general, excellent agreement was found between calculated and experimental results. (Author)
Neutron spectral characterization of the PCA-PV benchmark facility
International Nuclear Information System (INIS)
Stallmann, F.W.; Kam, F.B.K.; Fabry, A.
1980-01-01
The Pool Critical Assembly (PCA) at the Oak Ridge National Laboratory is being used to generate the PCA-PV benchmark neutron field. A configuration consisting of steel blocks and water gaps is used to simulate the thermal shield and pressure vessel configurations in power reactors. The distances between the steel blocks can be changed so that the penetration of neutrons through water and steel can be determined and compared for many different configurations. Easy access and low flux levels make it possible to conduct extensive measurements using active and passive neutron dosimetry, which are impossible to perform in commercial reactors. The clean core and simple geometry facilitate neutron transport calculations, which can be validated in detail by comparison with measurements. A facility which has the same configuration of water and steel as the PCA-PV facility, but contains test specimens for materials testing, will be irradiated in the higher fluxes of the Oak Ridge Research Reactor. Using the results from the PCA-PV facility, the correlation between neutron flux/fluence and radiation damage in steel can be established. This facility is discussed in a separate paper
International Nuclear Information System (INIS)
2010-01-01
The overall objective of the CRP is contributing to the generic R&D efforts in various fields common to innovative fast neutron system development, i.e., heavy liquid metal thermal hydraulics, dedicated transmutation fuels and associated core designs, theoretical nuclear reaction models, measurement and evaluation of nuclear data for transmutation, and development and validation of calculational methods and codes. Ultimately, the CRP’s overall objective is to make contributions towards the realization of a transmutation demonstration facility
Parameter Curation for Benchmark Queries
Gubichev, Andrey; Boncz, Peter
2014-01-01
In this paper we consider the problem of generating parameters for benchmark queries so these have stable behavior despite being executed on datasets (real-world or synthetic) with skewed data distributions and value correlations. We show that uniform random sampling of the substitution parameters
Benchmarked Library Websites Comparative Study
Ramli, Rindra M.
2015-01-01
This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criterion and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.
PRISMATIC CORE COUPLED TRANSIENT BENCHMARK
Energy Technology Data Exchange (ETDEWEB)
J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov
2011-06-01
The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.
Simple mathematical law benchmarks human confrontations
Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto
2013-12-01
Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments
International Nuclear Information System (INIS)
Pevey, Ronald E.
2005-01-01
Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL
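The upper-subcritical-limit (USL) acceptance logic summarized in this abstract can be sketched as follows. All numerical values here (bias, USL, margin multiplier) are illustrative assumptions, not values from the paper; a real analysis derives the bias and its uncertainty from a benchmarking study.

```python
# Sketch of a USL acceptance test: a configuration is accepted as
# subcritical only if the bias-adjusted calculated k-effective, padded
# by n_sigma calculational standard deviations, stays below the USL.
def is_acceptably_subcritical(k_calc, sigma_calc, usl=0.95,
                              bias=0.005, n_sigma=2.0):
    k_adjusted = k_calc + bias + n_sigma * sigma_calc
    return k_adjusted < usl

# A longer Monte Carlo run (smaller sigma) needs less statistical padding:
print(is_acceptably_subcritical(0.93, 0.010))  # 0.955 >= 0.95 -> rejected
print(is_acceptably_subcritical(0.93, 0.002))  # 0.939 <  0.95 -> accepted
```

The sketch illustrates the paper's point: because the statistical pad scales with the calculational standard deviation, the choice of how long to run the Monte Carlo calculation affects both production cost and the margin actually applied.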
Quality management benchmarking: FDA compliance in pharmaceutical industry.
Jochem, Roland; Landgraf, Katja
2010-01-01
By analyzing and comparing industry and business best practice, processes can be optimized and become more successful mainly because efficiency and competitiveness increase. This paper aims to focus on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and five-stage model. Despite large administrations, there is much potential regarding business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is interesting for companies that like to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances for reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management and especially benchmarking is shown to support pharmaceutical industry improvements.
Energy Technology Data Exchange (ETDEWEB)
Suter, G.W. II [Oak Ridge National Lab., TN (United States); Mabrey, J.B. [University of West Florida, Pensacola, FL (United States)
1994-07-01
This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.
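The tiered screening comparison described above can be sketched as a simple lookup: a measured concentration is compared against every available benchmark, and the chemical is flagged according to which benchmarks it exceeds. The benchmark values below are placeholders for one hypothetical chemical, not values from the report.

```python
# Hypothetical screening benchmarks (ug/L) for a single chemical.
BENCHMARKS = {
    "acute NAWQC": 120.0,
    "chronic NAWQC": 11.0,
    "secondary chronic value": 5.3,
    "lowest chronic value, fish": 8.0,
}

def screen(concentration, benchmarks):
    """Return the names of all benchmarks that the concentration exceeds."""
    return [name for name, limit in benchmarks.items() if concentration > limit]

exceeded = screen(15.0, BENCHMARKS)
# Chronic-type benchmarks exceeded but not the acute NAWQC: per the report's
# logic, selection then weighs the number of benchmarks exceeded and the
# conservatism of each.
print(exceeded)
```

As the report recommends, a concentration is compared to all benchmarks rather than a single value, since the alternatives embody different conceptual approaches and degrees of conservatism.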
Energy Technology Data Exchange (ETDEWEB)
Poullot, G.; Dumont, V.; Anno, J.; Cousinou, P. [Institut de Radioprotection et de Surete Nucleaire (IRSN), 92 - Fontenay aux Roses (France); Grivot, P.; Girault, E.; Fouillaud, P.; Barbry, F. [CEA Valduc, 21 - Is-sur-Tille (France)
2003-07-01
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) aims to supply the international community with criticality benchmark experiments of certified quality, used to guarantee the qualification of criticality calculation codes. The following have been defined: a classification structure for the experiments, a standard presentation format, and a working structure with evaluation, internal and external checks, and presentation in plenary session. After a favourable opinion from the working group, the synthesis document, called an evaluation, is integrated into the general ICSBEP report. (N.C.)
Effects of exposure imprecision on estimation of the benchmark dose
DEFF Research Database (Denmark)
Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe
2004-01-01
In regression analysis, failure to adjust for imprecision in the exposure variable is likely to lead to underestimation of the exposure effect. However, the consequences of exposure error for the determination of safe doses of toxic substances have so far not received much attention. The benchmark approach is one of the most widely used methods for the development of exposure limits. An important advantage of this approach is that it can be applied to observational data. However, in this type of data, exposure markers are seldom measured without error. It is shown that, if the exposure error is ignored, then the benchmark approach produces results that are biased toward higher and less protective levels. It is therefore important to take exposure measurement error into account when calculating benchmark doses. Methods that allow this adjustment are described and illustrated with data from an epidemiological study
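The attenuation effect described in this abstract can be demonstrated with a minimal simulation: classical measurement error in the exposure flattens the fitted slope, which pushes the estimated benchmark dose (BMD) upward, i.e. toward less protective levels. The linear model and all numbers are illustrative assumptions, not taken from the study.

```python
# Simulate a linear dose-response, then refit using an error-prone
# exposure marker and compare the implied benchmark doses.
import random

random.seed(1)
n, true_slope, bmr = 5000, 1.0, 1.0          # bmr = benchmark response level
x = [random.gauss(0, 1) for _ in range(n)]   # true exposure
y = [true_slope * xi + random.gauss(0, 0.5) for xi in x]
x_err = [xi + random.gauss(0, 1) for xi in x]  # classical measurement error

def ols_slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

bmd_true = bmr / ols_slope(x, y)        # close to 1.0
bmd_naive = bmr / ols_slope(x_err, y)   # biased upward, roughly 2x here,
                                        # since error variance equals
                                        # exposure variance
print(bmd_true, bmd_naive)
```

Ignoring the error thus yields a higher, less protective BMD, which is exactly the direction of bias the abstract warns about.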
REVISED STREAM CODE AND WASP5 BENCHMARK
International Nuclear Information System (INIS)
Chen, K
2005-01-01
STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by the WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by the WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls
International Nuclear Information System (INIS)
Li, D.
1980-01-01
The effect of different system parameters on the critical heat flux density is reviewed in order to give an initial view of the values of several parameters. A thorough analysis of different equations for calculating burnout in steam-water flows in uniformly heated tubes, annular and rectangular channels, and rod bundles is carried out. The effects of heat flux density distribution and flux twisting on burnout, and the determination of burnout margins, are discussed [ru
International benchmark on the natural convection test in Phenix reactor
International Nuclear Information System (INIS)
Tenchine, D.; Pialla, D.; Fanning, T.H.; Thomas, J.W.; Chellapandi, P.; Shvetsov, Y.; Maas, L.; Jeong, H.-Y.; Mikityuk, K.; Chenu, A.; Mochizuki, H.; Monti, S.
2013-01-01
Highlights: ► Phenix main characteristics, instrumentation and natural convection test are described. ► “Blind” calculations and post-test calculations from all the participants to the benchmark are compared to reactor data. ► Lessons learned from the natural convection test and the associated calculations are discussed. -- Abstract: The French Phenix sodium cooled fast reactor (SFR) started operation in 1973 and was stopped in 2009. Before the reactor was definitively shutdown, several final tests were planned and performed, including a natural convection test in the primary circuit. During this natural convection test, the heat rejection provided by the steam generators was disabled, followed several minutes later by reactor scram and coast-down of the primary pumps. The International Atomic Energy Agency (IAEA) launched a Coordinated Research Project (CRP) named “control rod withdrawal and sodium natural circulation tests performed during the Phenix end-of-life experiments”. The overall purpose of the CRP was to improve the Member States’ analytical capabilities in the field of SFR safety. An international benchmark on the natural convection test was organized with “blind” calculations in a first step, then “post-test” calculations and sensitivity studies compared with reactor measurements. Eight organizations from seven Member States took part in the benchmark: ANL (USA), CEA (France), IGCAR (India), IPPE (Russian Federation), IRSN (France), KAERI (Korea), PSI (Switzerland) and University of Fukui (Japan). Each organization performed computations and contributed to the analysis and global recommendations. This paper summarizes the findings of the CRP benchmark exercise associated with the Phenix natural convection test, including blind calculations, post-test calculations and comparisons with measured data. General comments and recommendations are pointed out to improve future simulations of natural convection in SFRs
Simplified two and three dimensional HTTR benchmark problems
International Nuclear Information System (INIS)
Zhang Zhan; Rahnema, Farzad; Zhang Dingkang; Pounders, Justin M.; Ougouag, Abderrafi M.
2011-01-01
To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole core configurations. In this paper we have created two and three dimensional numerical benchmark problems typical of high temperature gas cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and pin fission density distributions for selected blocks. Also included are the solutions for the single-cell and single-block problems.
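The whole-core eigenvalue solutions reported above can be illustrated, in drastically simplified form, by a k-eigenvalue power iteration on a two-group infinite-medium problem. This is only a sketch of the underlying numerics; the cross-section values below are illustrative placeholders, not taken from the HTTR benchmark specification.

```python
import numpy as np

def k_infinity(sigma_a, sigma_s12, nu_sigma_f, chi):
    """Two-group infinite-medium k-eigenvalue via power iteration.

    sigma_a    : absorption cross sections per group
    sigma_s12  : scattering cross section from group 1 to group 2
    nu_sigma_f : nu * fission cross sections per group
    chi        : fission spectrum per group
    (All values are illustrative, not benchmark data.)
    """
    # Loss operator: absorption plus out-scatter, with down-scatter coupling
    L = np.array([[sigma_a[0] + sigma_s12, 0.0],
                  [-sigma_s12, sigma_a[1]]])
    # Production operator: chi_g * nu_sigma_f_g'
    F = np.outer(chi, nu_sigma_f)
    phi = np.ones(2)
    k = 1.0
    for _ in range(200):
        src = F @ phi / k                      # fission source, scaled by k
        phi_new = np.linalg.solve(L, src)      # "transport sweep" analogue
        k_new = k * (F @ phi_new).sum() / (F @ phi).sum()
        converged = abs(k_new - k) < 1e-10
        k, phi = k_new, phi_new
        if converged:
            break
    return k

k = k_infinity(sigma_a=[0.010, 0.080],
               sigma_s12=0.020,
               nu_sigma_f=[0.005, 0.135],
               chi=[1.0, 0.0])
```

For this choice of data the iteration reproduces the analytic two-group result k∞ = (νΣf1 + νΣf2·Σs12/Σa2)/(Σa1 + Σs12).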
Benchmarking computer platforms for lattice QCD applications
International Nuclear Information System (INIS)
Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.
2003-09-01
We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)
Benchmarking computer platforms for lattice QCD applications
International Nuclear Information System (INIS)
Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.
2004-01-01
We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC.
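The kind of measurement such a suite performs can be sketched with a tiny timing harness. The dense mat-vec kernel below is only a stand-in (a rough proxy for the matrix-vector products that dominate lattice QCD solvers), not part of the suite defined in these papers, and the sizes are arbitrary.

```python
import time

def matvec(A, x):
    """Naive dense matrix-vector product (placeholder kernel)."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def benchmark(n=200, reps=20):
    """Time the kernel and report throughput in MFLOP/s."""
    A = [[(i + j) % 7 / 7.0 for j in range(n)] for i in range(n)]
    x = [1.0] * n
    t0 = time.perf_counter()
    for _ in range(reps):
        y = matvec(A, x)
    elapsed = time.perf_counter() - t0
    flops = 2.0 * n * n * reps          # one multiply + one add per entry
    return flops / elapsed / 1e6, y     # MFLOP/s and the last result

mflops, y = benchmark()
```

A real suite would run several representative kernels at several problem sizes per platform and report the resulting throughput table.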
Ab initio and DFT benchmarking of tungsten nanoclusters and tungsten hydrides
International Nuclear Information System (INIS)
Skoviera, J.; Novotny, M.; Cernusak, I.; Oda, T.; Louis, F.
2015-01-01
We present several benchmark calculations comparing wave-function based methods and density functional theory for model systems containing tungsten. They include the W4 cluster as well as the W2, WH and WH2 molecules. (authors)
Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners
Directory of Open Access Journals (Sweden)
Luštický Martin
2012-03-01
Tourism development has an irreplaceable role in the regional policy of almost all countries, owing to its undeniable benefits for the local population in the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and thus enter a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and of the final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process, which lies in the search for suitable benchmarking partners. The partners are selected to meet general requirements that ensure the quality of the strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain, thereby validating the selected criteria in an international setting. In this way it makes it possible to find strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.
Benchmarking clinical photography services in the NHS.
Arbon, Giles
2015-01-01
Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.
Sensitivity analysis of critical experiments with evaluated nuclear data libraries
International Nuclear Information System (INIS)
Fujiwara, D.; Kosaka, S.
2008-01-01
Criticality benchmark testing was performed with evaluated nuclear data libraries for thermal, low-enriched uranium fuel rod applications. C/E values for keff were calculated with the continuous-energy Monte Carlo code MVP2 and its libraries generated from ENDF/B-VI.8, ENDF/B-VII.0, JENDL-3.3 and JEFF-3.1. Subsequently, the observed keff discrepancies between libraries were decomposed using a sensitivity analysis technique to identify the sources of the differences in the nuclear data libraries. The obtained sensitivity profiles are also utilized to assess the applicability of cold critical experiments to the boiling water reactor under hot operating conditions. (authors)
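The C/E bookkeeping described above amounts to a simple ratio and deviation per library. A minimal sketch follows; the keff values here are invented placeholders, not the MVP2 results from the paper.

```python
def c_over_e(calculated, experimental=1.0):
    """Return the C/E ratio and the deviation in pcm (1 pcm = 1e-5 in keff).

    For a critical benchmark configuration the experimental keff is 1.0
    by construction, so C/E reduces to the calculated value itself.
    """
    ratio = calculated / experimental
    pcm = (calculated - experimental) * 1e5
    return ratio, pcm

# Placeholder per-library results for one benchmark case
results = {"ENDF/B-VII.0": 0.99850, "JENDL-3.3": 1.00120, "JEFF-3.1": 0.99930}
for lib, keff in sorted(results.items()):
    ratio, pcm = c_over_e(keff)
    print(f"{lib:12s}  C/E = {ratio:.5f}  ({pcm:+.0f} pcm)")
```

The sensitivity-based decomposition in the paper then attributes the inter-library pcm differences to individual nuclide reactions, which is beyond this sketch.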
Directory of Open Access Journals (Sweden)
RYAN N. BRATTON
2014-06-01
A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for Uncertainty Analysis in Modeling (UAM) is defined in order to facilitate the development and validation of available uncertainty analysis and sensitivity analysis methods for best-estimate Light Water Reactor (LWR) design and safety calculations. The benchmark has been named the OECD/NEA UAM-LWR benchmark, and has been divided into three phases, each of which focuses on a different portion of the uncertainty propagation in LWR multi-physics and multi-scale analysis. Several different reactor cases are modeled at various phases of a reactor calculation. This paper discusses Phase I, known as the “Neutronics Phase”, which is devoted mostly to the propagation of nuclear data (cross-section) uncertainty throughout steady-state stand-alone neutronics core calculations. Three reactor systems (for which design, operation and measured data are available) are rigorously studied in this benchmark: Peach Bottom Unit 2 BWR, Three Mile Island Unit 1 PWR, and VVER-1000 Kozloduy-6/Kalinin-3. Additional measured data are analyzed, such as the KRITZ LEU criticality experiments and the SNEAK-7A and 7B experiments of the Karlsruhe Fast Critical Facility. Analyzed results include the top five neutron-nuclide reactions that contribute the most to the prediction uncertainty in keff, as well as the uncertainty in key parameters of neutronics analysis such as microscopic and macroscopic cross-sections, six-group decay constants, assembly discontinuity factors, and axial and radial core power distributions. Conclusions are drawn regarding where further studies should be done to reduce uncertainties in key nuclide reactions (i.e., 238U radiative capture and inelastic scattering (n,n′)), as well as in the average number of neutrons released per fission event of 239Pu.
A Framework for Urban Transport Benchmarking
Theuns Henning; Mohammed Dalil Essakali; Jung Eun Oh
2011-01-01
This report summarizes the findings of a study aimed at exploring key elements of a benchmarking framework for urban transport. Unlike many industries where benchmarking has proven to be successful and straightforward, the multitude of the actors and interactions involved in urban transport systems may make benchmarking a complex endeavor. It was therefore important to analyze what has bee...
Computational methods for nuclear criticality safety analysis
International Nuclear Information System (INIS)
Maragni, M.G.
1992-01-01
Nuclear criticality safety analyses require the use of methods that have been tested and verified against benchmark results. In this work, criticality calculations based on the KENO-IV and MCNP codes are studied, aiming at the qualification of these methods at IPEN-CNEN/SP and COPESP. The use of variance reduction techniques is important to reduce computer execution time, and several of them are analysed. As a practical application of the above methods, a criticality safety analysis has been carried out for the storage tubes for irradiated fuel elements from the IEA-R1 research reactor. This analysis showed that the MCNP code is more adequate for problems with complex geometries, while the KENO-IV code gives conservative results when the generalized geometry option is not used. (author)
The development of code benchmarks
International Nuclear Information System (INIS)
Glass, R.E.
1986-01-01
Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments is planned. The experiments will examine the elastic/plastic behavior of cylinders for both end and side impacts resulting from a nine-meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using material properties for stainless steel and aluminum.
Benchmark Results for Few-Body Hypernuclei
Ferrari Ruffino, F.; Lonardoni, D.; Barnea, N.; Deflorian, S.; Leidemann, W.; Orlandini, G.; Pederiva, F.
2017-05-01
The Non-Symmetrized Hyperspherical Harmonics (NSHH) method is introduced in the hypernuclear sector and benchmarked against three different ab initio methods, namely the Auxiliary Field Diffusion Monte Carlo method, the Faddeev-Yakubovsky approach and the Gaussian Expansion Method. Binding energies and hyperon separation energies of three- to five-body hypernuclei are calculated by employing the two-body ΛN component of the phenomenological Bodmer-Usmani potential (Bodmer and Usmani in Nucl Phys A 477:621, 1988; Usmani and Khanna in J Phys G 35:025105, 2008), and a hyperon-nucleon interaction (Hiyama et al. in Phys Rev C 65:011301, 2001) simulating the scattering phase shifts given by NSC97f (Rijken et al. in Phys Rev C 59:21, 1999). The range of applicability of the NSHH method is briefly discussed.
Development of solutions to benchmark piping problems
Energy Technology Data Exchange (ETDEWEB)
Reich, M; Chang, T Y; Prachuktam, S; Hartzman, M
1977-12-01
Benchmark problems and their solutions are presented. The problems consist of calculating the static and dynamic response of selected piping structures subjected to a variety of loading conditions. The structures range from simple pipe geometries to a representative full-scale primary nuclear piping system, which includes the various components and their supports. These structures are assumed to behave in a linear elastic fashion only, i.e., they experience small deformations and small displacements with no existing gaps, and remain elastic through their entire response. The solutions were obtained by using the program EPIPE, which is a modification of the widely available program SAP IV. A brief outline of the theoretical background of this program and its verification is also included.
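The static part of such a benchmark reduces, in its simplest possible form, to assembling a global stiffness matrix and solving K u = f. The sketch below does this for a two-element axial bar with illustrative properties; it is not one of the actual benchmark piping structures, just the solution pattern a code such as EPIPE/SAP IV applies at much larger scale.

```python
import numpy as np

def assemble_bar(EA_over_L, n_elems):
    """Global stiffness of a uniform axial bar, fixed at node 0.

    EA_over_L is the axial stiffness of each element (illustrative value).
    Returns the reduced matrix with the fixed node eliminated.
    """
    n = n_elems + 1
    K = np.zeros((n, n))
    # Standard 2-node axial element stiffness
    ke = EA_over_L * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += ke
    return K[1:, 1:]            # eliminate the fixed degree of freedom

K = assemble_bar(EA_over_L=2.0e6, n_elems=2)   # N/m per element (assumed)
f = np.array([0.0, 1.0e3])                     # 1 kN load at the free tip
u = np.linalg.solve(K, f)                      # nodal displacements in m
```

For two identical springs in series the hand solution gives a mid-node displacement of F/k and a tip displacement of 2F/k, which the solve reproduces.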
Thermal Performance Benchmarking: Annual Report
Energy Technology Data Exchange (ETDEWEB)
Moreno, Gilbert
2016-04-08
The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.
Closed-Loop Neuromorphic Benchmarks
Directory of Open Access Journals (Sweden)
Terrence C Stewart
2015-12-01
Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of minimal simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled.
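The error-driven learning idea can be sketched on a 1-D point mass with an unknown constant disturbance force. The gains, learning rate, and plant below are illustrative assumptions, not the paper's neuromorphic implementation or its benchmark family.

```python
def simulate(learn_rate, steps=5000, dt=0.01):
    """Drive a unit point mass to a target with PD control plus an
    error-driven learned bias w that compensates an unknown force.
    Returns the final absolute position error."""
    pos, vel, target = 0.0, 0.0, 1.0
    w = 0.0                      # learned compensation term
    unknown_force = -3.0         # hidden from the controller
    kp, kd = 50.0, 15.0          # fixed PD gains (illustrative)
    for _ in range(steps):
        err = target - pos
        u = kp * err - kd * vel + w
        w += learn_rate * err * dt   # error-driven weight update
        acc = u + unknown_force      # unit mass: a = total force
        vel += acc * dt              # explicit Euler integration
        pos += vel * dt
    return abs(target - pos)

final_err_no_learning = simulate(learn_rate=0.0)
final_err_learning = simulate(learn_rate=5.0)
```

The learned weight w acts as integral action: without learning the PD controller leaves a steady-state error of |F|/kp, while the error-driven update drives that residual toward zero.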
Nuclear criticality safety analysis for the UX-30 transport freight package
International Nuclear Information System (INIS)
Quan Yanhui; Zhou Qi; Yin Shenggui
2014-01-01
The nuclear criticality safety analysis and evaluation for the UX-30 transport freight package under normal and accident conditions were carried out with the MONK-9A and MCNP codes. Firstly, publicly available international criticality benchmark experiment data were selected, and the calculational biases and subcritical limit values of the MONK-9A and MCNP codes for the same material forms were validated and confirmed. Secondly, the effective neutron multiplication factors under normal and accident conditions were calculated and analyzed, and the safety of the transport process was evaluated under conservative nuclear criticality safety assumptions. The calculation results show that the maximum keff for the UX-30 transport freight package is less than the subcritical limit value, and the UX-30 transport freight package is in a state of subcritical safety. Moreover, the criticality safety index (CSI) for the UX-30 package can be set to zero based on the definition of the criticality safety index. (authors)
Energy Technology Data Exchange (ETDEWEB)
Sogn, T.A.; Stuanes, A.O.; Abrahamsen, G.
1996-01-01
The conference paper deals with the accumulation of nitrogen in forests in Norway. The rate of accumulation is a critical factor for the calculation of load limits. The paper compares the average rates of accumulation since the last glacial age with values calculated for shorter recent periods, based on data from the surveying programs of the State Pollution Control Authority, fertilization experiments, and other relevant research programs in this field. 8 refs., 1 fig., 1 tab.
Adjoint electron Monte Carlo calculations
International Nuclear Information System (INIS)
Jordan, T.M.
1986-01-01
Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment
Development and benchmark analysis of the hybrid evaluated nuclear data library HENDL1.0
International Nuclear Information System (INIS)
Xu Dezheng; Wu Yican; Gao Chunjing; Zheng Shanliang; Li Jingjing; Zhu Xiaoxiang; Liu Haibo
2004-01-01
To meet the requirements of fusion-fission sub-critical hybrid reactor design and other related studies, the evaluated nuclear data library HENDL1.0/E has been constructed based on several major national evaluated data libraries. The relevant working libraries, including the transport sub-libraries HENDL1.0/MG in groupwise form and HENDL1.0/MC in pointwise form, the burnup sub-library HENDL1.0/BU and the response function sub-library HENDL1.0/RF, were generated using the nuclear data processing codes NJOY97 and TRANSX2. Simulation calculations and comparative analyses were carried out against a series of existing benchmark experiments with popular neutron transport codes, in order to validate the correctness and availability of HENDL1.0. (authors)
Emhjellen, Kjetil
1997-01-01
Doctoral thesis (dr.ing.) - Høgskolen i Telemark / Norges teknisk-naturvitenskapelige universitet. Since the first publication on benchmarking in 1989 by Robert C. Camp, “Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance”, the improvement technique benchmarking has been established as an important tool in the process-focused manufacturing or production environment. The use of benchmarking has expanded to other types of industry. Benchmarking has past t...
Argonne Code Center: Benchmark problem book.
Energy Technology Data Exchange (ETDEWEB)
None, None
1977-06-01
This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.
Benchmark of neutron production cross sections with Monte Carlo codes
Tsai, Pi-En; Lai, Bo-Lun; Heilbronn, Lawrence H.; Sheu, Rong-Jiun
2018-02-01
Aiming to provide critical information in the fields of heavy-ion therapy, radiation shielding in space, and facility design for heavy-ion research accelerators, the physics models in three Monte Carlo simulation codes - PHITS, FLUKA, and MCNP6 - were systematically benchmarked against fifteen sets of experimental data for neutron production cross sections, which include various combinations of 12C, 20Ne, 40Ar, 84Kr and 132Xe projectiles and natLi, natC, natAl, natCu, and natPb target nuclides at incident energies between 135 MeV/nucleon and 600 MeV/nucleon. For neutron energies above 60% of the projectile energy per nucleon, LAQGSM03.03 in MCNP6, JQMD/JQMD-2.0 in PHITS, and RQMD-2.4 in FLUKA all show better agreement with data in heavy-projectile systems than in light-projectile systems, suggesting that the collective properties of projectile nuclei and nucleon interactions in the nucleus should be considered for light projectiles. For intermediate-energy neutrons, whose energies are below 60% of the projectile energy per nucleon and above 20 MeV, FLUKA is likely to overestimate the secondary neutron production, while MCNP6 tends towards underestimation. PHITS with JQMD shows a mild tendency for underestimation, but the JQMD-2.0 model, with a modified physics description for central collisions, generally improves the agreement between data and calculations. For low-energy neutrons (below 20 MeV), which are dominated by the evaporation mechanism, PHITS (which uses GEM linked with JQMD and JQMD-2.0) and FLUKA both tend to overestimate the production cross section, whereas MCNP6 tends to underestimate more systems than to overestimate. For total neutron production cross sections, the trends of the benchmark results over the entire energy range are similar to the trends seen in the dominant energy region. Also, the comparison of GEM coupled with either JQMD or JQMD-2.0 in the PHITS code indicates that the model used to describe the first
Thermal reactor benchmark testing of 69 group library
International Nuclear Information System (INIS)
Liu Guisheng; Wang Yaoqing; Liu Ping; Zhang Baocheng
1994-01-01
Using the code system NSLINK, AMPX master libraries in the WIMS 69-group structure were produced for nuclides from the 4 newest evaluated nuclear data libraries. Some integral parameters of 10 thermal reactor benchmark assemblies recommended by the U.S. CSEWG were calculated using the rectified PASC-1 code system and compared with foreign results; the authors' results are in good agreement with the others. 69-group libraries of the evaluated data bases in the TPFAP interface file were generated with the NJOY code system. The k∞ values of 6 cell lattice assemblies were calculated with the code CBM. The calculated results are analysed and compared.
Calculation of neutron importance function in fissionable assemblies using Monte Carlo method
International Nuclear Information System (INIS)
Feghhi, S.A.H.; Shahriari, M.; Afarideh, H.
2007-01-01
The purpose of the present work is to develop an efficient solution method for the calculation of the neutron importance function in fissionable assemblies for all criticality conditions, based on Monte Carlo calculations. The neutron importance function plays an important role in perturbation theory and reactor dynamics calculations. Usually this function is determined by calculating the adjoint flux from the adjoint-weighted transport equation using deterministic methods; in complex geometries, however, these calculations become very complicated. In this article, considering the capabilities of the MCNP code in solving problems with complex geometries and its closeness to physical concepts, a comprehensive method based on the physical concept of neutron importance has been introduced for calculating the neutron importance function in sub-critical, critical and super-critical conditions. For this purpose a computer program has been developed. The results of the method have been benchmarked against ANISN code calculations in 1- and 2-group modes for simple geometries, and their correctness has been confirmed for all three criticality conditions. Finally, the efficiency of the method for complex geometries is demonstrated by the calculation of neutron importance in the Miniature Neutron Source Reactor (MNSR) research reactor.
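The physical concept of importance, i.e. the expected contribution of one neutron's descendants, can be illustrated with a zero-dimensional branching-process toy. This is only a conceptual sketch with made-up parameters, not the MCNP/ANISN workflow of the paper: in a subcritical medium with multiplication k, the expected number of descendants of one neutron is k/(1-k), which a simple Monte Carlo estimate should reproduce.

```python
import random

def progeny_count(p_fission, nu, rng, max_gen=200):
    """Total descendants of one starting neutron in a branching process.

    Each neutron causes fission with probability p_fission, releasing
    nu new neutrons; k = p_fission * nu < 1 keeps the chain subcritical.
    """
    total, current = 0, 1
    for _ in range(max_gen):
        nxt = 0
        for _ in range(current):
            if rng.random() < p_fission:
                nxt += nu
        total += nxt
        if nxt == 0:
            break
        current = nxt
    return total

rng = random.Random(12345)          # fixed seed for reproducibility
p, nu = 0.3, 2                      # illustrative values: k = 0.6
n = 20000
estimate = sum(progeny_count(p, nu, rng) for _ in range(n)) / n
analytic = (p * nu) / (1 - p * nu)  # expected descendants: k/(1-k) = 1.5
```

A spatially resolved version of this expectation, tallied per starting position and direction, is essentially what a Monte Carlo importance calculation estimates.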