Benchmarking criticality safety calculations with subcritical experiments
International Nuclear Information System (INIS)
Mihalczo, J.T.
1984-06-01
Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations, but it is not always sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases, subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained and compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where such comparisons were not possible, examples from critical experiments have been used, but the measurement methods apply equally to subcritical experiments.
Analysis on First Criticality Benchmark Calculation of HTR-10 Core
International Nuclear Information System (INIS)
Zuhair; Ferhat-Aziz; As-Natio-Lasman
2000-01-01
HTR-10 is a graphite-moderated, helium-cooled pebble-bed reactor with an average helium outlet temperature of 700 °C and a thermal power of 10 MW. The first-criticality benchmark problem of HTR-10 in this paper includes the calculation of the number of fuel balls (UO2 with a U-235 enrichment of 17%) to be loaded for first criticality under a helium atmosphere at a core temperature of 20 °C, and the calculation of the effective multiplication factor (keff) of the full core (5 m³) under a helium atmosphere at various core temperatures. The group constants of the fuel mixture, moderator and reflector materials were generated with WIMS/D4 using a spherical model and 4 neutron energy groups. The critical core height of 150.1 cm obtained from CITATION in 2-D R-Z reactor geometry lies within the range calculated by INET (China), JAERI (Japan), BATAN (Indonesia) and OKBM (Russia). The keff results for the full core at various temperatures show that the HTR-10 has a negative temperature coefficient of reactivity. (author)
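The negative temperature coefficient reported above can be estimated from two full-core keff calculations at different temperatures. A minimal sketch, with purely illustrative numbers (not the HTR-10 values):

```python
def temperature_coefficient(t1, k1, t2, k2):
    """Reactivity temperature coefficient (delta-rho per degree C)
    from two effective multiplication factors; a negative value
    means reactivity falls as the core heats up."""
    rho1 = (k1 - 1.0) / k1  # reactivity at temperature t1
    rho2 = (k2 - 1.0) / k2  # reactivity at temperature t2
    return (rho2 - rho1) / (t2 - t1)

# Illustrative values only: k-eff falling from 1.05 at 20 degC to 1.00 at 300 degC
alpha = temperature_coefficient(20.0, 1.05, 300.0, 1.00)
print(alpha < 0)  # negative coefficient, as reported for HTR-10
```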
International Nuclear Information System (INIS)
Richet, Y.; Jacquet, O.; Bay, X.
2005-01-01
The accuracy of an iterative Monte Carlo calculation requires the convergence of the simulation output process. The present paper deals with a post-processing algorithm, applied to criticality calculations, that suppresses the transient due to initialization. It should be noted that this initial transient suppression aims only at obtaining a stationary output series; the convergence of the calculation must still be guaranteed independently. The transient suppression algorithm consists of repeatedly truncating the first observations of the output process; the truncation is repeated as long as a stationarity test based on Brownian bridge theory is negative. This transient suppression method was previously tuned on a simplified model of criticality calculations, whereas this paper focuses on its efficiency for real criticality calculations. The efficiency test is based on four benchmarks with strong source convergence problems: 1) a checkerboard storage of fuel assemblies, 2) a pin cell array with irradiated fuel, 3) three one-dimensional thick slabs, and 4) an array of interacting fuel spheres. It appears that the transient suppression method needs to be validated more widely on real criticality calculations before any blind use as a post-processing step in criticality codes.
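The repeated-truncation scheme described above can be sketched as follows. This is an assumed, simplified reading of the algorithm (the exact test statistic and truncation step used in the paper may differ): the centered cumulative sums of the output series are treated as a Brownian bridge, and the head of the series is dropped while the bridge statistic signals non-stationarity.

```python
import numpy as np

def bridge_stat(x):
    """Maximum absolute value of the standardized Brownian bridge
    built from the centered cumulative sums of the series."""
    n = len(x)
    centered = x - x.mean()
    bridge = np.cumsum(centered) / (x.std(ddof=1) * np.sqrt(n))
    return np.abs(bridge).max()

def suppress_transient(x, crit=1.36, step=0.1):
    """Repeatedly truncate the head of the series while the bridge
    statistic exceeds a critical value (1.36 ~ 5% Kolmogorov level)."""
    x = np.asarray(x, dtype=float)
    while len(x) > 20 and bridge_stat(x) > crit:
        x = x[int(step * len(x)):]   # drop the first 10% of the series
    return x
```

Applied to a simulated k-eff trace with a drifting initialization head followed by stationary noise, the loop discards the drifting head and returns the stationary tail.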
Benchmark test of JEF-1 evaluation by calculating fast criticalities
International Nuclear Information System (INIS)
Pelloni, S.
1986-06-01
The JEF-1 basic evaluation was tested by calculating fast critical experiments with the discrete-ordinates transport code ONEDANT in a P3S16 approximation. In each computation a one-dimensional spherical model was used, together with a 174-neutron-group, VITAMIN-E-structured, JEF-1-based nuclear data library generated at EIR with NJOY and TRANSX-CTR. It is found that the JEF-1 evaluation gives accurate results comparable with ENDF/B-V: eigenvalues agree within 10 mk, whereas reaction rates deviate by up to 10% from experiment. The U-233 total and fission cross sections appear to be underestimated in the JEF-1 evaluation in the fast energy range between 0.1 and 1 MeV. This confirms a previous analysis based on diffusion theory with 71 neutron groups, performed by H. Takano and E. Sartori at the NEA Data Bank. (author)
Validation of VHTRC calculation benchmark of critical experiment using the MCB code
Directory of Open Access Journals (Sweden)
Stanisz Przemysław
2016-01-01
The calculation benchmark problem of the Very High Temperature Reactor Critical assembly (VHTRC), a pin-in-block type core critical assembly, has been investigated with the Monte Carlo Burnup (MCB) code in order to validate the latest versions of nuclear data libraries in ENDF format. The benchmark was executed on the basis of the VHTRC case available from the International Handbook of Evaluated Reactor Physics Benchmark Experiments. This benchmark is useful for examining the discrepancies in keff values between various libraries and experimental values, which helps improve the accuracy of neutron transport calculations and thereby the design of high-performance commercial VHTRs. Almost all safety parameters depend on the accuracy of the neutron transport calculation results, which in turn depend on the accuracy of the nuclear data libraries; evaluating the applicability of the libraries to VHTR modelling is therefore an important subject. We compared the numerical results with experimental measurements using two versions of the available nuclear data (ENDF/B-VII.1 and JEFF-3.2) prepared for the required temperatures. Calculations were performed with the MCB code, which provides a very precise representation of the complex VHTR geometry, including the double heterogeneity of a fuel element. In this paper, together with the impact of the nuclear data, we also discuss the impact of different lattice modelling inside the fuel pins. The keff results show good agreement with each other and with the experimental data within the 1 σ range of the experimental uncertainty. Because some propagated discrepancies were observed, we propose appropriate corrections to the experimental constants which can improve the reactivity coefficient dependency. The results obtained confirm the accuracy of the new nuclear data libraries.
Criticality reference benchmark calculations for burnup credit using spent fuel isotopics
International Nuclear Information System (INIS)
Bowman, S.M.
1991-04-01
To date, criticality analyses performed in support of the certification of spent fuel casks in the United States take no credit for the reactivity reduction that results from burnup. By taking credit for fuel burnup, commonly referred to as ''burnup credit,'' the fuel loading capacity of these casks can be increased. One of the difficulties in implementing burnup credit in criticality analyses is that no critical experiments have been performed with spent fuel that can be used for computer code validation. In lieu of such experiments, a reference problem set of fresh fuel critical experiments modelling various conditions typical of light water reactor (LWR) transportation and storage casks has been identified and used in the validation of SCALE-4. This report documents the use of this same problem set to perform spent fuel criticality benchmark calculations by replacing the actual fresh fuel isotopics from the experiments with six different sets of calculated spent fuel isotopics. The SCALE-4 modules SAS2H and CSAS4 were used to perform the analyses. These calculations do not model actual critical experiments; the calculated k-effectives are not expected to equal unity and vary with the initial enrichment and burnup of the calculated spent fuel isotopics. 12 refs., 11 tabs
OECD/NEA burnup credit calculational criticality benchmark Phase I-B results
Energy Technology Data Exchange (ETDEWEB)
DeHart, M.D.; Parks, C.V. [Oak Ridge National Lab., TN (United States); Brady, M.C. [Sandia National Labs., Las Vegas, NV (United States)
1996-06-01
In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is assumed for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in their ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Most deviations are less than 10%, and many are less than 5%. The exceptions are 149Sm, 151Sm, and 155Gd.
International Nuclear Information System (INIS)
Oh, I.; Rothe, R.E.
1978-01-01
Criticality calculations on minimally reflected, concrete-reflected, and plastic-reflected single tanks and on arrays of cylinders reflected by concrete and plastic have been performed using the KENO-IV code with 16-group Hansen-Roach neutron cross sections. The fissile material was highly enriched (93.17% 235U) uranyl nitrate [UO2(NO3)2] solution. Calculated results are compared with those from a benchmark critical experiments program to provide the best possible verification of the calculational technique. The calculated keff's underestimate the critical condition by an average of 1.28% for the minimally reflected single tanks, 1.09% for the concrete-reflected single tanks, 0.60% for the plastic-reflected single tanks, 0.75% for the concrete-reflected arrays of cylinders, and 0.51% for the plastic-reflected arrays of cylinders. More than half of the present comparisons were within 1% of the experimental values, and the worst discrepancy between calculation and experiment was 2.3% in keff for the KENO calculations.
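The average underestimation figures quoted above (1.28%, 1.09%, ...) amount to the mean of 1 − keff over the cases in each reflector category, since every benchmark experiment was critical (keff = 1). A minimal sketch with made-up keff values:

```python
def mean_underestimation_pct(keff_values):
    """Average underestimation of the critical condition (k = 1),
    in percent, over a set of calculated k-eff values."""
    return 100.0 * sum(1.0 - k for k in keff_values) / len(keff_values)

# Hypothetical calculated k-eff values for one reflector category
print(round(mean_underestimation_pct([0.990, 0.987, 0.988]), 2))  # 1.17
```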
Energy Technology Data Exchange (ETDEWEB)
Okuno, Hiroshi; Naito, Yoshitaka [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ando, Yoshihira [Toshiba Corp., Kawasaki, Kanagawa (Japan)
2000-09-01
The report describes the final results of the Phase IIIA benchmarks conducted by the Burnup Credit Criticality Calculation Working Group under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD/NEA). The benchmarks are intended to confirm the predictive capability of current computer code and data library combinations for the neutron multiplication factor (keff) of a layer of an irradiated BWR fuel assembly array model. In total, 22 benchmark problems are proposed for calculations of keff. The effects of the following parameters are investigated: cooling time, inclusion/exclusion of FP nuclides and the axial burnup profile, and use of an axial void fraction profile or constant void fractions during burnup. Axial profiles of fractional fission rates are further requested for five of the 22 problems. Twenty-one sets of results are presented, contributed by 17 institutes from 9 countries. The relative dispersion of the keff values calculated by the participants about the mean value is almost entirely within a band of ±1% Δk/k. The deviations from the averaged calculated fission rate profiles are within ±5% for most cases. (author)
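The ±1% Δk/k dispersion band is a statement about each participant's deviation from the mean of the submitted keff values. A small sketch of that check (the submitted values below are invented):

```python
def relative_dispersion_pct(values):
    """Percent deviation of each submitted value from the mean
    of all submissions."""
    mean = sum(values) / len(values)
    return [100.0 * (v - mean) / mean for v in values]

submitted = [1.120, 1.125, 1.118, 1.122]      # hypothetical k-eff results
spread = relative_dispersion_pct(submitted)
print(all(abs(d) <= 1.0 for d in spread))     # True: within the +/-1% band
```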
OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results
Energy Technology Data Exchange (ETDEWEB)
DeHart, M.D.
1993-01-01
Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in their ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are {sup 149}Sm, {sup 151}Sm, and {sup 155}Gd.
Handbook of critical experiments benchmarks
International Nuclear Information System (INIS)
Durst, B.M.; Bierman, S.R.; Clayton, E.D.
1978-03-01
Data from critical experiments have been collected for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exist. The primary objective in collecting these data is to present representative experimental data, defined in a concise, standardized format, that can easily be translated into computer code input
International Nuclear Information System (INIS)
Okuno, Hiroshi
2003-01-01
A method for classifying benchmark results of criticality calculations according to similarity is proposed in this paper. After formulation of the method in terms of correlation coefficients, it was applied to the burnup credit criticality benchmarks Phase III-A and Phase II-A, which were conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency of the Organisation for Economic Cooperation and Development (OECD/NEA). The Phase III-A benchmark was a series of criticality calculations for irradiated Boiling Water Reactor (BWR) fuel assemblies, whereas the Phase II-A benchmark was a suite of criticality calculations for irradiated Pressurized Water Reactor (PWR) fuel pins. These benchmark problems and their results are summarized. The correlation coefficients were calculated, and the sets of benchmark results were classified according to the criterion that the correlation coefficient be no less than 0.15 for the Phase III-A benchmark and 0.10 for the Phase II-A benchmark. When two benchmark calculation results belonged to the same group, one result was found to be predictable from the other; an example is shown for each benchmark. While the evaluated nuclear data appear to be the main factor behind the classification, further investigation is required to identify other factors. (author)
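The grouping criterion described above (results belong together when their correlation coefficient meets a threshold) can be sketched as follows. The data layout and the single-linkage grouping via union-find are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def classify_by_correlation(results, threshold=0.15):
    """Group result sets whose pairwise correlation coefficient is
    at least `threshold` (single-linkage grouping via union-find).
    `results` maps a participant name to its list of benchmark values."""
    labels = list(results)
    data = np.array([results[k] for k in labels])
    corr = np.corrcoef(data)             # pairwise Pearson coefficients
    parent = list(range(len(labels)))

    def find(i):                         # union-find root with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if corr[i, j] >= threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i, name in enumerate(labels):
        groups.setdefault(find(i), []).append(name)
    return list(groups.values())
```

With three toy result sets where "A" and "B" rise together but "C" moves oppositely, the function returns two groups: {A, B} and {C}.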
Energy Technology Data Exchange (ETDEWEB)
Ivanova, T.; Laville, C. [Institut de Radioprotection et de Surete Nucleaire IRSN, BP 17, 92262 Fontenay aux Roses (France); Dyrda, J. [Atomic Weapons Establishment AWE, Aldermaston, Reading, RG7 4PR (United Kingdom); Mennerdahl, D. [E Mennerdahl Systems EMS, Starvaegen 12, 18357 Taeby (Sweden); Golovko, Y.; Raskach, K.; Tsiboulia, A. [Inst. for Physics and Power Engineering IPPE, 1, Bondarenko sq., 249033 Obninsk (Russian Federation); Lee, G. S.; Woo, S. W. [Korea Inst. of Nuclear Safety KINS, 62 Gwahak-ro, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Bidaud, A.; Sabouri, P. [Laboratoire de Physique Subatomique et de Cosmologie LPSC, CNRS-IN2P3/UJF/INPG, Grenoble (France); Patel, A. [U.S. Nuclear Regulatory Commission (NRC), Washington, DC 20555-0001 (United States); Bledsoe, K.; Rearden, B. [Oak Ridge National Laboratory ORNL, M.S. 6170, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Gulliford, J.; Michel-Sendis, F. [OECD/NEA, 12, Bd des Iles, 92130 Issy-les-Moulineaux (France)
2012-07-01
The sensitivities of the k{sub eff} eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
Energy Technology Data Exchange (ETDEWEB)
Leal, L.C.; Wright, R.Q.
1996-10-01
In this report we investigate the adequacy of the available {sup 233}U cross-section data for calculation of experimental critical systems. The {sup 233}U evaluations provided in two evaluated nuclear data libraries, the U.S. Data Bank [ENDF/B (Evaluated Nuclear Data Files)] and the Japanese Data Bank [JENDL (Japanese Evaluated Nuclear Data Library)] are examined. Calculations were performed for six thermal and ten fast experimental critical systems using the S{sub n} transport XSDRNPM code. To verify the performance of the {sup 233}U cross-section data for nuclear criticality safety application in which the neutron energy spectrum is predominantly in the epithermal energy range, calculations of four numerical benchmark systems with energy spectra in the intermediate energy range were done. These calculations serve only as an indication of the difference in calculated results that may be expected when the two {sup 233}U cross-section evaluations are used for problems with neutron spectra in the intermediate energy range. Additionally, comparisons of experimental and calculated central fission rate ratios were also made. The study has suggested that an ad hoc {sup 233}U evaluation based on the JENDL library provides better overall results for both fast and thermal experimental critical systems.
MOx Depletion Calculation Benchmark
International Nuclear Information System (INIS)
San Felice, Laurence; Eschbach, Romain; Dewi Syarifah, Ratna; Maryam, Seif-Eddine; Hesketh, Kevin
2016-01-01
Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study the reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling of these phenomena in present and future nuclear power systems. The WPRS has different expert groups to cover a wide range of scientific issues in these fields. The Expert Group on Reactor Physics and Advanced Nuclear Systems (EGRPANS) was created in 2011 to perform specific tasks associated with reactor physics aspects of present and future nuclear power systems. EGRPANS provides expert advice to the WPRS and the nuclear community on development needs (data and methods, validation experiments, scenario studies) for different reactor systems, and also provides specific technical information regarding core reactivity characteristics (including fuel depletion effects), core power/flux distributions, and core dynamics and reactivity control. In 2013 EGRPANS published a report investigating fuel depletion effects in a Pressurised Water Reactor (PWR), entitled 'International Comparison of a Depletion Calculation Benchmark on Fuel Cycle Issues' NEA/NSC/DOC(2013), which documented a benchmark exercise for UO2 fuel rods. The present report documents a complementary benchmark exercise focused on PuO2/UO2 Mixed Oxide (MOX) fuel rods. The results are especially relevant to the back end of the fuel cycle, including irradiated fuel transport, reprocessing, interim storage and waste repositories. Saint-Laurent B1 (SLB1) was the first French reactor to use MOX assemblies. SLB1 is a 900 MWe PWR with 30% MOX fuel loading. The standard MOX assemblies used in the Saint-Laurent B1 reactor include three zones with different plutonium contents: high Pu content (5.64%) in the central zone, medium Pu content (4.42%) in the intermediate zone and low Pu content (2.91%) in the peripheral zone
Benchmark calculation of SCALE-PC 4.3 CSAS6 module and burnup credit criticality analysis
Energy Technology Data Exchange (ETDEWEB)
Shin, Hee Sung; Ro, Seong Gy; Shin, Young Joon; Kim, Ik Soo [Korea Atomic Energy Research Institute, Taejon (Korea)
1998-12-01
The calculational biases of the SCALE-PC CSAS6 module for PWR spent fuel, metallized spent fuel and solutions of nuclear materials have been determined on the basis of the benchmark to be 0.01100, 0.02650 and 0.00997, respectively. With the aid of the code system, a nuclear criticality safety analysis of the spent fuel storage pool has been carried out to determine the minimum burnup of spent fuel required for safe storage. The criticality safety analysis was performed using three types of spent fuel isotopic composition: ORIGEN2-calculated isotopic compositions; the conservative inventory obtained by multiplying the ORIGEN2-calculated isotopic compositions by isotopic correction factors; and the conservative inventory of U, Pu and {sup 241}Am only. The results show that the minimum burnups for the three cases are 990, 6190 and 7270 MWd/tU, respectively, for spent fuel with an initial enrichment of 5.0 wt%. (author). 74 refs., 68 figs., 35 tabs.
International Nuclear Information System (INIS)
Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya
2002-02-01
The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated PWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to lie for the most part within a range of ±10% relative to the average, although some results, especially for 155Eu and the gadolinium isotopes, exceeded the band and will require further investigation. Pin-wise burnup results agreed well among the participants. The results for the infinite neutron multiplication factor k∞ also accorded well with each other for void fractions of 0 and 40%; however, some results deviated noticeably from the averaged value for the void fraction of 70%. (author)
MCNP calculations for criticality-safety benchmarks with ENDF/B-V and ENDF/B-VI libraries
International Nuclear Information System (INIS)
Iverson, J.L.; Mosteller, R.D.
1995-01-01
The MCNP Monte Carlo code, in conjunction with its continuous-energy ENDF/B-V and ENDF/B-VI cross-section libraries, has been benchmarked against results from 27 different critical experiments. The predicted values of k eff are in excellent agreement with the benchmarks, except for the ENDF/B-V results for solutions of plutonium nitrate and, to a lesser degree, for the ENDF/B-V and ENDF/B-VI results for a bare sphere of 233 U
Benchmark neutron porosity log calculations
International Nuclear Information System (INIS)
Little, R.C.; Michael, M.; Verghese, K.; Gardner, R.P.
1989-01-01
Calculations have been made for a benchmark neutron porosity log problem with the general purpose Monte Carlo code MCNP and the specific purpose Monte Carlo code McDNL. For accuracy and timing comparison purposes the CRAY XMP and MicroVax II computers have been used with these codes. The CRAY has been used for an analog version of the MCNP code while the MicroVax II has been used for the optimized variance reduction versions of both codes. Results indicate that the two codes give the same results within calculated standard deviations. Comparisons are given and discussed for accuracy (precision) and computation times for the two codes
KENO-IV code benchmark calculation, (6)
International Nuclear Information System (INIS)
Nomura, Yasushi; Naito, Yoshitaka; Yamakawa, Yasuhiro.
1980-11-01
A series of benchmark tests has been undertaken in JAERI in order to examine the capability of JAERI's criticality safety evaluation system, which consists of the Monte Carlo calculation code KENO-IV and the newly developed multigroup constants library MGCL. The present report describes the results of a benchmark test using criticality experiments on plutonium fuel in various shapes. In all, 33 cases of experiments have been calculated for Pu(NO 3 ) 4 aqueous solution, Pu metal or PuO 2 -polystyrene compact in various shapes (sphere, cylinder, rectangular parallelepiped). The effective multiplication factors calculated for the 33 cases are distributed widely between 0.955 and 1.045 due to the wide range of system variables. (author)
EPRI depletion benchmark calculations using PARAGON
International Nuclear Information System (INIS)
Kucukboyaci, Vefa N.
2015-01-01
Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • The benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduce the excess conservatism and bring the predictions closer to the benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (the Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for burnups of 10, 20, 30, 40, 50, and 60 GWd/MTU and three cooling times (100 h, 5 years, and 15 years). These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess whether the 5% decrement approach is conservative for determining depletion uncertainty
Pool critical assembly pressure vessel facility benchmark
International Nuclear Information System (INIS)
Remec, I.; Kam, F.B.K.
1997-07-01
This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. Excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio obtained with the BUGLE-93 library for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used
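The C/M averaging described above reduces to a mean and standard deviation over dosimeter ratios, with flagged positions excluded. A minimal sketch with hypothetical dosimeter names and ratios, not the report's data:

```python
# Illustrative sketch (hypothetical values): arithmetic mean and standard
# deviation of calculated-to-measured equivalent-fission-flux ratios,
# optionally excluding flagged dosimeters (e.g. overpredicted positions).
import statistics

def cm_summary(cm_ratios, exclude=()):
    kept = [r for name, r in cm_ratios.items() if name not in exclude]
    return statistics.mean(kept), statistics.stdev(kept)

cm_ratios = {"Al/Co": 0.91, "Ni": 0.95, "In": 0.93, "Np-237": 1.18}
mean, sd = cm_summary(cm_ratios, exclude={"Np-237"})
print(f"{mean:.2f} +/- {sd:.2f}")  # -> 0.93 +/- 0.02
```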
PHEBUS-FPTO Benchmark calculations
International Nuclear Information System (INIS)
Shepherd, I.; Ball, A.; Trambauer, K.; Barbero, F.; Olivar Dominguez, F.; Herranz, L.; Biasi, L.; Fermandjian, J.; Hocke, K.
1991-01-01
This report summarizes a set of pre-test predictions made for the first Phebus-FP test, FPT-O. There were many different calculations, performed by various organizations, and they represent the first attempt to calculate the whole experimental sequence, from bundle to containment. Quantitative agreement between the various calculations was not good, but the particular models in the codes responsible for the disagreements were mostly identified. A consensus view was formed as to how the test would proceed. It was found that successful execution of the test will require a different operating procedure than had been assumed here. Critical areas requiring close attention are the need to devise a strategy for the power and flow in the bundle that takes account of uncertainties in the modelling and the shroud conductivity, and the need to develop a reliable method of achieving the desired thermal-hydraulic conditions in the containment
HEU benchmark calculations and LEU preliminary calculations for IRR-1
International Nuclear Information System (INIS)
Caner, M.; Shapira, M.; Bettan, M.; Nagler, A.; Gilat, J.
2004-01-01
We performed neutronics calculations for the Soreq Research Reactor, IRR-1. The calculations were done for the purpose of upgrading and benchmarking our codes and methods. The codes used were mainly WIMS-D/4 for cell calculations and the three-dimensional diffusion code CITATION for full core calculations. The experimental flux was obtained by gold wire activation methods and compared with our calculated flux profile. The IRR-1 is loaded with highly enriched uranium fuel assemblies of the plate type. In the framework of preparation for conversion to low-enrichment fuel, additional calculations were done assuming the presence of fresh LEU fuel. In these preliminary calculations we investigated the effect of the increased U-238 loading, and the corresponding uranium density, on criticality and flux distributions. (author)
The International Criticality Safety Benchmark Evaluation Project
International Nuclear Information System (INIS)
Briggs, B. J.; Dean, V. F.; Pesic, M. P.
2001-01-01
In order to properly manage the risk of a nuclear criticality accident, it is important to establish the conditions under which such an accident becomes possible for any activity involving fissile material. Only when this information is known is it possible to establish the likelihood of actually achieving such conditions. It is therefore important that criticality safety analysts have confidence in the accuracy of their calculations. Confidence in analytical results can only be gained through comparison of those results with experimental data. The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the US Department of Energy. The project was managed through the Idaho National Engineering and Environmental Laboratory (INEEL), but involved nationally known criticality safety experts from Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Savannah River Technology Center, Oak Ridge National Laboratory and the Y-12 Plant, Hanford, Argonne National Laboratory, and the Rocky Flats Plant. An International Criticality Safety Data Exchange component was added to the project during 1994, and the project became what is currently known as the International Criticality Safety Benchmark Evaluation Project (ICSBEP). Representatives from the United Kingdom, France, Japan, the Russian Federation, Hungary, Kazakhstan, Korea, Slovenia, Yugoslavia, Spain, and Israel are now participating in the project. In December of 1994, the ICSBEP became an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency's (OECD-NEA) Nuclear Science Committee. The United States currently remains the lead country, providing most of the administrative support. The purpose of the ICSBEP is to: (1) identify and evaluate a comprehensive set of critical benchmark data; (2) verify the data, to the extent possible, by reviewing original and subsequently revised documentation, and by talking with the
International Nuclear Information System (INIS)
Obara, Toru; Morozov, A.G.; Kevrolev, V.V.; Kuznetsov, V.V.; Treschalin, S.A.; Lukin, A.V.; Terekhin, V.A.; Sokolov, Yu.A.; Kravchenko, V.G.
2000-01-01
Benchmark calculations were performed for critical experiments at the FKBN-M facility at RFNC-VNIITF, Russia, using the JENDL-3.2 nuclear data library and the continuous-energy Monte Carlo code MVP. The fissile materials were high-enriched uranium and plutonium, with polyethylene used as moderator; the neutron spectrum was varied by changing the geometry. The MVP calculation results showed some discrepancies, which were discussed in terms of the reaction rates and η values obtained by MVP. The results suggest that the U-235 cross sections may have different error trends in the fast and thermal energy regions, and that the Pu-239 cross section may have some error in the high-energy region. (author)
Introduction to 'International Handbook of Criticality Safety Benchmark Experiments'
International Nuclear Information System (INIS)
Komuro, Yuichi
1998-01-01
The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organization for Economic Cooperation and Development-Nuclear Energy Agency (OECD-NEA). 'International Handbook of Criticality Safety Benchmark Experiments' was prepared and is updated annually by the working group of the project. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate the calculation techniques they use. The author briefly introduces this informative handbook and encourages Japanese engineers in charge of nuclear criticality safety to use it. (author)
Benchmark calculation of subchannel analysis codes
International Nuclear Information System (INIS)
1996-02-01
In order to evaluate the analysis capabilities of various subchannel codes used in the thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and major findings obtained by the calculations were as follows: (1) As for single-phase flow mixing experiments between two channels, the calculated water temperature distributions along the flow direction agreed with experimental results when the turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) As for two-phase flow mixing experiments between two channels, in high water flow rate cases, the calculated distributions of air and water flows in each channel agreed well with the experimental results. In low water flow cases, on the other hand, the air mixing rates were underestimated. (3) As for two-phase flow mixing experiments among multiple channels, the calculated mass velocities at channel exit under steady-state conditions agreed with experimental values within about 10%. However, the predictive errors of exit qualities were as high as 30%. (4) As for critical heat flux (CHF) experiments, two different results were obtained: one code indicated that CHFs calculated using the KfK or EPRI correlations agreed well with the experimental results, while another suggested that the CHFs were well predicted using the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) As for droplet entrainment and deposition experiments, the predictive capability was significantly increased by improving the correlations. On the other hand, a remarkable discrepancy between codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high quality cases, while another overestimated the droplet flow rate and underestimated the liquid film flow rate in low quality cases. (J.P.N.)
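The role of the turbulent mixing coefficient in finding (1) can be illustrated with a deliberately minimal model. This is an assumption for illustration only, not any of the benchmarked codes' actual formulations: inter-channel energy exchange is taken proportional to the temperature difference, scaled by a mixing coefficient beta.

```python
# Minimal illustrative two-channel mixing model (hypothetical, gradient-driven
# exchange): tuning beta controls how quickly the channel temperatures
# converge along the flow direction, mimicking turbulent mixing calibration.

def mix_two_channels(t1, t2, beta, dx, steps):
    """March the two channel temperatures along the axial direction."""
    for _ in range(steps):
        q = beta * (t1 - t2) * dx   # energy exchanged across the gap per step
        t1, t2 = t1 - q, t2 + q
    return t1, t2

# Hot channel at 320 C, cold channel at 280 C; a larger beta would
# equalize the temperatures in fewer axial steps.
print(mix_two_channels(320.0, 280.0, beta=0.5, dx=0.1, steps=50))
```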
One dimensional benchmark calculations using diffusion theory
International Nuclear Information System (INIS)
Ustun, G.; Turgut, M.H.
1986-01-01
This is a comparative study using different one-dimensional diffusion codes available at our Nuclear Engineering Department. Some modifications have been made to the codes to fit the problems. One of the codes, DIFFUSE, solves the neutron diffusion equation in slab, cylindrical and spherical geometries using the forward-elimination, backward-substitution technique. The DIFFUSE code calculates criticality, critical dimensions, critical material concentrations and adjoint fluxes as well. It is used for the space- and energy-dependent neutron flux distribution. The whole scattering matrix can be used if desired. Normalisation of the relative flux distributions to the reactor power, plotting of the flux distributions, and leakage terms for the other two dimensions have been added. Some modifications have also been made to the code output. Two benchmark problems have been calculated with the modified version, and the results are compared with those of the BBD code, which is available at our department and uses the same calculation techniques. Agreement is quite good in results such as k-eff and the flux distributions for the two cases studied. (author)
IAEA sodium void reactivity benchmark calculations
International Nuclear Information System (INIS)
Hill, R.N.; Finck, P.J.
1992-01-01
In this paper, the IAEA 1992 ''Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core'' problem is evaluated. The proposed design is a large, axially heterogeneous, oxide-fueled fast reactor as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated
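A sodium void worth in such benchmarks is conventionally quoted as the reactivity difference between the voided and reference states. A short sketch with placeholder multiplication factors, not the benchmark's results:

```python
# Hedged illustration: void reactivity worth in pcm from two k_eff values.
# The k values below are placeholders; a sodium plenum above the core
# enhances leakage when voided, which can drive the net worth negative.

def void_worth_pcm(k_ref, k_void):
    """Delta-rho = (k_void - k_ref) / (k_void * k_ref), expressed in pcm."""
    return 1e5 * (k_void - k_ref) / (k_void * k_ref)

print(round(void_worth_pcm(k_ref=1.00000, k_void=1.00250), 1))  # -> 249.4
```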
Benchmark calculations of power distribution within assemblies
International Nuclear Information System (INIS)
Cavarec, C.; Perron, J.F.; Verwaerde, D.; West, J.P.
1994-09-01
The main objective of this benchmark is to compare different techniques for fine flux prediction based upon coarse-mesh diffusion or transport calculations. We proposed 5 ''core'' configurations including different assembly types (17 x 17 pins, ''uranium'', ''absorber'' or ''MOX'' assemblies), with different boundary conditions. The specification required results in terms of reactivity, pin-by-pin fluxes and production rate distributions. The proposal for these benchmark calculations was made by J.C. LEFEBVRE, J. MONDOT and J.P. WEST, and the specification (with nuclear data, assembly types, core configurations for 2D geometry and results presentation) was distributed to correspondents of the OECD Nuclear Energy Agency. 11 countries and 19 companies answered the exercise proposed by this benchmark. Heterogeneous calculations and homogeneous calculations were made. Various methods were used to produce the results: diffusion (finite differences, nodal...), transport (P ij , S n , Monte Carlo). This report presents an analysis and intercomparison of all the results received
Reactor calculation benchmark PCA blind test results
International Nuclear Information System (INIS)
Kam, F.B.K.; Stallmann, F.W.
1980-01-01
Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables
International Criticality Safety Benchmark Evaluation Project (ICSBEP) - ICSBEP 2015 Handbook
International Nuclear Information System (INIS)
Bess, John D.
2015-01-01
The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy (DOE). The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross-section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span approximately 69,000 pages and contain 567 evaluations with benchmark specifications for 4874 critical, near-critical or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points for each, and 207 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. New to the handbook are benchmark specifications for neutron activation foil and thermoluminescent dosimeter measurements performed at the SILENE critical assembly in Valduc, France, as part of a joint venture in 2010 between the US DOE and the French Alternative Energies and Atomic Energy Commission (CEA). A photograph of this experiment is shown on the front cover. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these
The International Criticality Safety Benchmark Evaluation Project (ICSBEP)
International Nuclear Information System (INIS)
Briggs, J.B.
2003-01-01
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organisation for Economic Cooperation and Development (OECD) - Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Israel, Spain, and Brazil are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments.' The 2003 Edition of the Handbook contains benchmark model specifications for 3070 critical or subcritical configurations that are intended for validating computer codes that calculate effective neutron multiplication and for testing basic nuclear data. (author)
BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS
Energy Technology Data Exchange (ETDEWEB)
Dewberry, R; Raymond Sigg, R; Vito Casella, V; Nitin Bhatt, N
2008-09-29
This report represents a description of compiled benchmark tests conducted to probe and to demonstrate the extensive utility of the Ortec ISOTOPIC γ-ray analysis computer program. The ISOTOPIC program performs analyses of γ-ray spectra applied to specific acquisition configurations in order to apply finite-geometry correction factors and sample-matrix-container photon absorption correction factors. The analysis program provides an extensive set of preset acquisition configurations to which the user can add relevant parameters in order to build the geometry and absorption correction factors that the program determines from calculus and from nuclear γ-ray absorption and scatter data. The Analytical Development Section field nuclear measurement group of the Savannah River National Laboratory uses the Ortec ISOTOPIC analysis program extensively for analyses of solid waste and process holdup applied to passive γ-ray acquisitions. Frequently the results of these γ-ray acquisitions and analyses are used to determine compliance with facility criticality safety guidelines. Another use of the results is to designate 55-gallon drum solid waste as qualified TRU waste or as low-level waste. Other examples of the application of the ISOTOPIC analysis technique to passive γ-ray acquisitions include analyses of standard waste box items and unique solid waste configurations. In many passive γ-ray acquisition circumstances the container and sample have sufficient density that the calculated energy-dependent transmission correction factors have intrinsic uncertainties in the range 15%-100%. This is frequently the case when assaying 55-gallon drums of solid waste with masses of up to 400 kg and when assaying solid waste in extensive unique containers. Often an accurate assay of the transuranic content of these containers is not required, but rather a good defensible designation as >100 nCi/g (TRU waste) or <100 nCi/g (low-level solid waste) is required. In
Benchmark calculations for fusion blanket development
International Nuclear Information System (INIS)
Sawan, M.E.; Cheng, E.T.
1985-01-01
Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li 17 Pb 83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio (TBR) to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with up to 20% discrepancies for thin natural Li 17 Pb 83 blankets
Benchmark calculations for fusion blanket development
International Nuclear Information System (INIS)
Sawan, M.L.; Cheng, E.T.
1986-01-01
Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li 17 Pb 83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease with up to 20% discrepancies for thin natural Li 17 Pb 83 blankets. (author)
Benchmark testing calculations for 232Th
International Nuclear Information System (INIS)
Liu Ping
2003-01-01
The cross sections of 232 Th from CNDC and JENDL-3.3 were processed with the NJOY97.45 code into the ACE format for the continuous-energy Monte Carlo code MCNP4C. The k eff values and central reaction rates based on CENDL-3.0, JENDL-3.3 and ENDF/B-6.2 were calculated using the MCNP4C code for the benchmark assembly, and comparisons with experimental results are given. (author)
FENDL-2 and associated benchmark calculations
International Nuclear Information System (INIS)
Pashchenko, A.B.; Muir, D.W.
1992-03-01
The present Report contains the Summary of the IAEA Advisory Group Meeting on ''The FENDL-2 and Associated Benchmark Calculations'' convened on 18-22 November 1991, at the IAEA Headquarters in Vienna, Austria, by the IAEA Nuclear Data Section. The Advisory Group Meeting Conclusions and Recommendations and the Report on the Strategy for the Future Development of the FENDL and on Future Work towards establishing FENDL-2 are also included in this Summary Report. (author). 1 ref., 4 tabs
ICSBEP-2007, International Criticality Safety Benchmark Experiment Handbook
International Nuclear Information System (INIS)
Blair Briggs, J.
2007-01-01
1 - Description: The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency (OECD-NEA). This handbook contains criticality safety benchmark specifications that have been derived from experiments that were performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material. The example calculations presented do not constitute a validation of the codes or cross section data. The work of the ICSBEP is documented as an International Handbook of Evaluated Criticality Safety Benchmark Experiments. Currently, the handbook spans over 42,000 pages and contains 464 evaluations representing 4,092 critical, near-critical, or subcritical configurations, 21 criticality alarm placement/shielding configurations with multiple dose points for each, and 46 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. The handbook is intended for use by criticality safety analysts to perform necessary validations of their calculational techniques and is expected to be a valuable tool for decades to come. The ICSBEP Handbook is available on DVD. You may request a DVD by completing the DVD Request Form on the internet. Access to the Handbook on the Internet requires a password. You may request a password by completing the Password Request Form. The Web address is: http://icsbep.inel.gov/handbook.shtml 2 - Method of solution: Experiments that are found
COVE 2A Benchmarking calculations using NORIA
International Nuclear Information System (INIS)
Carrigan, C.R.; Bixler, N.E.; Hopkins, P.L.; Eaton, R.R.
1991-10-01
Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs
Benchmark calculation programme concerning typical LMFBR structures
International Nuclear Information System (INIS)
Donea, J.; Ferrari, G.; Grossetie, J.C.; Terzaghi, A.
1982-01-01
This programme, which is part of a comprehensive activity aimed at resolving difficulties encountered in using design procedures based on ASME Code Case N-47, should build confidence in computer codes that are expected to provide a realistic prediction of LMFBR component behaviour. The calculations started with static analysis of typical structures made of nonlinear materials stressed by cyclic loads. Fluid-structure interaction analysis is also being considered. The reasons for and details of the different benchmark calculations are described, the results obtained are commented on, and future computational exercises are indicated
Benchmark calculations for VENUS-2 MOX -fueled reactor dosimetry
International Nuclear Information System (INIS)
Kim, Jong Kung; Kim, Hong Chul; Shin, Chang Ho; Han, Chi Young; Na, Byung Chan
2004-01-01
As part of a Nuclear Energy Agency (NEA) project, the benchmark for dosimetry calculations of the VENUS-2 MOX-fueled reactor was pursued. The goal of this benchmark is to test current state-of-the-art computational methods for calculating neutron flux to reactor components against the measured data of the VENUS-2 MOX-fueled critical experiments. The measured data to be used for this benchmark are the equivalent fission fluxes, which are the reaction rates divided by the U 235 fission spectrum averaged cross-section of the corresponding dosimeter. The present benchmark is, therefore, defined to calculate reaction rates and corresponding equivalent fission fluxes measured on the core-mid plane at specific positions outside the core of the VENUS-2 MOX-fueled reactor. This is a follow-up exercise to the previously completed UO 2 -fueled VENUS-1 two-dimensional and VENUS-3 three-dimensional exercises. The use of MOX fuel in LWRs presents different neutron characteristics, and this is the main interest of the current benchmark compared to the previous ones
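The equivalent-fission-flux definition quoted above (reaction rate divided by the fission-spectrum-averaged dosimeter cross section) translates directly into code. The dosimeter and the numbers below are hypothetical placeholders, not the benchmark's data:

```python
# Direct transcription of the definition in the abstract (placeholder values).

def equivalent_fission_flux(reaction_rate, sigma_fiss_avg):
    """reaction_rate in reactions/atom/s, sigma in cm^2 -> flux in n/cm^2/s."""
    return reaction_rate / sigma_fiss_avg

# Hypothetical threshold dosimeter with an assumed spectrum-averaged
# cross section of 0.1 barn = 1.0e-25 cm^2:
rr = 3.0e-15       # measured reaction rate, reactions/atom/s (placeholder)
sigma = 1.0e-25    # U-235-fission-spectrum-averaged cross section, cm^2
print(f"{equivalent_fission_flux(rr, sigma):.3e}")  # -> 3.000e+10
```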
International handbook of evaluated criticality safety benchmark experiments
International Nuclear Information System (INIS)
2010-01-01
The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency (OECD-NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span over 55,000 pages and contain 516 evaluations with benchmark specifications for 4,405 critical, near critical, or subcritical configurations, 24 criticality alarm placement/shielding configurations with multiple dose points for each, and 200 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these evaluations; however, benchmark specifications are not derived for such experiments (in some cases models are provided in an appendix). Approximately 770 experimental configurations are categorized as unacceptable for use as criticality safety benchmark experiments. Additional evaluations are in progress and will be added to the handbook in future editions.
Compilation report of VHTRC temperature coefficient benchmark calculations
Energy Technology Data Exchange (ETDEWEB)
Yasuda, Hideshi; Yamane, Tsuyoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1995-11-01
A calculational benchmark problem has been proposed by JAERI to an IAEA Coordinated Research Program, `Verification of Safety Related Neutronic Calculation for Low-enriched Gas-cooled Reactors`, to investigate the accuracy of calculation results obtained using the codes of the participating countries. This benchmark is based on assembly heating experiments at a pin-in-block type critical assembly, VHTRC. Requested calculation items are the cell parameters, effective multiplication factor, temperature coefficient of reactivity, reaction rates, fission rate distribution, etc. Seven institutions from five countries joined the benchmark work. Calculation results are summarized in this report with some remarks by the authors. Each institute analyzed the problem by applying the calculation code system prepared for the HTGR development of its own country. The values of the most important parameter, k{sub eff}, from all institutes agreed with each other and with the experimental ones within 1%. The temperature coefficients agreed within 13%. The values of several cell parameters calculated by some institutes did not agree with those of the others. It will be necessary to check the calculation conditions again to obtain better agreement. (J.P.N.).
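The isothermal temperature coefficient compared in this benchmark is, in the simplest form, the change in reactivity per unit temperature change, with reactivity derived from two k-eff values. A sketch under that definition (the k-eff values below are hypothetical, not VHTRC results):

```python
# Temperature coefficient of reactivity from two k_eff states:
#   rho = (k - 1) / k,   alpha_T = (rho2 - rho1) / (T2 - T1)
# The k_eff and temperature values below are illustrative only.

def reactivity(k):
    """Reactivity rho = (k - 1) / k (dimensionless)."""
    return (k - 1.0) / k

def temp_coefficient(k1, t1, k2, t2):
    """Isothermal temperature coefficient of reactivity, per kelvin."""
    return (reactivity(k2) - reactivity(k1)) / (t2 - t1)

# Hypothetical assembly: k_eff = 1.0100 at 300 K, 1.0050 at 400 K.
alpha = temp_coefficient(1.0100, 300.0, 1.0050, 400.0)
print(f"{alpha * 1e5:.2f} pcm/K")  # negative value -> negative feedback
```

A 13% spread between participants, as quoted in the abstract, would correspond to the ratio of the largest to smallest alpha obtained for the same temperature pair.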
Criticality benchmark comparisons leading to cross-section upgrades
International Nuclear Information System (INIS)
Alesso, H.P.; Annese, C.E.; Heinrichs, D.P.; Lloyd, W.R.; Lent, E.M.
1993-01-01
For several years, criticality benchmark calculations have been performed with COG. COG is a point-wise Monte Carlo code developed at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The principal consideration in developing COG was that the resulting calculation would be as accurate as the point-wise cross-section data, since no physics computational approximations were used. The objective of this paper is to report on COG results for criticality benchmark experiments, in concert with MCNP comparisons, which are resulting in corrections and upgrades to the point-wise ENDL cross-section data libraries. Benchmarking discrepancies reported here indicated difficulties in the Evaluated Nuclear Data Livermore (ENDL) cross sections for U-238 at thermal neutron energies. This led to a re-evaluation and selection of the appropriate cross-section values from the several cross-section sets available (ENDL, ENDF/B-V). Further cross-section upgrades are anticipated.
Heavy nucleus resonant absorption calculation benchmarks
International Nuclear Information System (INIS)
Tellier, H.; Coste, H.; Raepsaet, C.; Van der Gucht, C.
1993-01-01
The calculation of the space and energy dependence of heavy nucleus resonant absorption in a heterogeneous lattice is one of the hardest tasks in reactor physics. Because of the computer time and memory needed, it is impossible to represent the cross-section behavior finely in the resonance energy range for everyday computations. Consequently, reactor physicists use a simplified formalism, the self-shielding formalism. As no clean and detailed experimental results are available to validate the self-shielding calculations, Monte Carlo computations are used as a reference. These results, which were obtained with the TRIPOLI continuous-energy Monte Carlo code, constitute a set of numerical benchmarks that can be used to evaluate the accuracy of the techniques or formalisms included in any reactor physics code. Examples of such evaluations, for the new assembly code APOLLO2 and the slowing-down code SECOL, are given for cases of 238U and 232Th fuel elements.
FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark
International Nuclear Information System (INIS)
Sawan, M.E.
1994-12-01
During the IAEA Advisory Group Meeting on ''Improved Evaluations and Integral Data Testing for FENDL'' held in Garching near Munich, Germany in the period 12-16 September 1994, the Working Group II on ''Experimental and Calculational Benchmarks on Fusion Neutronics for ITER'' recommended that a calculational benchmark representative of the ITER design should be developed. This report describes the neutronics and shielding calculational benchmark available for scientists interested in performing analysis for this benchmark. (author)
LCEs for Naval Reactor Benchmark Calculations
International Nuclear Information System (INIS)
W.J. Anderson
1999-01-01
The purpose of this engineering calculation is to document the MCNP4B2LV evaluations of Laboratory Critical Experiments (LCEs) performed as part of the Disposal Criticality Analysis Methodology program. LCE evaluations documented in this report were performed for 22 different cases with varied design parameters. Some of these LCEs (10) are documented in existing references (Ref. 7.1 and 7.2), but were re-run for this calculation file using more neutron histories. The objective of this analysis is to quantify the MCNP4B2LV code system's ability to accurately calculate the effective neutron multiplication factor (k eff ) for various critical configurations. These LCE evaluations support the development and validation of the neutronics methodology used for criticality analyses involving Naval reactor spent nuclear fuel in a geologic repository
HELIOS calculations for UO2 lattice benchmarks
International Nuclear Information System (INIS)
Mosteller, R.D.
1998-01-01
Calculations for the ANS UO2 lattice benchmark have been performed with the HELIOS lattice-physics code and six of its cross-section libraries. The results obtained from the different libraries permit conclusions to be drawn regarding the adequacy of the energy group structures and of the ENDF/B-VI evaluation for 238U. Scandpower A/S, the developer of HELIOS, provided Los Alamos National Laboratory with six different cross-section libraries. Three of the libraries were derived directly from Release 3 of ENDF/B-VI (ENDF/B-VI.3) and differ only in the number of groups (34, 89, or 190). The other three libraries are identical to the first three except for a modification to the cross sections for 238U in the resonance range.
Energy Technology Data Exchange (ETDEWEB)
J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori
2009-09-01
High-quality integral benchmark experiments have always been a priority for criticality safety. However, interest in integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of future criticality safety needs to support next generation reactor and advanced fuel cycle concepts. The importance of drawing upon existing benchmark data is becoming more apparent because of dwindling availability of critical facilities worldwide and the high cost of performing new experiments. Integral benchmark data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the International Handbook of Reactor Physics Benchmark Experiments are widely used. Benchmark data have been added to these two handbooks since the last Nuclear Criticality Safety Division Topical Meeting in Knoxville, Tennessee (September 2005). This paper highlights these additions.
Criticality Benchmark Results Using Various MCNP Data Libraries
International Nuclear Information System (INIS)
Frankle, Stephanie C.
1999-01-01
A suite of 86 criticality benchmarks has recently been implemented in MCNP as part of the nuclear data validation effort. These benchmarks have been run using two sets of MCNP continuous-energy neutron data: ENDF/B-VI based data through Release 2 (ENDF60) and the ENDF/B-V based data. New evaluations were completed for ENDF/B-VI for a number of the important nuclides, such as the isotopes of H, Be, C, N, O, Fe, Ni, 235,238U, 237Np, and 239,240Pu. When examining the results of these calculations for the five major categories of 233U, intermediate-enriched 235U (IEU), highly enriched 235U (HEU), 239Pu, and mixed metal assemblies, we find the following: (1) the new evaluations for 9Be, 12C, and 14N show no net effect on k_eff; (2) there is a consistent decrease in k_eff for all of the solution assemblies for ENDF/B-VI due to 1H and 16O, moving k_eff further from the benchmark value for uranium solutions and closer to the benchmark value for plutonium solutions; (3) k_eff decreased for the ENDF/B-VI Fe isotopic data, moving the calculated k_eff further from the benchmark value; (4) k_eff decreased for the ENDF/B-VI Ni isotopic data, moving the calculated k_eff closer to the benchmark value; (5) the W data remained unchanged and tended to calculate slightly higher than the benchmark values; (6) for metal uranium systems, the ENDF/B-VI data for 235U tend to decrease k_eff while the 238U data tend to increase k_eff, with the net result depending on the energy spectrum and material specifications for the particular assembly; (7) for more intermediate-energy systems, the changes in the 235,238U evaluations tend to increase k_eff; for the mixed-graphite and normal-uranium-reflected assembly, a large increase in k_eff due to changes in the 238U evaluation moved the calculated k_eff much closer to the benchmark value; (8) there is little change in k_eff for the uranium solutions due to the new 235,238U evaluations; and (9) there is little change in k_eff
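Comparisons like those enumerated above are conventionally reported as the difference between calculated and benchmark k-eff in pcm (1 pcm = 1e-5 in k). A small sketch of that bookkeeping (case names and k-eff values are invented for illustration, not results from this suite):

```python
# Difference between calculated and benchmark k_eff, expressed in pcm.
# The library labels and k_eff values below are hypothetical examples.

def delta_pcm(k_calc, k_bench):
    """Return (k_calc - k_bench) in pcm (1 pcm = 1e-5 in k)."""
    return (k_calc - k_bench) * 1e5

cases = {
    "HEU metal sphere, library A": (0.99820, 1.00000),
    "HEU metal sphere, library B": (0.99950, 1.00000),
    "Pu solution,      library A": (1.00310, 1.00000),
}
for name, (k_calc, k_bench) in cases.items():
    print(f"{name}: {delta_pcm(k_calc, k_bench):+8.0f} pcm")
```

A positive entry means the calculation overpredicts criticality relative to the benchmark; moving "closer to the benchmark value," as the abstract puts it, means the magnitude of this difference shrinks.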
Standard Guide for Benchmark Testing of Light Water Reactor Calculations
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...
International Nuclear Information System (INIS)
Dos Santos, Adimir; Siqueira, Paulo de Tarso D.; Andrade e Silva, Graciete Simões; Grant, Carlos; Tarazaga, Ariel E.; Barberis, Claudia
2013-01-01
In 2008, the Atomic Energy National Commission (CNEA) of Argentina and the Brazilian Institute of Energetic and Nuclear Research (IPEN), under the frame of the Nuclear Energy Argentine-Brazilian Agreement (COBEN), included, among many others, the project “Validation and Verification of Calculation Methods used for Research and Experimental Reactors”. At that time, it was established that the validation was to be performed with models implemented in the deterministic codes HUEMUL and PUMA (cell and reactor codes) developed by CNEA, and those implemented in MCNP by CNEA and IPEN. The data necessary for these validations would correspond to theoretical-experimental reference cases in the research reactor IPEN/MB-01, located in São Paulo, Brazil. On the Argentine side, the staff of the Reactor and Nuclear Power Studies group (SERC) of CNEA performed calculations with deterministic models (HUEMUL-PUMA) and probabilistic methods (MCNP), modeling a great number of physical situations of the reactor, which had previously been studied and modeled by members of the Center of Nuclear Engineering of IPEN, whose results were extensively provided to CNEA. In this paper, results for critical configurations are shown. (author)
Benchmark calculation of nuclear design code for HCLWR
International Nuclear Information System (INIS)
Suzuki, Katsuo; Saji, Etsuro; Gakuhari, Kazuhiko; Akie, Hiroshi; Takano, Hideki; Ishiguro, Yukio.
1986-01-01
In the calculation of the lattice cell for High Conversion Light Water Reactors, large differences in nuclear design parameters appear between the results obtained by various methods and nuclear data libraries. The validity of the calculation can be verified by critical experiments, but benchmark calculation is also efficient for estimating the validity over a wide range of lattice parameters and burnup, since not many measured data are available. The benchmark calculations were done by JAERI and MAPI, using SRAC and WIMS-E respectively. The problem covered a wide range of lattice parameters, i.e., from a tight lattice to the current PWR lattice. The comparison was made on the effective multiplication factor, conversion ratio, and reaction rate of each nuclide, including burnup and void effects. The difference in the results is largest for the tightest lattice, but even for that lattice, the difference in the effective multiplication factor is only 1.4%. The main cause of the difference is the neutron absorption rate of U-238 in the resonance energy region. The differences in the other nuclear design parameters and their causes were also grasped. (author)
Monte Carlo code criticality benchmark comparisons for waste packaging
International Nuclear Information System (INIS)
Alesso, H.P.; Annese, C.E.; Buck, R.M.; Pearson, J.S.; Lloyd, W.R.
1992-07-01
COG is a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The objective of this paper is to report on COG results for criticality benchmark experiments, both on a Cray mainframe and on an HP 9000 workstation. COG has recently been ported to workstations to improve its accessibility to a wider community of users. COG has some similarities to a number of other computer codes used in the shielding and criticality community. The recently introduced high-performance reduced instruction set (RISC) UNIX workstations provide computational power that approaches mainframes at a fraction of the cost. A version of COG is currently being developed for the Hewlett-Packard 9000/730 computer with a UNIX operating system. Subsequent porting operations will move COG to SUN, DEC, and IBM workstations. In addition, a CAD system for preparation of the geometry input for COG is being developed. In July 1977, Babcock & Wilcox Co. (B&W) was awarded a contract to conduct a series of critical experiments that simulated close-packed storage of LWR-type fuel. These experiments provided data for benchmarking and validating calculational methods used in predicting the k-effective of nuclear fuel storage in close-packed, neutron-poisoned arrays. Low-enriched UO2 fuel pins in water-moderated lattices in fuel storage represent a challenging criticality calculation for Monte Carlo codes, particularly when the fuel pins extend out of the water. COG and KENO calculational results for these criticality benchmark experiments are presented.
Calculation of the 5th AER dynamic benchmark with APROS
International Nuclear Information System (INIS)
Puska, E.K.; Kontio, H.
1998-01-01
The model used for calculation of the 5th AER dynamic benchmark with the APROS code is presented. In the calculation of the 5th AER dynamic benchmark, the three-dimensional neutronics model of APROS was used. The core was divided axially into 20 nodes according to the specifications of the benchmark, and each set of six identical fuel assemblies was placed into one one-dimensional thermal-hydraulic channel. The five-equation thermal-hydraulic model was used in the benchmark. The plant process and automation were described with a generic VVER-440 plant model created by IVO PE. (author)
Criticality safety benchmarking of PASC-3 and ECNJEF1.1
International Nuclear Information System (INIS)
Li, J.
1992-09-01
To validate the code system PASC-3 and the multigroup cross-section library ECNJEF1.1 for various applications, many benchmarks are required. This report presents the results of criticality safety benchmarking for five calculational and four experimental benchmarks. These benchmarks are related to the transport packaging of fissile materials such as spent fuel. The fissile nuclides in these benchmarks are 235U and 239Pu. The modules of PASC-3 used for the calculations are BONAMI, NITAWL and KENO.5A. The final results for the experimental benchmarks agree well with the experimental data. For the calculational benchmarks, the results presented here are in reasonable agreement with the results from other investigations. (author). 8 refs.; 20 figs.; 5 tabs
Benchmark assemblies of the Los Alamos Critical Assemblies Facility
International Nuclear Information System (INIS)
Dowdy, E.J.
1985-01-01
Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described
International Nuclear Information System (INIS)
Bouhaddane, A.; Farkas, G.; Hascik, J.; Slugen, V.
2015-01-01
The paper presents verification of selected nuclear data libraries with the aim of applying them to fast reactor calculations. More precise results were achieved for thermal neutron calculations, which corresponds with the demand for more precise nuclear data for fast reactors. However, the fast neutron calculations show some consistency, in particular between the ENDF/B-VII.1 and JENDL-4.0 nuclear data libraries. The results support the idea of preferring the newer ENDF/B-VII.1 over the previous version, ENDF/B-VII.0. Certainly, there are still some issues to be addressed, and there is potential to gain more conclusive results. Nevertheless, application of ENDF/B-VII.1 and JENDL-4.0 is expected for further calculations. (authors)
International Nuclear Information System (INIS)
Lara, Rafael G.; Maiorino, Jose R.
2013-01-01
This work aimed at the implementation and qualification of the MCNP code on a supercomputer of the Universidade Federal do ABC, so that a next-generation simulation tool may be available for precise calculations of nuclear reactors and systems subject to radiation. The implementation of this tool will have multidisciplinary applications, covering various areas of engineering (nuclear, aerospace, biomedical), radiation physics, and others.
MCNP and OMEGA criticality calculations
International Nuclear Information System (INIS)
Seifert, E.
1998-04-01
The reliability of OMEGA criticality calculations is shown by a comparison with calculations by the validated and widely used Monte Carlo code MCNP. The criticality of 16 assemblies with uranium as the fissile material is calculated with the codes MCNP (Version 4A, ENDF/B-V cross sections), MCNP (Version 4B, ENDF/B-VI cross sections), and OMEGA. Identical calculation models are used for the three codes. The results are compared with each other and with the experimental criticality of the assemblies. (orig.)
BN-600 Phase III benchmark calculations
International Nuclear Information System (INIS)
Hill, R.N.; Grimm, K.N.
2002-01-01
Calculations for a Hexagonal-Z model of the BN-600 reactor with a partial mixed-oxide loading, based on a joint IPPE/OKBM loading configuration that contained three uranium enrichment zones and one plutonium enrichment zone in the core, have been performed at ANL. Control-rod worths and reactivity feedback coefficients were calculated using both homogeneous and heterogeneous models. These values were calculated with either first-order perturbation theory methods (Triangle-Z geometry), nodal eigenvalue differences (Hexagonal-Z geometry), or Monte Carlo eigenvalue differences. Both spatially dependent and region-integrated values are shown.
IRIS core criticality calculations
International Nuclear Information System (INIS)
Jecmenica, R.; Trontl, K.; Pevec, D.; Grgic, D.
2003-01-01
Three-dimensional Monte Carlo computer code KENO-VI of CSAS26 sequence of SCALE-4.4 code system was applied for pin-by-pin calculations of the effective multiplication factor for the first cycle IRIS reactor core. The effective multiplication factors obtained by the above mentioned Monte Carlo calculations using 27-group ENDF/B-IV library and 238-group ENDF/B-V library have been compared with the effective multiplication factors achieved by HELIOS/NESTLE, CASMO/SIMULATE, and modified CORD-2 nodal calculations. The results of Monte Carlo calculations are found to be in good agreement with the results obtained by the nodal codes. The discrepancies in effective multiplication factor are typically within 1%. (author)
Impact of the 235U Covariance Data in Benchmark Calculations
International Nuclear Information System (INIS)
Leal, Luiz C.; Mueller, D.; Arbanas, G.; Wiarda, D.; Derrien, H.
2008-01-01
The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems
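The propagation step that TSUNAMI performs, folding multigroup covariance data with k-eff sensitivities, is the standard "sandwich rule": var(k)/k² = Sᵀ C S. A toy three-group sketch of that arithmetic (the sensitivity and covariance numbers are invented for illustration, not the ²³⁵U evaluation discussed above):

```python
# Sandwich rule for uncertainty propagation:
#   (dk/k)^2 = S^T C S
# where S holds the relative sensitivities (dk/k)/(dsigma/sigma) per
# energy group and C is the relative covariance matrix of the cross
# sections. The three-group numbers below are purely illustrative.
import numpy as np

S = np.array([0.30, 0.15, 0.05])            # relative sensitivities
C = np.array([[4.0, 1.0, 0.0],
              [1.0, 9.0, 2.0],
              [0.0, 2.0, 16.0]]) * 1e-6     # relative covariance matrix

var_k = S @ C @ S                           # relative variance of k
print(f"dk/k = {np.sqrt(var_k) * 100:.3f}%")
```

The off-diagonal covariance terms matter: dropping them here would change the result, which is why resonance-parameter covariance matrices, not just per-group variances, are carried through the evaluation.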
WIPP Benchmark calculations with the large strain SPECTROM codes
International Nuclear Information System (INIS)
Callahan, G.D.; DeVries, K.L.
1995-08-01
This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers, including ten clay seams, of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. The simplified heated room problem does, however, provide a calculational check case where the small-strain formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large-strain codes compare favorably with results from other codes used to solve the problems.
International Nuclear Information System (INIS)
2013-01-01
The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical experiment facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirement and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span nearly 66,000 pages and contain 558 evaluations with benchmark specifications for 4,798 critical, near critical or subcritical configurations, 24 criticality alarm placement/shielding configurations with multiple dose points for each and 200 configurations that have been categorised as fundamental physics measurements that are relevant to criticality safety applications. New to the Handbook are benchmark specifications for Critical, Bare, HEU(93.2)- Metal Sphere experiments referred to as ORSphere that were performed by a team of experimenters at Oak Ridge National Laboratory in the early 1970's. A photograph of this assembly is shown on the front cover
Benchmarks for evaluation of shielding calculations
International Nuclear Information System (INIS)
Coelho, P.R.P.; Maiorino, J.R.
1989-01-01
The spatial-energy neutron distribution emerging from a laminated shield (stainless steel, polyethylene and lead) was measured by a fast-neutron spectrometer, and some experimental results were compared with those calculated by a network of codes. The source neutrons incident on the shield were 14 MeV neutrons from the H-3(d,n)He-4 reaction, produced by a Van de Graaff accelerator. Good radial symmetry of the neutron energy spectrum was verified experimentally, as well as moderation and attenuation effects for points located off the central axis of symmetry. These results indicate that the experiment can be well modelled in R-Z geometry. The neutron energy spectrum calculated by DOT 3.5 was compared with the measured spectrum, showing good agreement in both shape and magnitude (12% for the spectrum integrated from 2 to 16 MeV). (author)
Method of characteristics-based sensitivity calculations for an international PWR benchmark
International Nuclear Information System (INIS)
Suslov, I. R.; Tormyshev, I. V.; Komlev, O. G.
2013-01-01
A method for calculating the sensitivity of fractional-linear neutron flux functionals to transport equation coefficients is proposed. An implementation of the method based on the MOC code MCCG3D is developed. Sensitivity calculations of the fission intensity for an international PWR benchmark are performed. (authors)
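The idea of a sensitivity of a fractional-linear functional can be illustrated with a toy problem. The sketch below is not MCCG3D or the authors' method: it solves a one-group, fixed-source slab diffusion problem and estimates the relative sensitivity of R = ⟨h1, φ⟩ / ⟨h2, φ⟩ to the absorption cross section by central differences; all numbers are invented illustration values.

```python
import numpy as np

def solve_phi(sigma_a, D=1.0, n=50, width=10.0, source=1.0):
    """Solve -D phi'' + sigma_a phi = S on a slab with phi = 0 at both edges."""
    h = width / (n + 1)
    main = 2.0 * D / h**2 + sigma_a
    off = -D / h**2
    A = (np.diag(np.full(n, main))
         + np.diag(np.full(n - 1, off), 1)
         + np.diag(np.full(n - 1, off), -1))
    return np.linalg.solve(A, np.full(n, source))

def functional(phi, h1, h2):
    """Fractional-linear functional R = <h1, phi> / <h2, phi>."""
    return np.dot(h1, phi) / np.dot(h2, phi)

n = 50
h1 = np.linspace(0.5, 1.5, n)   # hypothetical response weights
h2 = np.ones(n)
sig = 0.30                      # hypothetical absorption cross section
dsig = 1e-4 * sig

R0 = functional(solve_phi(sig), h1, h2)
Rp = functional(solve_phi(sig + dsig), h1, h2)
Rm = functional(solve_phi(sig - dsig), h1, h2)

# Relative sensitivity coefficient S = (sigma/R) dR/dsigma
S = (sig / R0) * (Rp - Rm) / (2.0 * dsig)
print(f"R = {R0:.5f}, relative sensitivity to sigma_a = {S:.4f}")
```

Adjoint-based methods, as in the paper, obtain the same coefficient from one forward and one adjoint solve instead of repeated perturbed solves.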
Benchmark density functional theory calculations for nanoscale conductance
DEFF Research Database (Denmark)
Strange, Mikkel; Bækgaard, Iben Sig Buur; Thygesen, Kristian Sommer
2008-01-01
We present a set of benchmark calculations for the Kohn-Sham elastic transmission function of five representative single-molecule junctions. The transmission functions are calculated using two different density functional theory methods, namely an ultrasoft pseudopotential plane-wave code...
The ORSphere Benchmark Evaluation and Its Potential Impact on Nuclear Criticality Safety
Energy Technology Data Exchange (ETDEWEB)
John D. Bess; Margaret A. Marshall; J. Blair Briggs
2013-10-01
In the early 1970s, critical experiments using an unreflected metal sphere of highly enriched uranium (HEU) were performed with the focus of providing a “very accurate description…as an ideal benchmark for calculational methods and cross-section data files.” Two near-critical configurations of the Oak Ridge Sphere (ORSphere) were evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook). The results from those benchmark experiments were then compared with additional unmoderated and unreflected HEU metal benchmark experiment configurations currently found in the ICSBEP Handbook. For basic geometries (spheres, cylinders, and slabs) the eigenvalues calculated using MCNP5 and ENDF/B-VII.0 were within 3σ of their respective benchmark values. There appears to be generally good agreement between calculated and benchmark values for spherical and slab geometry systems. Cylindrical geometry configurations tended to calculate low, including the more complex bare HEU metal systems containing cylinders. The ORSphere experiments do not calculate within their 1σ uncertainty, and there is a possibility that the effect of the measured uncertainties for the GODIVA I benchmark may need to be reevaluated. There is significant scatter in the calculations for the highly correlated ORCEF cylinder experiments, which are constructed from close-fitting HEU discs and annuli. Selection of a nuclear data library can have a larger impact on calculated eigenvalue results than the variation found within calculations of a given experimental series, such as the ORCEF cylinders, using a single nuclear data set.
Benchmarking of HEU metal annuli critical assemblies with internally reflected graphite cylinder
Directory of Open Access Journals (Sweden)
Xiaobo Liu
2017-01-01
Three experimental configurations of critical assemblies, performed in 1963 at the Oak Ridge Critical Experiment Facility and assembled from HEU metal annuli of three different diameters (15-9 inches, 15-7 inches and 13-7 inches) with an internally reflected graphite cylinder, are evaluated and benchmarked. The experimental uncertainties (0.00057, 0.00058 and 0.00057, respectively) and the biases of the benchmark models (−0.00286, −0.00242 and −0.00168, respectively) were determined, and experimental benchmark k_eff results were obtained for both detailed and simplified models. The calculation results for both detailed and simplified models using MCNP6-1.0 and ENDF/B-VII.1 agree well with the benchmark experimental results, with differences of less than 0.2%. The benchmark results were accepted for inclusion in the ICSBEP Handbook.
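The bookkeeping behind such a comparison can be sketched as follows: the measured configuration is critical (k_eff = 1), and the simplifications of the benchmark model shift that value by the computed bias. The biases and uncertainties below are the values quoted in the abstract; the calculated k_eff values are hypothetical placeholders used only to illustrate the check against the quoted 0.2% agreement.

```python
# Benchmark-model expectation: k_bench = 1 (critical) + model bias.
# Biases/uncertainties from the abstract; k_calc values are invented.
cases = {
    "15-9 in annulus": {"bias": -0.00286, "unc": 0.00057, "k_calc": 0.9985},
    "15-7 in annulus": {"bias": -0.00242, "unc": 0.00058, "k_calc": 0.9979},
    "13-7 in annulus": {"bias": -0.00168, "unc": 0.00057, "k_calc": 0.9991},
}

for name, c in cases.items():
    k_bench = 1.0 + c["bias"]                 # benchmark-model k_eff
    diff = (c["k_calc"] - k_bench) / k_bench  # relative C/E deviation
    print(f"{name}: k_bench = {k_bench:.5f} +/- {c['unc']:.5f}, "
          f"C/E - 1 = {100 * diff:+.3f}%")
    assert abs(diff) < 0.002                  # the 0.2% agreement quoted above
```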
Effects of neutron data libraries and criticality codes on IAEA criticality benchmark problems
International Nuclear Information System (INIS)
Sarker, Md.M.; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka
1993-10-01
In order to compare the effects of neutron data libraries and criticality codes on thermal reactor (LWR) calculations, the IAEA criticality benchmark calculations have been performed. The experiments selected for this study are TRX-1 and TRX-2, which have simple geometric configurations. The reactor lattice calculation codes WIMS-D/4, MCNP-4, JACS (MGCL, KENO) and SRAC were used in the present calculations. The TRX cores were analyzed by WIMS-D/4 using the original WIMS library, and by MCNP-4, JACS (MGCL, KENO) and SRAC using libraries generated from the JENDL-3 and ENDF/B-IV nuclear data files. An intercomparison of the above code systems and cross-section libraries was performed by analyzing the LWR benchmark experiments TRX-1 and TRX-2. The TRX cores were also analyzed in supercritical and subcritical conditions, and these results were compared. In the critical condition, the results were in good agreement; in the supercritical and subcritical conditions, however, the differences between results obtained with different cross-section libraries became larger than in the critical condition. (author)
Critical power prediction by CATHARE2 of the OECD/NRC BFBT benchmark
Energy Technology Data Exchange (ETDEWEB)
Lutsanych, Sergii, E-mail: s.lutsanych@ing.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy); Sabotinov, Luben, E-mail: luben.sabotinov@irsn.fr [Institut for Radiological Protection and Nuclear Safety (IRSN), 31 avenue de la Division Leclerc, 92262 Fontenay-aux-Roses (France); D’Auria, Francesco, E-mail: francesco.dauria@dimnp.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy)
2015-03-15
Highlights: • We used the CATHARE code to calculate the critical power exercises of the OECD/NRC BFBT benchmark. • We considered both steady-state and transient critical power tests of the benchmark. • We used both the 1D and 3D features of the CATHARE code to simulate the experiments. • Acceptable prediction of the critical power and its location in the bundle is obtained using appropriate modelling. - Abstract: This paper presents an application of the French best-estimate thermal-hydraulic code CATHARE 2 to the critical power and departure from nucleate boiling (DNB) exercises of the international OECD/NRC BWR Fuel Bundle Test (BFBT) benchmark. The assessment activity was performed by comparing the code calculation results with experimental data from the Japanese Nuclear Power Engineering Corporation (NUPEC), available in the framework of the benchmark. Two-phase flow calculations predicting the critical power have been carried out for both steady-state and transient cases, using one-dimensional and three-dimensional modelling. The results of the steady-state critical power test calculations show the ability of the CATHARE code to predict reasonably well the critical power and its location, using appropriate modelling.
Calculation of a VVER-1000 reactor benchmark
Energy Technology Data Exchange (ETDEWEB)
Dourougie, C
1998-07-01
In the framework of the FMDP (Fissile Materials Disposition Program) between the US and Russia, a benchmark was tested. The pin cells contain low-enriched uranium (LEU) and mixed-oxide (MOX) fuels. The calculations were done for a wide range of temperatures and soluble boron concentrations, under accident conditions. (A.L.B.)
Benchmark Calculations of Noncovalent Interactions of Halogenated Molecules
Czech Academy of Sciences Publication Activity Database
Řezáč, Jan; Riley, Kevin Eugene; Hobza, Pavel
2012-01-01
Roč. 8, č. 11 (2012), s. 4285-4292 ISSN 1549-9618 R&D Projects: GA ČR GBP208/12/G016 Institutional support: RVO:61388963 Keywords : halogenated molecules * noncovalent interactions * benchmark calculations Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 5.389, year: 2012
Impact of the 235U covariance data in benchmark calculations
International Nuclear Information System (INIS)
Leal, Luiz; Mueller, Don; Arbanas, Goran; Wiarda, Dorothea; Derrien, Herve
2008-01-01
The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameter evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes' method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems. (authors)
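The last step, propagating a multigroup covariance matrix into an uncertainty on the multiplication factor, follows the standard "sandwich rule" var(k) = SᵀCS applied by TSUNAMI-type codes, where S holds relative sensitivities of k_eff to the multigroup cross sections and C is their relative covariance matrix. A minimal sketch with invented 3-group numbers (not actual 235U data):

```python
import numpy as np

# Hypothetical 3-group relative sensitivities of k_eff: (dk/k)/(dsigma/sigma)
S = np.array([0.12, 0.30, 0.45])

# Hypothetical relative 1-sigma uncertainties and group-to-group correlations
std = np.array([0.010, 0.008, 0.015])
corr = np.array([[1.0, 0.6, 0.2],
                 [0.6, 1.0, 0.5],
                 [0.2, 0.5, 1.0]])
C = corr * np.outer(std, std)   # relative covariance matrix

var_k = S @ C @ S               # sandwich rule: var(k) = S^T C S
print(f"relative k_eff uncertainty = {np.sqrt(var_k) * 100:.3f}%")
```

In practice S comes from an adjoint-based sensitivity calculation and C from processed ENDF/B covariance files (e.g. the PUFF-IV output mentioned above); the arithmetic is the same.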
International Nuclear Information System (INIS)
J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama
2008-01-01
Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next-generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR-06 are highlighted.
Benchmark calculations of thermal reaction rates. I - Quantal scattering theory
Chatfield, David C.; Truhlar, Donald G.; Schwenke, David W.
1991-01-01
The thermal rate coefficient for the prototype reaction H + H2 yields H2 + H with zero total angular momentum is calculated by summing, averaging, and numerically integrating state-to-state reaction probabilities calculated by time-independent quantum-mechanical scattering theory. The results are very carefully converged with respect to all numerical parameters in order to provide high-precision benchmark results for confirming the accuracy of new methods and testing their efficiency.
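The procedure described, summing state-to-state probabilities into a cumulative reaction probability N(E) and Boltzmann-averaging it, can be sketched numerically as k(T) ∝ ∫ N(E) exp(−E/kB·T) dE. The N(E) below is an invented smooth step at a model barrier, not H + H2 scattering data, and the reactant partition function is a placeholder:

```python
import numpy as np

kB = 3.166813e-6     # Boltzmann constant in hartree/K
h = 2.0 * np.pi      # Planck constant in atomic units (hbar = 1)

def N(E, E0=0.015, width=0.002):
    """Toy cumulative reaction probability: smooth step at barrier E0 (hartree)."""
    return 1.0 / (1.0 + np.exp(-(E - E0) / width))

def rate(T, Qr=1.0, Emax=0.2, n=20000):
    """Boltzmann average of N(E); Qr is a placeholder partition function."""
    E = np.linspace(0.0, Emax, n)
    dE = E[1] - E[0]
    integrand = N(E) * np.exp(-E / (kB * T))
    return integrand.sum() * dE / (h * Qr)   # simple quadrature

for T in (300.0, 600.0, 1000.0):
    print(f"T = {T:6.1f} K  k(T) = {rate(T):.3e} (arbitrary units)")
```

As expected for an activated reaction, the rate grows steeply with temperature; the benchmark work cited here converges every numerical parameter of the analogous (much harder) quantum calculation.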
Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation
International Nuclear Information System (INIS)
Bess, John D.; Briggs, J. Blair; Nigg, David W.
2009-01-01
One of the challenges that today's new workforce of nuclear criticality safety engineers face is the opportunity to provide assessment of nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.
JNC results of BN-600 benchmark calculation (phase 4)
International Nuclear Information System (INIS)
Ishikawa, Makoto
2003-01-01
The present work gives the results of JNC, Japan, for Phase 4 of the BN-600 core benchmark problem (Hex-Z fully MOX-fuelled core model) organized by the IAEA. The benchmark specification is based on the RCM report of the IAEA CRP on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of LMFR Reactivity Effects, Action 3.12' (calculations for the BN-600 fully MOX-fuelled core for subsequent transient analyses). The JENDL-3.2 nuclear data library was used to calculate 70-group ABBN-type group constants. Two cell models were applied for the fuel assembly and control rod calculations: a homogeneous model and a heterogeneous (cylindrical supercell) model. The basic diffusion calculation used a three-dimensional Hex-Z, 18-group model (CITATION code). Transport calculations were 18-group, three-dimensional (NSHEX code), based on the Sn-transport nodal method developed at JNC. The thermal power generated per fission was based on Sher's data, corrected on the basis of the ENDF/B-IV data library. Calculation results are presented in tables for intercomparison.
The fifth AER dynamic benchmark calculation with hextran-smabre
International Nuclear Information System (INIS)
Haemaelaeinen, A.; Kyrki-Rajamaeki, R.
1998-01-01
The first AER benchmark for the coupling of thermohydraulic codes and three-dimensional reactor dynamics core models is discussed. HEXTRAN 2.7 is used for the core dynamics and SMABRE 4.6 as the thermohydraulic model for the primary and secondary loops. The plant model for SMABRE is based mainly on two input models: the Loviisa model and a standard VVER-440/213 plant model. The primary circuit includes six separate loops, with 505 nodes and 652 junctions in total. The reactor pressure vessel is divided into six parallel channels. In the HEXTRAN calculation, 1/6 symmetry is used in the core. In the calculations, the nuclear data are based on the ENDF/B-IV library and have been evaluated with the CASMO-HEX code. The importance of the nuclear data was illustrated by repeating the benchmark calculation with three different data sets. Optimal extensive data valid from hot to cold conditions were not available for all types of fuel enrichment needed in this benchmark. (author)
International Nuclear Information System (INIS)
Miyoshi, Yoshinori; Yamamoto, Toshihiro; Nakamura, Takemi
2001-01-01
In order to validate criticality calculation codes and the related nuclear data libraries, a series of fundamental benchmark experiments on low-enriched uranyl nitrate solution have been performed with the Static Experiment Criticality Facility (STACY) at JAERI. The basic core, composed of a single tank with a water reflector, was used to accumulate systematic data with well-known experimental uncertainties. This paper presents an outline of the STACY core configurations, the standard calculation model, and calculation results obtained with a Monte Carlo code and the JENDL 3.2 nuclear data library. (author)
Meylianti S., Brigita
1999-01-01
Benchmarking has different meanings to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry/functional benchmarking, process/generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore it is important to know what kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...
JNC results of BN-600 benchmark calculation (phase 3)
International Nuclear Information System (INIS)
Ishikawa, M.
2002-01-01
The present work gives the results for Phase 3 of the BN-600 core benchmark problem, which addresses burnup and heterogeneity. The analytical method applied consisted of: the JENDL-3.2 nuclear data library; group constants (70-group, ABBN-type self-shielding transport factors); heterogeneous cell models for the fuel and control rods; a basic diffusion calculation (CITATION code); and transport theory with mesh-size correction (NSHEX code, based on the SN-transport nodal method developed by JNC). Burnup and heterogeneity calculation results are presented, obtained by applying both the diffusion and transport approaches at the beginning and end of cycle.
Benchmark calculations on nuclear characteristics of JRR-4 HEU core by SRAC code system
International Nuclear Information System (INIS)
Arigane, Kenji
1987-04-01
The reduced-enrichment program for the JRR-4 has been progressing based on JAERI's RERTR (Reduced Enrichment Research and Test Reactor) program. The SRAC (JAERI Thermal Reactor Standard Code System for Reactor Design and Analysis) is used for the neutronic design of the JRR-4 LEU core. This report describes benchmark calculations of the neutronic characteristics of the JRR-4 HEU core, performed in order to validate the calculation method. The benchmark calculations covered various neutronic characteristics such as excess reactivity, criticality, control rod worth, thermal neutron flux distribution, void coefficient, temperature coefficient, mass coefficient, kinetic parameters and the poisoning effect of Xe-135 buildup. As a result, it was confirmed that the calculated values are in satisfactory agreement with the measured values; the calculational method using SRAC was therefore validated. (author)
Criticality safety benchmark evaluation project: Recovering the past
Energy Technology Data Exchange (ETDEWEB)
Trumble, E.F.
1997-06-01
A very brief summary of the Criticality Safety Benchmark Evaluation Project of the Westinghouse Savannah River Company is provided in this paper. The purpose of the project is to provide a source of evaluated criticality safety experiments in an easily usable format. Another project goal is to search for any experiments that may have been lost or contain discrepancies, and to determine if they can be used. Results of evaluated experiments are being published as US DOE handbooks.
Monte Carlo benchmark calculations for 400MWTH PBMR core
International Nuclear Information System (INIS)
Kim, H. C.; Kim, J. K.; Kim, S. Y.; Noh, J. M.
2007-01-01
A large interest in high-temperature gas-cooled reactors (HTGRs) has arisen in connection with hydrogen production in recent years. In this study, as part of the work to establish a Monte Carlo computation system for HTGR core analysis, benchmark calculations for a pebble-type HTGR were carried out using the MCNP5 code. The core of the 400 MWth Pebble-Bed Modular Reactor (PBMR) was selected as the benchmark model. Recently, the IAEA CRP5 neutronics and thermal-hydraulics benchmark problem was proposed for testing existing HTGR methods for analyzing the neutronics and thermal-hydraulic behavior in the design and safety evaluation of the PBMR. This study deals with the neutronic benchmark problems proposed for the PBMR: fresh fuel and cold conditions (Case F-1), and first core loading with given number densities (Case F-2). After detailed MCNP modeling of the whole facility, benchmark calculations were performed. The spherical fuel region of a fuel pebble, which contains on average 15000 CFPs (coated fuel particles), was divided into cubic lattice elements, each containing one CFP at its center. In this study, the side length of each cubic lattice element needed to hold the same amount of fuel was calculated to be 0.1635 cm. The remaining volume of each lattice element was filled with graphite. All five concentric shells of each CFP were modeled. The PBMR annular core consists of approximately 452000 pebbles in the benchmark problems. In Case F-1, where the core was filled with only fresh fuel pebbles, a BCC (body-centered cubic) lattice model was employed in order to achieve a random-packing core with a packing fraction of 0.61. The BCC lattice was also employed in Case F-2, with the size of the moderator pebble increased in a manner that reproduces the specified F/M ratio of 1:2 while preserving the packing fraction of 0.61. The calculations used the ENDF/B-VI cross-section library and the sab2002 S(α,β) thermal scattering data.
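The quoted lattice-element side length can be cross-checked directly: divide the fuelled zone of a pebble into 15000 equal cubes, one CFP per cube. The fuel-zone radius of 2.5 cm is an assumption here (the standard PBMR pebble value, not stated in the abstract):

```python
import math

r_fuel = 2.5    # cm, fuelled region of a pebble (assumed standard PBMR value)
n_cfp = 15000   # coated fuel particles per pebble, on average

v_fuel = 4.0 / 3.0 * math.pi * r_fuel**3   # fuel-zone volume, cm^3
side = (v_fuel / n_cfp) ** (1.0 / 3.0)     # cube holding exactly one CFP

print(f"lattice element side = {side:.4f} cm")   # ~0.1635 cm, as in the text
```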
Benchmark calculations with simple phantom for neutron dosimetry (2)
International Nuclear Information System (INIS)
Yukio, Sakamoto; Shuichi, Tsuda; Tatsuhiko, Sato; Nobuaki, Yoshizawa; Hideo, Hirayama
2004-01-01
Benchmark calculations for high-energy neutron dosimetry were undertaken after SATIF-5. Energy deposition in a cylindrical phantom of 100 cm radius and 30 cm depth was calculated for irradiation by neutrons from 100 MeV to 10 GeV. Using the ICRU four-element soft tissue phantom and four single-element (hydrogen, carbon, nitrogen and oxygen) phantoms, the depth distributions of deposited energy, and their totals in the central region of the phantoms (within 1 cm radius) and in the whole phantoms (within 100 cm radius), were calculated. The results of the FLUKA, MCNPX, MARS, HETC-3STEP and NMTC/JAM codes were compared. It was found that FLUKA, MARS and NMTC/JAM gave almost the same results. For high-energy incident neutrons, the MCNPX results were the largest in total deposited energy and the HETC-3STEP results the smallest. (author)
TRX and UO2 criticality benchmarks with SAM-CE
International Nuclear Information System (INIS)
Beer, M.; Troubetzkoy, E.S.; Lichtenstein, H.; Rose, P.F.
1980-01-01
A set of thermal reactor benchmark calculations with SAM-CE, conducted at both MAGI and BNL, is described. Their purpose was both validation of the SAM-CE reactor eigenvalue capability developed by MAGI and a substantial contribution to the data testing of the ENDF/B-IV and ENDF/B-V libraries. This experience also resulted in increased calculational efficiency of the code, and an example is given. The benchmark analysis included the TRX-1 infinite cell using both ENDF/B-IV and ENDF/B-V cross-section sets, and calculations using ENDF/B-IV of the TRX-1 full core and the TRX-2 cell. BAPL-UO2-1 calculations were conducted for the cell using both ENDF/B-IV and ENDF/B-V, and for the full core with ENDF/B-V.
The International Criticality Safety Benchmark Evaluation Project on the Internet
International Nuclear Information System (INIS)
Briggs, J.B.; Brennan, S.A.; Scott, L.
2000-01-01
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in October 1992 by the US Department of Energy's (DOE's) Defense Programs and is documented in the transactions of numerous American Nuclear Society and International Criticality Safety Conferences. The work of the ICSBEP is documented in an Organization for Economic Cooperation and Development (OECD) handbook, the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The ICSBEP Internet site was established in 1996, and its address is http://icsbep.inel.gov/icsbep. A copy of the ICSBEP home page is shown in Fig. 1. The ICSBEP Internet site contains five primary links; internal sublinks to other relevant sites are also provided. A brief description of each of the five primary links is given.
Validating analysis methodologies used in burnup credit criticality calculations
International Nuclear Information System (INIS)
Brady, M.C.; Napolitano, D.G.
1992-01-01
The concept of allowing reactivity credit for the depleted (or burned) state of pressurized water reactor fuel in the licensing of spent fuel facilities introduces a new challenge to members of the nuclear criticality community. The primary difference in this analysis approach is the technical ability to calculate spent fuel compositions (or inventories) and to predict their effect on the system multiplication factor. Isotopic prediction codes are used routinely for in-core physics calculations and the prediction of radiation source terms for both thermal and shielding analyses, but represent an innovation for criticality specialists. This paper discusses two methodologies currently being developed to specifically evaluate isotopic composition and reactivity for the burnup credit concept. A comprehensive approach to benchmarking and validating the methods is also presented. This approach involves the analysis of commercial reactor critical data, fuel storage critical experiments, chemical assay isotopic data, and numerical benchmark calculations
International Nuclear Information System (INIS)
Yamamoto, Toshihiro; Miyoshi, Yoshinori; Kikuchi, Tsukasa; Watanabe, Shouichi
2002-01-01
The second series of critical experiments with 10% enriched uranyl nitrate solution using a 28-cm-thick slab core has been performed with the Static Experiment Critical Facility (STACY) of the Japan Atomic Energy Research Institute. Systematic critical data were obtained by changing the uranium concentration of the fuel solution from 464 to 300 gU/l under various reflector conditions. In this paper, thirteen critical configurations for water-reflected and unreflected cores are identified and evaluated. The effects of uncertainties in the experimental data on k_eff are quantified by sensitivity studies. Benchmark model specifications necessary to construct a calculational model are given. The uncertainties in k_eff included in the benchmark model specifications are approximately 0.1% Δk_eff. The thirteen critical configurations are judged to be acceptable benchmark data. Using the benchmark model specifications, sample calculation results are provided for several sets of standard codes and cross-section data. (author)
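A benchmark-model uncertainty of roughly 0.1% Δk_eff is the kind of number obtained by combining independent experimental uncertainty components in quadrature. The component breakdown below is invented for illustration; only the order of magnitude of the total matches the abstract:

```python
import math

# Hypothetical reactivity effects (dk_eff) of independent 1-sigma
# experimental uncertainties for a solution-fuel critical configuration.
components = {
    "uranium concentration": 0.00060,
    "solution density":      0.00040,
    "critical height":       0.00050,
    "tank dimensions":       0.00030,
    "nitric acid molarity":  0.00030,
}

# Independent components combine in quadrature.
total = math.sqrt(sum(v**2 for v in components.values()))
print(f"combined uncertainty = {total:.5f} (~{100 * total:.2f}% dk_eff)")
```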
A Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling
Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.
2009-03-01
Code validation is a permanent concern in computer modelling and has been addressed repeatedly in eddy-current and ultrasonic modelling. A good benchmark problem is sufficiently simple to be handled by various codes without strong requirements on geometry-representation capabilities; focuses on few or even a single aspect of the problem at hand, to facilitate interpretation and to avoid compound errors compensating one another; yields a quantitative result; and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modelling: the prediction of scattered radiation. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important for faithfully simulating the inspection situation. Our benchmark problem covers the wall-thickness range of 10 to 50 mm for single-wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared at the photon level, without a film model, which allows comparison with reference codes like MCNP. We compare the results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and confront them with results obtained with MCNP. The comparison with an analytical scatter model provides insight into the application domain where this kind of approach can successfully replace Monte Carlo calculations.
Criticality calculations for safety analysis
International Nuclear Information System (INIS)
Vellozo, S.O.
1981-01-01
Criticality studies of uranium nitrate and plutonium nitrate aqueous solutions were performed. For the uranium compound, three computer codes were used: GAMTEC-II, DTF-IV and KENO-IV. Water was used as the reflector, and the results obtained with the different codes were analyzed and compared with the 'Handbuch zur Kritikalität'. The cross sections and the cylindrical geometry were generated by the GAMTEC-II code. For the second compound, the thickness of the container holding the plutonium nitrate was studied in rectangular geometry with a concrete reflector. The effective multiplication factor was calculated with the GAMTEC-II and KENO-IV libraries. The results show many differences. (E.G.)
Benchmark calculations by KENO-Va using the JEF 2.2 library
Energy Technology Data Exchange (ETDEWEB)
Markova, L.
1994-12-01
This work is a contribution to the validation of the JEF-2.2 neutron cross-section library, following earlier published benchmark calculations performed to validate the previous version of the library, JEF-1.1. Several simple calculational problems and one experimental problem were chosen for criticality calculations. In addition, a realistic hexagonal arrangement of VVER-440 fuel assemblies in a spent-fuel cask was analyzed in a partly cylindrized model. All criticality calculations, carried out with the KENO-Va code using the JEF-2.2 neutron cross-section library in 172 energy groups, resulted in multiplication factors (k_eff) which were tabulated and compared with the results of other available calculations of the same problems. (orig.)
Criticality benchmarks for COG: A new point-wise Monte Carlo code
International Nuclear Information System (INIS)
Alesso, H.P.; Pearson, J.; Choi, J.S.
1989-01-01
COG is a new point-wise Monte Carlo code being developed and tested at LLNL for the Cray computer. It solves the Boltzmann equation for the transport of neutrons, photons, and (in future versions) charged particles. Techniques included in the code for modifying the random walk of particles make COG most suitable for solving deep-penetration (shielding) problems. However, its point-wise cross sections also make it effective for a wide variety of criticality problems. COG has some similarities to a number of other computer codes used in the shielding and criticality community. These include the Lawrence Livermore National Laboratory (LLNL) codes TART and ALICE, the Los Alamos National Laboratory code MCNP, the Oak Ridge National Laboratory codes 05R, 06R, KENO, and MORSE, the SACLAY code TRIPOLI, and the MAGI code SAM. Each code differs somewhat in its geometry input and its random-walk modification options. Validating COG consists in part of running benchmark calculations against critical experiments as well as against other codes. The objective of this paper is to present calculational results for a variety of critical benchmark experiments using COG, and to present the resulting code bias. Numerous benchmark calculations have been completed for a wide variety of critical experiments, involving both simple and complex physical problems. The COG results reported in this paper have been excellent.
CFD-calculations to a core catcher benchmark
International Nuclear Information System (INIS)
Willschuetz, H.G.
1999-04-01
Numerous experiments exist on corium spreading behaviour, but comparable data have so far not been available on the long-term behaviour of a corium melt spread in a core catcher. The difficulty lies in experimentally simulating the decay heat, which can be neglected for short-term events like relocation and spreading but must be considered when investigating the long-term behaviour. The German GRS therefore, together with Battelle Ingenieurtechnik, defined a benchmark problem in order to identify particular problems and differences of CFD codes simulating a spread corium melt and, from this, requirements for reasonable instrumentation of experiments to be performed later. First the finite-volume codes Comet 1.023, CFX 4.2 and CFX-TASCflow were used. To enable comparison with a finite-element code, calculations are now being performed at the Institute of Safety Research at the Forschungszentrum Rossendorf with the code ANSYS/FLOTRAN. For the benchmark calculations of stage 1, a pure liquid melt with internal heat sources was assumed, uniformly distributed over the area of the planned core catcher of an EPR plant. Using the standard k-ε turbulence model and assuming an initial state of a motionless superheated melt, several large convection rolls develop within the melt pool. The temperatures at the surface do not sink to the solidification level, owing to the enhanced convective heat transfer. The temperature gradients at the surface are relatively flat, while there are steep gradients at the bottom, where the no-slip condition is applied. But even at the bottom no solidification temperatures are observed. Although the problem is treated two-dimensionally in the ANSYS calculations, rather than three-dimensionally as in the finite-volume codes, there are no fundamental deviations from the results of the other codes. (orig.)
Benchmark Calculations on Halden IFA-650 LOCA Test Results
International Nuclear Information System (INIS)
Ek, Mirkka; Kekkonen, Laura; Kelppe, Seppo; Stengaard, J.O.; Josek, Radomir; Wiesenack, Wolfgang; Aounallah, Yacine; Wallin, Hannu; Grandjean, Claude; Herb, Joachim; Lerchl, Georg; Trambauer, Klaus; Sonnenburg, Heinz-Guenther; Nakajima, Tetsuo; Spykman, Gerold; Struzik, Christine
2010-01-01
The assessment of the consequences of a loss-of-coolant accident (LOCA) is to a large extent based on calculations carried out with codes especially developed for addressing the phenomena occurring during the transient. Since the time of the first LOCA experiments, which were largely conducted with fresh fuel, changes in fuel design, the introduction of new cladding materials and in particular the move to high burnup have not only generated a need to re-examine the LOCA safety criteria and to verify their continued validity, but also to confirm that codes perform appropriately, especially with respect to high-burnup phenomena influencing LOCA fuel behaviour. As part of international efforts, the OECD Halden Reactor Project implemented a test series to address particular LOCA issues. Based on recommendations of a group of experts from the US NRC, EPRI, EDF, FRAMATOME-ANP and GNF, the primary objectives of the experiments were defined as: 1. to measure the extent of fuel (fragment) relocation into the ballooned region and evaluate its possible effect on cladding temperature and oxidation; 2. to investigate the extent (if any) of 'secondary transient hydriding' on the inner side of the cladding above and below the burst region. The Halden LOCA series, using high-burnup fuel segments, contains test cases well suited for checking the ability of LOCA analysis codes to predict or reproduce the measurements and to provide clues as to where the codes need to be improved. The NEA Working Group on Fuel Safety (WGFS) therefore decided to conduct a code benchmark based on the Halden LOCA test series. Emphasis was on the codes' ability to predict or reproduce the thermal and mechanical response of fuel and cladding. Before starting the benchmark, participants were given the opportunity to tune their codes to the experimental system applied in the Halden LOCA tests. To this end, the data from the two commissioning runs were made available. The first of these runs went
Calculations of different transmutation concepts. An international benchmark exercise
International Nuclear Information System (INIS)
2000-01-01
In April 1996, the NEA Nuclear Science Committee (NSC) Expert Group on Physics Aspects of Different Transmutation Concepts launched a benchmark exercise to compare different transmutation concepts based on pressurised water reactors (PWRs), fast reactors, and an accelerator-driven system. The aim was to investigate the physics of complex fuel cycles involving reprocessing of spent PWR reactor fuel and its subsequent reuse in different reactor types. The objective was also to compare the calculated activities for individual isotopes as a function of time for different plutonium and minor actinide transmutation scenarios in different reactor systems. This report gives the analysis of results of the 15 solutions provided by the participants: six for the PWRs, six for the fast reactor and three for the accelerator case. Various computer codes and nuclear data libraries were applied. (author)
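The isotope activities compared in such a transmutation benchmark follow, in the simplest single-nuclide case, the exponential decay law A(t) = A0·e^(−λt). A minimal sketch of that relation (the half-life and starting activity below are illustrative, not benchmark values):

```python
import math

def activity(a0, half_life, t):
    """Activity of a single nuclide after time t (same units as half_life):
    A(t) = A0 * exp(-lambda * t), with lambda = ln(2) / T_half."""
    lam = math.log(2.0) / half_life
    return a0 * math.exp(-lam * t)

# Illustrative: a nuclide with a 14.3-year half-life, followed for one half-life
a = activity(100.0, 14.3, 14.3)   # ~ 50.0, half the initial activity
```

Real transmutation scenarios involve coupled production and decay chains (Bateman equations) solved by depletion codes; this sketch only shows the elementary building block of those activity-versus-time curves.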
Criticality Benchmark Analysis of Water-Reflected Uranium Oxyfluoride Slabs
International Nuclear Information System (INIS)
Marshall, Margaret A.; Bess, John D.
2009-01-01
A series of twelve experiments was conducted in the mid-1950s at the Oak Ridge National Laboratory Critical Experiments Facility to determine the critical conditions of a semi-infinite water-reflected slab of aqueous uranium oxyfluoride (UO2F2). A different slab thickness was used for each experiment. Results from the twelve experiments recorded in the laboratory notebook were published in Reference 1. Seven of the twelve experiments were determined to be acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. This evaluation will be available to handbook users not only for the validation of computer codes and integral cross-section data, but also for the re-evaluation of experimental data used in the ANSI/ANS-8.1 standard. This evaluation is important as part of the technical basis of the subcritical slab limits in ANSI/ANS-8.1. The original publication of the experimental results was used for the determination of bias and bias uncertainties for subcritical slab limits, as documented in Hugh Clark's paper 'Subcritical Limits for Uranium-235 Systems'.
Criticality benchmark guide for light-water-reactor fuel in transportation and storage packages
International Nuclear Information System (INIS)
Lichtenwalter, J.J.; Bowman, S.M.; DeHart, M.D.; Hopper, C.M.
1997-03-01
This report is designed as a guide for performing criticality benchmark calculations for light-water-reactor (LWR) fuel applications. The guide provides documentation of 180 criticality experiments with geometries, materials, and neutron interaction characteristics representative of transportation packages containing LWR fuel or uranium oxide pellets or powder. These experiments should benefit the U.S. Nuclear Regulatory Commission (NRC) staff and licensees in the validation of computational methods used for LWR fuel storage and transportation concerns. The experiments are classified by key parameters such as enrichment, water/fuel volume, hydrogen-to-fissile ratio (H/X), and lattice pitch. Groups of experiments with common features such as separator plates, shielding walls, and soluble boron are also identified. In addition, a sample validation using these experiments and a statistical analysis of the results are provided. Recommendations for selecting suitable experiments and for determining calculational bias and uncertainty are presented as part of this benchmark guide.
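The bias-and-uncertainty determination that such a validation guide calls for can be sketched, in its simplest trending-free form, as the mean and scatter of calculated-minus-experimental k_eff differences over the selected benchmark set. This is a generic illustration with made-up k_eff values, not the statistical method of the report itself:

```python
import statistics

def validation_bias(k_calc, k_exp):
    """Bias and its spread from paired calculated/experimental k_eff values
    (simple trending-free treatment; real guides add trending analysis
    against parameters such as H/X or enrichment)."""
    diffs = [c - e for c, e in zip(k_calc, k_exp)]
    bias = statistics.mean(diffs)            # mean(k_calc - k_exp)
    sigma = statistics.stdev(diffs)          # sample std. dev. of differences
    return bias, sigma

# Hypothetical code results against four critical (k_exp = 1) benchmarks
k_calc = [0.9978, 1.0012, 0.9969, 0.9991]
k_exp = [1.0000, 1.0000, 1.0000, 1.0000]
bias, sigma = validation_bias(k_calc, k_exp)   # bias ~ -0.00125
```

A negative bias of this kind would then be applied, together with an uncertainty margin, when setting the upper subcritical limit for package analyses.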
MCNP benchmark analyses of critical experiments for the Space Nuclear Thermal Propulsion program
International Nuclear Information System (INIS)
Selcow, E.C.; Cerbone, R.J.; Ludewig, H.; Mughabghab, S.F.; Schmidt, E.; Todosow, M.; Parma, E.J.; Ball, R.M.; Hoovler, G.S.
1993-01-01
Benchmark analyses have been performed of Particle Bed Reactor (PBR) critical experiments (CX) using the MCNP radiation transport code. The experiments have been conducted at the Sandia National Laboratory reactor facility in support of the Space Nuclear Thermal Propulsion (SNTP) program. The test reactor is a nineteen element water moderated and reflected thermal system. A series of integral experiments have been carried out to test the capabilities of the radiation transport codes to predict the performance of PBR systems. MCNP was selected as the preferred radiation analysis tool for the benchmark experiments. Comparison between experimental and calculational results indicate close agreement. This paper describes the analyses of benchmark experiments designed to quantify the accuracy of the MCNP radiation transport code for predicting the performance characteristics of PBR reactors
International Nuclear Information System (INIS)
Bakkari, B El; Bardouni, T El.; Erradi, L.; Chakir, E.; Meroun, O.; Azahra, M.; Boukhal, H.; Khoukhi, T El.; Htet, A.
2007-01-01
Full text: New releases of nuclear data files have been made available during recent years. The reference MCNP5 code (1) for Monte Carlo calculations is usually distributed with only one standard nuclear data library for neutron interactions, based on ENDF/B-VI. The main goal of this work is to process new neutron cross-section libraries in ACE continuous-energy format for the MCNP code, based on the most recent data files made available to the scientific community: ENDF/B-VII.b2, ENDF/B-VI (release 8), JEFF-3.0, JEFF-3.1, JENDL-3.3 and JEF-2.2. In our data treatment, we used the modular NJOY system (release 99.9) (2) in conjunction with its most recent updates. The performance of the processed pointwise cross-section libraries was assessed by means of criticality predictions and analysis of other integral parameters for a set of reactor benchmarks. Almost all the analyzed benchmarks were taken from the International Handbook of Evaluated Criticality Safety Benchmark Experiments of the OECD (3). Some revised benchmarks were taken from references (4,5). These benchmarks use Pu-239 or U-235 as the main fissionable material in different forms and enrichments, and cover various geometries. Monte Carlo calculations were performed in 3D with maximum detail of the benchmark descriptions, and the S(α,β) cross-section treatment was adopted in all thermal cases. The resulting one-standard-deviation confidence interval for the eigenvalue is typically ±13 to ±20 pcm [fr
Testing of cross section libraries for TRIGA criticality benchmark
International Nuclear Information System (INIS)
Snoj, L.; Trkov, A.; Ravnik, M.
2007-01-01
The influence of various up-to-date cross-section libraries on the multiplication factor of the TRIGA benchmark, as well as the influence of fuel composition on the multiplication factor of a system composed of various types of TRIGA fuel elements, was investigated. It was observed that k eff calculated using the ENDF/B-VII cross-section library is systematically higher than that obtained with the ENDF/B-VI cross-section library. The main contributions (∼220 pcm) are from 235 U and Zr. (author)
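Library-to-library differences like the one quoted above are conventionally expressed in pcm of reactivity. A minimal sketch of that conversion (the two k_eff values below are made up, not the TRIGA benchmark results):

```python
def reactivity_diff_pcm(k_a, k_b):
    """Reactivity difference between two k_eff values in pcm, using the
    reactivity convention (1/k_b - 1/k_a) * 1e5; the simpler
    (k_a - k_b) * 1e5 is also common for near-critical systems."""
    return (1.0 / k_b - 1.0 / k_a) * 1e5

# Hypothetical: same model run with two libraries
d = reactivity_diff_pcm(1.00250, 1.00030)   # ~ 219 pcm in favour of library A
```

At a Monte Carlo statistical uncertainty of a few tens of pcm, a systematic ~200 pcm shift between libraries is clearly resolvable, which is what makes such intercomparisons meaningful.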
DEFF Research Database (Denmark)
Grandjean, Philippe; Budtz-Joergensen, Esben
2013-01-01
BACKGROUND: Immune suppression may be a critical effect associated with exposure to perfluorinated compounds (PFCs), as indicated by recent data on vaccine antibody responses in children. Therefore, this information may be crucial when deciding on exposure limits. METHODS: Results obtained from follow-up of a Faroese birth cohort were used. Serum PFC concentrations were measured at age 5 years, and serum antibody concentrations against tetanus and diphtheria toxoids were obtained at age 7 years. Benchmark dose results were calculated in terms of serum concentrations for 431 children...
Benchmarking criticality analysis of TRIGA fuel storage racks.
Robinson, Matthew Loren; DeBey, Timothy M; Higginbotham, Jack F
2017-01-01
A criticality analysis was benchmarked to subcriticality measurements of the hexagonal fuel storage racks at the United States Geological Survey TRIGA MARK I reactor in Denver. These racks, which hold up to 19 fuel elements each, are arranged at 0.61 m (2 ft) spacings around the outer edge of the reactor. A 3-dimensional model of the racks was created using MCNP5, and the model was verified experimentally by comparison to measured subcritical multiplication data collected during an approach-to-critical loading of two of the racks. The validated model was then used to show that in the extreme condition where the entire circumference of the pool is lined with racks loaded with used fuel, the storage array is subcritical with a k value of about 0.71, well below the regulatory limit of 0.8. A model was also constructed of the rectangular 2×10 fuel storage array used in many other TRIGA reactors to validate the technique against the original TRIGA licensing subcritical analysis performed in 1966. The fuel used in this study was standard 20% enriched (LEU) aluminum- or stainless-steel-clad TRIGA fuel. Copyright © 2016. Published by Elsevier Ltd.
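The approach-to-critical loading mentioned above is classically analysed with the inverse-multiplication (1/M) method: as the loading approaches critical, 1/M tends to zero, and the x-intercept of a straight-line fit predicts the critical loading. The sketch below shows the standard technique on synthetic data, not the USGS measurement procedure or its numbers:

```python
def extrapolated_critical_loading(loadings, count_rates, source_rate):
    """Linear 1/M extrapolation: fit 1/M = source_rate / count_rate
    against loading and return the loading where 1/M = 0."""
    inv_m = [source_rate / c for c in count_rates]   # inverse multiplication
    n = len(loadings)
    mx = sum(loadings) / n
    my = sum(inv_m) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(loadings, inv_m)) \
        / sum((x - mx) ** 2 for x in loadings)
    intercept = my - slope * mx
    return -intercept / slope                        # x-intercept of the fit

# Synthetic data constructed so that 1/M = 1 - loading/20
# (i.e. critical at 20 fuel elements)
loads = [4, 8, 12, 16]
rates = [100.0 / (1 - l / 20.0) for l in loads]
crit = extrapolated_critical_loading(loads, rates, 100.0)   # -> 20.0
```

In a real subcritical benchmark the measured 1/M points never reach zero; the extrapolated intercept, together with the MCNP model, bounds how far from critical the loaded racks remain.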
An improved benchmark model for the Big Ten critical assembly - 021
International Nuclear Information System (INIS)
Mosteller, R.D.
2010-01-01
A new benchmark specification is developed for the BIG TEN uranium critical assembly. The assembly has a fast spectrum, and its core contains approximately 10 wt.% enriched uranium. Detailed specifications for the benchmark are provided, and results from the MCNP5 Monte Carlo code using a variety of nuclear-data libraries are given for this benchmark and two others. (authors)
Benchmark calculations in multigroup and multidimensional time-dependent transport
International Nuclear Information System (INIS)
Ganapol, B.D.; Musso, E.; Ravetto, P.; Sumini, M.
1990-01-01
It is widely recognized that reliable benchmarks are essential in many technical fields in order to assess the response of any approximation to the physics of the problem to be treated and to verify the performance of the numerical methods used. The best possible benchmarks are analytical solutions to paradigmatic problems where no approximations are actually introduced and the only error encountered is connected to the limitations of computational algorithms. Another major advantage of analytical solutions is that they allow a deeper understanding of the physical features of the model, which is essential for the intelligent use of complicated codes. In neutron transport theory, the need for benchmarks is particularly great. In this paper, the authors propose to establish accurate numerical solutions to some problems concerning the migration of neutron pulses. Use will be made of the space asymptotic theory, coupled with a Laplace transformation inverted by a numerical technique directly evaluating the inversion integral
Analysis and evaluation of critical experiments for validation of neutron transport calculations
International Nuclear Information System (INIS)
Bazzana, S.; Blaumann, H; Marquez Damian, J.I
2009-01-01
The calculation schemes, computational codes and nuclear data used in neutronic design require validation to obtain reliable results. In the nuclear criticality safety field this reliability also translates into a higher level of safety in procedures involving fissile material. The International Criticality Safety Benchmark Evaluation Project is an OECD/NEA activity led by the United States, in which participants from over 20 countries evaluate and publish criticality safety benchmarks. The product of this project is a set of benchmark experiment evaluations that are published annually in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. With the recent participation of Argentina, this information is now available for use by the neutron calculation and criticality safety groups in Argentina. This work presents the methodology used for the evaluation of experimental data, some results obtained by the application of these methods, and some examples of the data available in the Handbook. [es
New calculations for critical assemblies using MCNP4B
International Nuclear Information System (INIS)
Adams, A.A.; Frankle, S.C.; Little, R.C.
1997-07-01
A suite of 41 criticality benchmarks has been modeled using MCNP (version 4B). Most of the assembly specifications were obtained from the Cross Section Evaluation Working Group (CSEWG) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) compendiums of experimental benchmarks. A few assembly specifications were obtained from experimental papers. The suite contains thermal and fast assemblies, bare and reflected assemblies, and emphasizes 233 U, 235 U, 238 U, and 239 Pu. The values of k eff for each assembly in the suite were calculated using MCNP libraries derived primarily from release 2 of ENDF/B-V and release 2 of ENDF/B-VI. The results show that the new ENDF/B-VI.2 evaluations for H, O, N, B, 235 U, 238 U, and 239 Pu can have a significant impact on the values of k eff. In addition to the integral quantity k eff, several additional experimental measurements were performed and documented. These experimental measurements include central fission and reaction-rate ratios for various isotopes, and neutron leakage and flux spectra. They provide more detailed information about the accuracy of the nuclear data than k eff alone can. Comparison calculations were performed using both ENDF/B-V.2- and ENDF/B-VI.2-based data libraries. The purpose of this paper is to compare the results of these additional calculations with experimental data, and to use these results to assess the quality of the nuclear data.
Calculation of the fifth atomic energy research dynamic benchmark with APROS
International Nuclear Information System (INIS)
Puska Eija Karita; Kontio Harii
1998-01-01
The hand-out presents the model used for the calculation of the fifth Atomic Energy Research dynamic benchmark with the APROS code. In the calculation of the fifth Atomic Energy Research dynamic benchmark the three-dimensional neutronics model of APROS was used. The core was divided axially into 20 nodes according to the specifications of the benchmark, and each set of six identical fuel assemblies was placed into one one-dimensional thermal-hydraulic channel. The five-equation thermal-hydraulic model was used in the benchmark. The plant process and automation were described with a generic WWER-440 plant model created by IVO Power Engineering Ltd., Finland. (Author)
Criticality experiments to provide benchmark data on neutron flux traps
International Nuclear Information System (INIS)
Bierman, S.R.
1988-06-01
The experimental measurements covered by this report were designed to provide benchmark-type data on water-moderated LWR-type fuel arrays containing neutron flux traps. The experiments were performed at the US Department of Energy Hanford Critical Mass Laboratory, operated by Pacific Northwest Laboratory. The experimental assemblies consisted of 2 × 2 arrays of 4.31 wt % 235 U enriched UO 2 fuel rods, uniformly arranged in water on a 1.891 cm square center-to-center spacing. Neutron flux traps were created between the fuel units using metal plates containing varying amounts of boron. Measurements were made to determine the effect that boron loading and the distance between the fuel and the flux trap had on the amount of fuel required for criticality. Also, measurements were made, using the pulsed neutron source technique, to determine the effect of boron loading on the effective neutron multiplication constant. On two assemblies, reaction-rate measurements were made using solid-state track recorders to determine absolute fission rates in 235 U and 238 U. 14 refs., 12 figs., 7 tabs
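The pulsed neutron source technique referred to above infers the subcriticality of an assembly from the die-away of the neutron population after each source burst, which decays as ~exp(−αt) with the prompt decay constant α. A generic sketch of extracting α by a log-linear fit to synthetic die-away data (the numbers are illustrative, not the Hanford measurements):

```python
import math

def prompt_decay_constant(times, counts):
    """Fit counts ~ A * exp(-alpha * t) by least squares on log(counts)
    and return alpha (generic sketch of the pulsed-source analysis)."""
    ys = [math.log(c) for c in counts]
    n = len(times)
    mt = sum(times) / n
    my = sum(ys) / n
    slope = sum((t - mt) * (y - my) for t, y in zip(times, ys)) \
        / sum((t - mt) ** 2 for t in times)
    return -slope                                   # alpha, in 1/time units

# Synthetic die-away curve with alpha = 250 per second
ts = [0.000, 0.002, 0.004, 0.006, 0.008]
cs = [1e6 * math.exp(-250.0 * t) for t in ts]
alpha = prompt_decay_constant(ts, cs)               # recovers ~250
```

Combined with kinetics parameters, the fitted α is what relates the measured die-away to the effective multiplication constant as the boron loading of the flux traps is varied.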
Analysis of the international criticality benchmark no 19 of a realistic fuel dissolver
International Nuclear Information System (INIS)
Smith, H.J.; Santamarina, A.
1991-01-01
The dispersion of the order of 12000 pcm in the results of the international criticality fuel dissolver benchmark calculation, exercise OECD/19, showed the necessity of analysing the calculational methods used in this case. The APOLLO/PIC method, developed to treat this type of problem, permits us to propose international reference values. The problem studied here led us to investigate two supplementary parameters in addition to the double heterogeneity of the fuel: the reactivity variation as a function of moderation, and the effect of the size of the fuel pellets during dissolution. The following conclusions were obtained. The fast cross-section sets used by the international SCALE package introduce a bias of -3000 pcm in undermoderated lattices; more generally, the fast and resonance nuclear data in criticality codes are not sufficiently reliable. Geometries with micro-pellets led to an underestimation of reactivity at the end of dissolution of 3000 pcm in certain 1988 Sn calculations; this bias was avoided in the updated 1990 computation through correct use of the calculation tools. The reactivity introduced by the dissolved fuel is underestimated by 3000 pcm in contributions based on the standard NITAWL module in the SCALE code; more generally, the neutron balance analysis pointed out that the standard ND self-shielding formalism cannot account for 238 U resonance mutual self-shielding in the pellet-fissile liquor interaction. The combination of these three types of bias explains the underestimation, in all of the international contributions, of the reactivity of dissolver lattices by -2000 to -6000 pcm. The improved 1990 calculations confirm the need to use rigorous methods in the calculation of systems which involve the fuel double heterogeneity. This study points out the importance of periodic benchmarking exercises for probing the efficacy of criticality codes, data libraries and users.
Stationary PWR-calculations by means of LWRSIM at the NEACRP 3D-LWRCT benchmark
International Nuclear Information System (INIS)
Van de Wetering, T.F.H.
1993-01-01
Within the framework of participation in an international benchmark, calculations were executed with an adjusted version of the computer code Light Water Reactor SIMulation (LWRSIM) for three-dimensional reactor core calculations of pressurized water reactors. The 3-D LWR Core Transient Benchmark was set up with the aim of comparing 3-D computer codes for transient calculations in LWRs. Participation in the benchmark provided more insight into the accuracy of the code when applied to pressurized water reactors other than the Borssele nuclear power plant in the Netherlands, for which the code was originally developed and used.
Preparation of a criticality benchmark based on experiments performed at the RA-6 reactor
International Nuclear Information System (INIS)
Bazzana, S.; Blaumann, H; Marquez Damian, J.I
2009-01-01
The operation and fuel management of a reactor rely on neutronic modeling to predict its behavior in operational and accident conditions. This modeling uses computational tools and nuclear data that must be contrasted against benchmark experiments to ensure accuracy. These benchmarks have to be simple enough to model with the desired computer code and must have quantified and bounded uncertainties. The start-up of the RA-6 reactor, the final stage of the conversion and renewal project, allowed us to obtain experimental results with fresh fuel. In this condition the material composition of the fuel elements is precisely known, which contributes to a more precise modeling of the critical condition. These experimental results are useful for evaluating the precision of the models used to design the core, based on U 3 Si 2 and cadmium wires as burnable poisons, for which no data were previously available. The analysis of this information can be used to validate models for the analysis of similar configurations, which is necessary to follow the operational history of the reactor and perform fuel management. The analysis of the results and the generation of the model were done following the methodology established by the International Criticality Safety Benchmark Evaluation Project, which gathers and analyzes experimental data for critical systems. The results were very satisfactory, yielding a multiplication factor for the benchmark model of 1.0000 ± 0.0044 and a calculated value of 0.9980 ± 0.0001 using MCNP 5 and ENDF/B-VI. The use of as-built dimensions and compositions, together with the sensitivity analysis, allowed us to review the design calculations and analyze their precision, accuracy and error compensation. [es
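The agreement quoted in the abstract is typically judged by the calculated-minus-experimental (C − E) difference in pcm against the combined uncertainty. A minimal sketch of that comparison, using the two values quoted above and assuming uncorrelated uncertainties:

```python
def c_minus_e_pcm(k_calc, u_calc, k_exp, u_exp):
    """C - E in pcm with the combined one-sigma uncertainty
    (simple quadrature sum; assumes uncorrelated uncertainties)."""
    diff = (k_calc - k_exp) * 1e5
    u = ((u_calc * 1e5) ** 2 + (u_exp * 1e5) ** 2) ** 0.5
    return diff, u

# Values quoted in the abstract: benchmark 1.0000 +/- 0.0044,
# MCNP 5 / ENDF/B-VI calculation 0.9980 +/- 0.0001
diff, u = c_minus_e_pcm(0.9980, 0.0001, 1.0000, 0.0044)
# diff ~ -200 pcm, well within the ~440 pcm benchmark uncertainty
```

Since |C − E| is about half the one-sigma combined uncertainty, the calculation is consistent with the benchmark, which is the "very satisfactory" result the abstract reports.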
HTR-PROTEUS benchmark calculations. Pt. 1. Unit cell results LEUPRO-1 and LEUPRO-2
International Nuclear Information System (INIS)
Hogenbirk, A.; Stad, R.C.L. van der; Janssen, A.J.; Klippel, H.T.; Kuijper, J.C.
1995-09-01
In the framework of the IAEA Co-ordinated Research Programme (CRP) on 'Validation of Safety Related Physics Calculations for Low-Enriched (LEU) HTGRs', calculational benchmarks are performed on the basis of LEU-HTR pebble-bed critical experiments carried out in the PROTEUS facility at PSI, Switzerland. Of special interest is the treatment of the double heterogeneity of the fuel and the spherical fuel elements of these pebble-bed core configurations. Also of interest is the proper calculation of safety-related physics parameters like the effect of water ingress and control rod worth. This document describes the ECN results of the LEUPRO-1 and LEUPRO-2 unit-cell calculations performed with the codes WIMS-E, SCALE-4 and MCNP4A. Results for the LEUPRO-1 unit cell with 20% water ingress in the void are also reported, for both the singly and the doubly heterogeneous case. Emphasis is put on the intercomparison of the results obtained with the deterministic codes WIMS-E and SCALE-4 and the Monte Carlo code MCNP4A. The LEUPRO whole-core calculations will be reported later. (orig.)
Benchmark criticality experiments for fast fission configuration with high enriched nuclear fuel
International Nuclear Information System (INIS)
Sikorin, S.N.; Mandzik, S.G.; Polazau, S.A.; Hryharovich, T.K.; Damarad, Y.V.; Palahina, Y.A.
2014-01-01
Benchmark criticality experiments on a fast heterogeneous configuration with high-enriched uranium (HEU) nuclear fuel were performed using the 'Giacint' critical assembly of the Joint Institute for Power and Nuclear Research - Sosny (JIPNR-Sosny) of the National Academy of Sciences of Belarus. The critical assembly core comprised fuel assemblies without a casing for the 34.8 mm wrench. The fuel assemblies contain 19 fuel rods of two types: the first is metallic uranium fuel rods with 90% enrichment in U-235; the second is uranium dioxide fuel rods with 36% enrichment in U-235. The total fuel rod length is 620 mm, and the active fuel length is 500 mm. The outer fuel rod diameter is 7 mm, the wall is 0.2 mm thick, and the fuel material diameter is 6.4 mm. The cladding material is stainless steel. The side radial reflector consists of an inner layer of beryllium and an outer layer of stainless steel; the top and bottom axial reflectors are of stainless steel. The analysis of the experimental results obtained from these benchmark experiments, carried out by developing detailed calculation models and performing simulations for the different experiments, is presented. The sensitivity of the obtained results to the material specifications and the modeling details was examined. The analyses used the MCNP and MCU computer programs. This paper presents the experimental and analytical results. (authors)
3-D extension C5G7 MOX benchmark calculation using threedant code
International Nuclear Information System (INIS)
Kim, H.Ch.; Han, Ch.Y.; Kim, J.K.; Na, B.Ch.
2005-01-01
This work pursued the benchmark on deterministic 3-D MOX fuel assembly transport calculations without spatial homogenization (C5G7 MOX Benchmark Extension). The goal of this benchmark is to provide more thorough test results for the ability of currently available 3-D methods to handle the spatial heterogeneities of a reactor core. The benchmark requires solutions in the form of normalized pin powers as well as the eigenvalue for each of the control rod configurations: without rods, with A rods, and with B rods. In this work, the DANTSYS code package was applied to analyze the 3-D Extension C5G7 MOX benchmark problems. The THREEDANT code within the DANTSYS code package, which solves the 3-D transport equation in x-y-z and r-z-theta geometries, was employed to perform the benchmark calculations. To analyze the benchmark with the THREEDANT code, appropriate spatial and angular approximations were made. Several calculations were performed to investigate the effects of the different spatial approximations on the accuracy. The results from these sensitivity studies were analyzed and discussed. From the results, it is found that a 4×4 grid per pin cell is sufficiently refined, so that very little benefit is obtained by further mesh refinement. (authors)
A proposal of a benchmark for calculation of the power distribution next to the absorber
International Nuclear Information System (INIS)
Temesvari, E.; Hordosy, G.; Maraczy, Cs.; Hegyi, Gy.; Kereszturi, A.
1999-01-01
A new benchmark problem was formulated to consider the characteristics of the VVER-440 fuel assembly with enrichment zoning, i.e. to study the space dependence of the power distribution near a control assembly. A quite detailed geometry and the material composition of the fuel and control assemblies were modelled with the help of MCNP calculations at AEKI. The results of the MCNP calculations were built into the KARATE code system as new albedo matrices. A comparison of the KARATE and MCNP results for this benchmark is presented. (Authors)
Benchmark calculations on resonance absorption by 238U in a PWR pin-cell geometry
International Nuclear Information System (INIS)
Kruijf, W.J.M. de; Janssen, A.J.
1993-12-01
Very accurate Monte Carlo calculations with MCNP have been performed to serve as a reference for benchmark calculations on resonance absorption by 238U in a typical PWR pin-cell geometry. Calculations with the energy-pointwise slowing-down code ROLAIDS-CPM show that this code calculates the resonance absorption accurately. Calculations with the multigroup discrete-ordinates code XSDRN show that accurate results can only be achieved with a very fine energy mesh. (orig.)
Interactions of model biomolecules. Benchmark CC calculations within MOLCAS
Energy Technology Data Exchange (ETDEWEB)
Urban, Miroslav [Slovak University of Technology in Bratislava, Faculty of Materials Science and Technology in Trnava, Institute of Materials Science, Bottova 25, SK-917 24 Trnava, Slovakia and Department of Physical and Theoretical Chemistry, Faculty of Natural Scie (Slovakia); Pitoňák, Michal; Neogrády, Pavel; Dedíková, Pavlína [Department of Physical and Theoretical Chemistry, Faculty of Natural Sciences, Comenius University, Mlynská dolina, SK-842 15 Bratislava (Slovakia); Hobza, Pavel [Institute of Organic Chemistry and Biochemistry and Center for Complex Molecular Systems and biomolecules, Academy of Sciences of the Czech Republic, Prague (Czech Republic)
2015-01-22
We present results using the OVOS (Optimized Virtual Orbital Space) approach, aimed at enhancing the effectiveness of Coupled Cluster calculations. This approach reduces the total computer time required for large-scale CCSD(T) calculations by about a factor of ten when the original virtual space is reduced to about 50% of its original size, without affecting the accuracy. The method is implemented in the MOLCAS computer program. When combined with the Cholesky decomposition of the two-electron integrals and suitable parallelization, it allows calculations which were formerly prohibitively demanding. We focus on accurate calculations of the hydrogen-bonded and stacking interactions of model biomolecules. Interaction energies of the formaldehyde, formamide, benzene, and uracil dimers and the three-body contributions in the cytosine-guanine tetramer are presented. Other applications, such as the effect of solvation on the electron affinity of uracil, are also briefly mentioned.
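The roughly tenfold saving quoted above is consistent with the steep scaling of CCSD(T) in the number of virtual orbitals. A back-of-the-envelope sketch (hypothetical orbital counts and a simplified o^2 v^4 + o^3 v^4 cost model, not MOLCAS timings) gives the ideal factor:

```python
# Rough cost model for CCSD(T): iterative CCSD step ~ o^2 * v^4,
# perturbative (T) step ~ o^3 * v^4 (o = occupied, v = virtual orbitals).
def relative_cost(o, v):
    return o**2 * v**4 + o**3 * v**4

full = relative_cost(o=20, v=400)   # hypothetical molecule, full virtual space
ovos = relative_cost(o=20, v=200)   # virtual space truncated to 50% (OVOS)
print(full / ovos)                  # -> 16.0: the v^4 terms give a factor 2^4
```

The ideal factor of 16 for a 50% cut exceeds the observed ~10x, since integral transformation and I/O overheads do not share this scaling.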
Beretta Sergio; Dossi Andrea; Grove Hugh
2000-01-01
Due to their particular nature, benchmarking methodologies tend to exceed the boundaries of management techniques and to enter the territory of managerial culture. This culture is also destined to break into the accounting area, not only strongly supporting the possibility of fixing targets and of measuring and comparing performance (an aspect that is already innovative and worthy of attention), but also questioning one of the principles (or taboos) of the accounting or...
Calculation of Critical Temperatures by Empirical Formulae
Directory of Open Access Journals (Sweden)
Trzaska J.
2016-06-01
The paper presents formulae used to calculate the critical temperatures of structural steels. Equations for calculating the temperatures Ac1, Ac3, Ms and Bs were elaborated on the basis of the chemical composition of the steel, using the multiple regression method. Particular attention was paid to collecting the experimental data required to calculate the regression coefficients, including the preparation of the data for calculation. The empirical data set included more than 500 chemical compositions of structural steel and was prepared from information available in the literature on the subject.
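The multiple-regression procedure described above can be sketched as follows. The compositions and the Andrews-like linear coefficients used to generate the targets are illustrative stand-ins, not the paper's 500-steel data set or its published formulae:

```python
import numpy as np

# Hypothetical mini data set: columns = wt.% C, Mn, Cr; target = Ms [deg C].
X = np.array([[0.20, 0.80, 1.0],
              [0.40, 0.70, 0.5],
              [0.35, 1.20, 1.5],
              [0.15, 0.50, 0.2],
              [0.55, 0.90, 1.1]])
# Synthetic targets from an Andrews-like linear law, for illustration only:
y = 550.0 - 420.0 * X[:, 0] - 30.0 * X[:, 1] - 12.0 * X[:, 2]

A = np.column_stack([np.ones(len(X)), X])       # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # regression coefficients
ms_new = coef @ np.array([1.0, 0.30, 1.00, 0.8])  # predict a new steel
print(coef.round(1), round(ms_new, 1))
```

With real scatter in the data, the residuals reported by `lstsq` quantify the fit quality, which is why a large, carefully screened data set matters.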
Computer simulation of Masurca critical and subcritical experiments. Muse-4 benchmark. Final report
International Nuclear Information System (INIS)
2006-01-01
The efficient and safe management of spent fuel produced during the operation of commercial nuclear power plants is an important issue. In this context, partitioning and transmutation (P and T) of minor actinides and long-lived fission products can play an important role, significantly reducing the burden on geological repositories of nuclear waste and allowing their more effective use. Various systems, including existing reactors, fast reactors and advanced systems, have been considered to optimise the transmutation scheme. Recently, many countries have shown interest in accelerator-driven systems (ADS) due to their potential for transmutation of minor actinides. Much R and D work is still required in order to demonstrate their desired capability as a whole system, and the current analysis methods and nuclear data for minor actinide burners are not as well established as those for conventionally-fuelled systems. Recognizing a need for code and data validation in this area, the Nuclear Science Committee of the OECD/NEA has organised various theoretical benchmarks on ADS burners. Many improvements and clarifications concerning nuclear data and calculation methods have been achieved. However, some significant discrepancies for important parameters are not fully understood and still require clarification. Therefore, this international benchmark based on MASURCA experiments, which were carried out under the auspices of the EC 5th Framework Programme, was launched in December 2001 in co-operation with the CEA (France) and CIEMAT (Spain). The benchmark model was oriented to compare simulation predictions based on available codes and nuclear data libraries with experimental data related to TRU transmutation, criticality constants and the time evolution of the neutronic flux following source variation, within liquid metal fast subcritical systems. A total of 16 different institutions participated in this first experiment-based benchmark, providing 34 solutions. The large number
Calculations with ANSYS/FLOTRAN to a core catcher benchmark
International Nuclear Information System (INIS)
Willschuetz, H.G.
1999-01-01
There are numerous experiments on corium spreading behaviour, but comparable data have so far not been available on the long-term behaviour of corium spread in a core catcher. For the calculations, a pure liquid oxidic melt with a homogeneous internal heat source was assumed, distributed uniformly over the spreading area of the EPR core catcher. All codes applied the well-known k-ε turbulence model to simulate the turbulent flow regime of this melt configuration. While the FVM-code calculations were performed with three-dimensional models using a simple symmetry, the problem was modelled two-dimensionally with ANSYS due to limited CPU performance. In addition, the 2-D ANSYS results should allow a comparison for the planned second stage of the calculations, in which the behaviour of a segregated metal-oxide melt is to be examined. However, first estimates and pre-calculations showed that a 3-D simulation of that problem is not possible with any of the codes due to lacking computer performance. (orig.)
DRY TRANSFER FACILITY CRITICALITY SAFETY CALCULATIONS
International Nuclear Information System (INIS)
C.E. Sanders
2005-01-01
This design calculation updates the previous criticality evaluation for the fuel handling, transfer, and staging operations to be performed in the Dry Transfer Facility (DTF) including the remediation area. The purpose of the calculation is to demonstrate that operations performed in the DTF and RF meet the nuclear criticality safety design criteria specified in the ''Project Design Criteria (PDC) Document'' (BSC 2004 [DIRS 171599], Section 4.9.2.2), the nuclear facility safety requirement in ''Project Requirements Document'' (Canori and Leitner 2003 [DIRS 166275], p. 4-206), the functional/operational nuclear safety requirement in the ''Project Functional and Operational Requirements'' document (Curry 2004 [DIRS 170557], p. 75), and the functional nuclear criticality safety requirements described in the ''Dry Transfer Facility Description Document'' (BSC 2005 [DIRS 173737], p. 3-8). A description of the changes is as follows: (1) Update the supporting calculations for the various Category 1 and 2 event sequences as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2005 [DIRS 171429], Section 7). (2) Update the criticality safety calculations for the DTF staging racks and the remediation pool to reflect the current design. This design calculation focuses on commercial spent nuclear fuel (SNF) assemblies, i.e., pressurized water reactor (PWR) and boiling water reactor (BWR) SNF. U.S. Department of Energy (DOE) Environmental Management (EM) owned SNF is evaluated in depth in the ''Canister Handling Facility Criticality Safety Calculations'' (BSC 2005 [DIRS 173284]) and is also applicable to DTF operations. Further, the design and safety analyses of the naval SNF canisters are the responsibility of the U.S. Department of the Navy (Naval Nuclear Propulsion Program) and will not be included in this document. Also, note that the results for the Monitored Geologic Repository (MGR) Site specific Cask (MSC) calculations are limited to the
Monte Carlo calculation for a benchmark on interactive effects of gadolinium-poisoned pins in BWRs
International Nuclear Information System (INIS)
Borgia, M.G.; Casali, F.; Cepraga, D.
1985-01-01
K-infinity and burn-up calculations have been performed in the framework of a benchmark organized by the Reactor Physics Committee of the NEA. The calculations, performed with the Monte Carlo code KIM, concerned BWR lattices with UO2 fuel rodlets with and without gadolinium oxide
Benchmark calculations on residue production within the EURISOL DS project; Part I: thin targets
David, J.C; Boudard, A; Doré, D; Leray, S; Rapp, B; Ridikas, D; Thiollière, N
Report on benchmark calculations on residue production in thin targets. Calculations were performed using MCNPX 2.5.0 coupled to a selection of reaction models. The results were compared to nuclide production cross-sections measured at GSI in inverse kinematics
Benchmark calculations on residue production within the EURISOL DS project; Part II: thick targets
David, J.-C; Boudard, A; Doré, D; Leray, S; Rapp, B; Ridikas, D; Thiollière, N
Benchmark calculations on residue production using MCNPX 2.5.0. The calculations were compared to mass-distribution data for 5 different elements measured at ISOLDE, and to specific activities of 28 radionuclides at different positions along the thick target measured at Dubna.
Evaluation and validation of criticality codes for fuel dissolver calculations
International Nuclear Information System (INIS)
Santamarina, A.; Smith, H.J.; Whitesides, G.E.
1991-01-01
During the past ten years an OECD/NEA Criticality Working Group has examined the validity of criticality safety computational methods. International calculation tools which were shown to be valid in systems for which experimental data existed were demonstrated to be inadequate when extrapolated to fuel dissolver media. A theoretical study of the main physical parameters involved in fuel dissolution calculations was performed, i.e. range of moderation, variation of pellet size and the fuel double heterogeneity effect. The APOLLO/PIC method developed to treat this latter effect permits us to supply the actual reactivity variation with pellet dissolution and to propose international reference values. The disagreement among contributors' calculations was analyzed through a neutron balance breakdown, based on three-group microscopic reaction rates. The results pointed out that fast and resonance nuclear data in criticality codes are not sufficiently reliable. Moreover, the neutron balance analysis emphasized the inadequacy of the standard self-shielding formalism to account for 238U resonance mutual self-shielding in the pellet-fissile liquor interaction. The benchmark exercise has resolved a potentially dangerous inadequacy in dissolver calculations. (author)
JNC results of BFS-62-3A benchmark calculation (CRP: Phase 5)
International Nuclear Information System (INIS)
Ishikawa, M.
2004-01-01
The present work presents the results of JNC, Japan, for Phase 5 of the IAEA CRP benchmark problem (BFS-62-3A critical experiment). The analytical method of JNC is based on the nuclear data library JENDL-3.2 and the group constant set JFS-3-J3.2R (70 groups, ABBN-type self-shielding factor tables based on JENDL-3.2), with current-weighted multigroup transport cross-sections as effective cross-sections. The cell models for the BFS as-built tubes and pellets were: (Case 1) a homogeneous model based on the IPPE definition; (Case 2) homogeneous atomic densities equivalent to JNC's heterogeneous calculation, used only to cross-check the adjusted correction factors; (Case 3) a heterogeneous model based on JNC's evaluation, a one-dimensional plate-stretch model with Tone's background cross-section method (CASUP code). The basic diffusion calculation was done in 18 groups with a three-dimensional Hex-Z model (CITATION code), with isotropic diffusion coefficients (Cases 1 and 2) and Benoist's anisotropic diffusion coefficients (Case 3). For the sodium void reactivity, the exact perturbation theory was applied to both the basic and correction calculations: an ultra-fine energy group correction (approximately 100,000 group constants below 50 keV, and ABBN-type 175-group constants with shielding factors above 50 keV), and an 18-group transport-theory and mesh-size correction for the three-dimensional Hex-Z model (the MINIHEX code based on the S4-P0 transport method, developed by JNC). The effective delayed neutron fraction in the reactivity scale was fixed at 0.00623 by the IPPE evaluation. The analytical results for the criticality and the sodium void reactivity coefficient obtained by JNC are presented. JNC cross-checked the homogeneous model and the adjusted correction factors submitted by IPPE and confirmed that they are consistent. The JNC standard system showed quite satisfactory analytical results for the criticality and the sodium void reactivity of the BFS-62-3A experiment. JNC calculated the cross-section sensitivity coefficients of BFS
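Given the fixed effective delayed neutron fraction of 0.00623 mentioned above, converting a calculated k_eff into the dollar reactivity scale is a one-line transformation. A minimal sketch, with a hypothetical void-induced k_eff change (not a BFS-62-3A result):

```python
beta_eff = 0.00623   # effective delayed neutron fraction (IPPE evaluation)

def reactivity_dollars(k_eff):
    """rho = (k_eff - 1) / k_eff, expressed in dollars (rho / beta_eff)."""
    rho = (k_eff - 1.0) / k_eff
    return rho / beta_eff

# Hypothetical sodium-void step: k_eff drops from 1.0000 to 0.9990,
# i.e. about -100 pcm of reactivity, or roughly -0.16 dollars.
delta = reactivity_dollars(0.9990) - reactivity_dollars(1.0000)
print(round(delta, 3))
```

Because reported void worths depend directly on the assumed beta_eff, fixing it to a common value is what makes participants' reactivity scales comparable.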
Criticality criteria for submissions based on calculations
International Nuclear Information System (INIS)
Burgess, M.H.
1975-06-01
Calculations used in criticality clearances are subject to errors from various sources, and allowance must be made for these errors in assessing the safety of a system. A simple set of guidelines is defined, drawing attention to each source of error, and recommendations as to its application are made. (author)
Benchmark calculations for evaluation methods of gas volumetric leakage rate
International Nuclear Information System (INIS)
Asano, R.; Aritomi, M.; Matsuzaki, M.
1998-01-01
The containment function of radioactive material transport casks is essential for safe transportation, preventing radioactive materials from being released into the environment. Regulations such as the IAEA standards determine the limit of radioactivity that may be released. Since it is not practical for leakage tests to measure directly the radioactivity released from a package, gas volumetric leakage rates are proposed instead in the ANSI N14.5 and ISO standards. In our previous work, gas volumetric leakage rates for several kinds of gas from various leaks were measured, and two evaluation methods, a 'simple evaluation method' and a 'strict evaluation method', were proposed based on the results. The simple evaluation method considers the friction loss of laminar flow with the expansion effect; the strict evaluation method considers an exit loss in addition to the friction loss. In this study, four worked examples were completed for an assumed large spent fuel transport cask (Type B package) with a wet or dry cavity and at three transport conditions: normal transport with intact fuel, normal transport with failed fuel, and an accident in transport. The standard leakage rates and the criteria for two kinds of leak test were calculated for each example by each evaluation method. The following observations are made based upon the calculations and evaluations: the choked flow model of the ANSI method greatly overestimates the test criteria; the laminar flow models of both the ANSI and ISO methods slightly overestimate the test criteria; these two results are within the design margin for ordinary transport conditions, so all the methods are useful for the evaluation; for severe conditions such as failed fuel transportation, care should be taken in applying the choked flow model of the ANSI method. (authors)
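For laminar flow, the friction-loss part of such an evaluation reduces to the isothermal Hagen-Poiseuille relation for a capillary leak path. The sketch below uses this textbook form with hypothetical leak-path data; it is not the exact ANSI N14.5 or ISO correlation, and, like the 'simple' method, it includes no exit-loss term:

```python
from math import pi

def laminar_leak_rate(D, L, mu, p_up, p_down, p_ref):
    """Volumetric leak rate [m^3/s], referenced to pressure p_ref [Pa],
    for isothermal laminar flow of a gas through a capillary of diameter
    D [m] and length L [m]; mu is the dynamic viscosity [Pa*s].
    Friction loss only (no exit loss)."""
    return pi * D**4 * (p_up**2 - p_down**2) / (256.0 * mu * L * p_ref)

# Hypothetical leak path: 10 um diameter, 5 mm long, helium near 20 C,
# 2 bar upstream leaking to 1 bar, rate referenced to atmospheric pressure.
q = laminar_leak_rate(D=10e-6, L=5e-3, mu=1.96e-5,
                      p_up=2.0e5, p_down=1.0e5, p_ref=1.0e5)
print(f"{q:.3e} m^3/s")
```

The strong D**4 dependence is why a measured volumetric rate can stand in for a direct activity-release measurement: small changes in leak size dominate everything else.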
International Nuclear Information System (INIS)
Williams, M.L.; Stallmann, F.W.; Maerker, R.E.; Kam, F.B.K.
1983-01-01
An accurate determination of the damage fluence accumulated by reactor pressure vessels (RPVs) as a function of time is essential in order to evaluate vessel integrity for both pressurized thermal shock (PTS) transients and end-of-life considerations. The desired accuracy for neutron exposure parameters such as displacements per atom or fluence (E > 1 MeV) is of the order of 20 to 30%. However, such accuracies can realistically be obtained only by validating nuclear data and calculational methods in benchmark facilities. The purposes of this paper are to review the needs and requirements for benchmark experiments, to discuss the status of current benchmark experiments, to summarize the results and conclusions obtained so far, and to suggest areas where further benchmarking is needed
Evaluation and validation of criticality codes for fuel dissolver calculations
International Nuclear Information System (INIS)
Santamarina, A.; Smith, H.J.; Whitesides, G.E.
1991-01-01
During the past ten years an OECD/NEA Criticality Working Group has examined the validity of criticality safety computational methods. International calculation tools which were shown to be valid in systems for which experimental data existed were demonstrated to be inadequate when extrapolated to fuel dissolver media. The spread of the results in the international calculations amounted to ±12,000 pcm in the realistic fuel dissolver exercise No. 19 proposed by BNFL, and to ±25,000 pcm in benchmark No. 20, in which fissile material in solid form is surrounded by fissile material in solution. A theoretical study of the main physical parameters involved in fuel dissolution calculations was performed, i.e. range of moderation, variation of pellet size and the fuel double heterogeneity effect. The APOLLO/PIC method developed to treat this latter effect permits us to supply the actual reactivity variation with pellet dissolution and to propose international reference values. The disagreement among contributors' calculations was analyzed through a neutron balance breakdown, based on three-group microscopic reaction rates solicited from the participants. The results pointed out that fast and resonance nuclear data in criticality codes are not sufficiently reliable. Moreover, the neutron balance analysis emphasized the inadequacy of the standard self-shielding formalism (NITAWL in the international SCALE package) to account for 238U resonance mutual self-shielding in the pellet-fissile liquor interaction. Improvements in the updated 1990 contributions, as well as recent complementary reference calculations (MCNP, VIM, ultrafine slowing-down CGM calculations), confirm the need to use rigorous self-shielding methods in criticality design-oriented codes. 6 refs., 11 figs., 3 tabs
Benchmarking of MCNP for calculating dose rates at an interim storage facility for nuclear waste.
Heuel-Fabianek, Burkhard; Hille, Ralf
2005-01-01
During the operation of research facilities at Research Centre Jülich, Germany, nuclear waste is stored in drums and other vessels in an on-site interim storage building with concrete shielding on the side walls. Owing to the lack of a well-defined source, measured gamma spectra were unfolded to determine the photon flux on the surface of the containers. The dose rate simulation, including the effects of skyshine, using the Monte Carlo transport code MCNP is compared with the measured dosimetric data at several locations in the vicinity of the interim storage building. The MCNP data for direct radiation confirm the data calculated using a point-kernel method. However, a comparison of the modelled dose rates for direct radiation and skyshine with the measured data demonstrates the need for a more precise definition of the source. Both the measured and the modelled dose rates verified that the legal limit (<1 mSv/a) is met in the area outside the perimeter fence of the storage building to which members of the public have access. Using container surface data (gamma spectra) to define the source may be a useful tool for practical calculations and additionally for benchmarking of computer codes, provided the critical aspects discussed with respect to the source can be addressed adequately.
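The point-kernel comparison mentioned above rests on the attenuated inverse-square law. A minimal sketch for direct radiation only (no skyshine); the source strength, shield thickness and buildup factor below are illustrative assumptions, not values from the study:

```python
from math import exp, pi

def point_kernel_flux(S, r, mu_t=0.0, buildup=1.0):
    """Photon flux [1/cm^2/s] at distance r [cm] from an isotropic point
    source emitting S photons/s behind a shield of optical thickness mu_t
    (= attenuation coefficient * thickness); buildup accounts for
    scattered photons. Direct radiation only, no skyshine."""
    return S * buildup * exp(-mu_t) / (4.0 * pi * r**2)

# Hypothetical 1 GBq source emitting 0.85 photons/decay, observed 2 m
# away behind concrete with mu*t ~ 2.4 and an assumed buildup of ~3:
phi = point_kernel_flux(S=1.0e9 * 0.85, r=200.0, mu_t=2.4, buildup=3.0)
print(f"{phi:.0f} photons/cm^2/s")
```

Multiplying such a flux by an energy-dependent flux-to-dose conversion factor gives the dose rate; skyshine requires a transport calculation like the MCNP run described above.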
Benchmark Calculations for Electron Collisions with Complex Atoms
International Nuclear Information System (INIS)
Zatsarinny, Oleg; Bartschat, Klaus
2014-01-01
The B-spline R-matrix (BSR) approach [1,2] is based on the non-perturbative close-coupling method. As such it is, in principle, an exact expansion of the solution of the time-independent Schrödinger equation as an infinite sum/integral of N-electron target states coupled to the wave function of the scattering projectile. The N-electron target states, in turn, can in principle be calculated with almost arbitrary accuracy using sufficiently large configuration-interaction expansions and the correct interaction Hamiltonian. In practice, of course, the infinite expansions have to be cut off in some way and the exact Hamiltonian may not be available. In the collision part of the BSR method, the integral over the ionization continuum and the infinite sum over high-lying Rydberg states are replaced by a finite sum over square-integrable pseudo-states. Also, a number of inner shells are treated as (partially) inert, i.e., a minimum number of electrons is required in those subshells.
International Nuclear Information System (INIS)
Ford, W.E. III; Diggs, B.R.; Knight, J.R.; Greene, N.M.; Petrie, L.M.; Webster, C.C.; Westfall, R.M.; Wright, R.Q.; Williams, M.L.
1982-01-01
Characteristics and contents of the CSRL-V (Criticality Safety Reference Library based on ENDF/B-V data) 227-neutron-group AMPX master and pointwise cross-section libraries are described. Results obtained using CSRL-V to calculate performance parameters of selected thermal reactor and criticality safety benchmarks are discussed
VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4
Energy Technology Data Exchange (ETDEWEB)
Ellis, RJ
2001-02-02
The Task Force on Reactor-Based Plutonium Disposition, now an Expert Group, was set up through the Organization for Economic Cooperation and Development/Nuclear Energy Agency to facilitate technical assessments of burning weapons-grade plutonium mixed-oxide (MOX) fuel in U.S. pressurized-water reactors and Russian VVER nuclear reactors. More than ten countries participated to advance the work of the Task Force in a major initiative, which was a blind benchmark study to compare code benchmark calculations against experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At the Oak Ridge National Laboratory, the HELIOS-1.4 code was used to perform a comprehensive study of pin-cell and core calculations for the VENUS-2 benchmark.
A Critical Thinking Benchmark for a Department of Agricultural Education and Studies
Perry, Dustin K.; Retallick, Michael S.; Paulsen, Thomas H.
2014-01-01
Due to an ever changing world where technology seemingly provides endless answers, today's higher education students must master a new skill set reflecting an emphasis on critical thinking, problem solving, and communications. The purpose of this study was to establish a departmental benchmark for critical thinking abilities of students majoring…
Calculation of Single Cell and Fuel Assembly IRIS Benchmarks Using WIMSD5B and GNOMER Codes
International Nuclear Information System (INIS)
Pevec, D.; Grgic, D.; Jecmenica, R.
2002-01-01
IRIS reactor (an acronym for International Reactor Innovative and Secure) is a modular, integral, light water cooled, small to medium power (100-335 MWe/module) reactor, which addresses the requirements defined by the United States Department of Energy for Generation IV nuclear energy systems, i.e., proliferation resistance, enhanced safety, improved economics, and waste reduction. An international consortium led by Westinghouse/BNFL was created for the development of the IRIS reactor; it includes universities, institutes, commercial companies, and utilities. The Faculty of Electrical Engineering and Computing, University of Zagreb, joined the consortium in 2001 with the aim of taking part in IRIS neutronics design and safety analyses of IRIS transients. A set of neutronic benchmarks for the IRIS reactor was defined with the objective of comparing the results of all participants under exactly the same assumptions. In this paper, a calculation of Benchmark 44 for the IRIS reactor is described. Benchmark 44 is defined as a core depletion benchmark problem for specified IRIS reactor operating conditions (e.g., temperatures, moderator density) without feedback. Enriched boron, inhomogeneously distributed in the axial direction, is used as an integral fuel burnable absorber (IFBA). The aim of this benchmark was to enable a more direct comparison of the results of different code systems. Calculations of Benchmark 44 were performed using the modified CORD-2 code package, which consists of the WIMSD and GNOMER codes. WIMSD is a well-known lattice spectrum calculation code. GNOMER solves the neutron diffusion equation in three-dimensional Cartesian geometry by the Green's function nodal method. The following parameters were obtained in the Benchmark 44 analysis: effective multiplication factor as a function of burnup, nuclear peaking factor as a function of burnup, axial offset as a function of burnup, core-average axial power profile, core radial power profile, axial power profile for selected
International Nuclear Information System (INIS)
Corsi, F.
1985-01-01
In connection with the design of nuclear reactor components operating at elevated temperature, design criteria need a level of realism in the prediction of inelastic structural behaviour. This requirement leads to the necessity of developing non-linear computer programmes and, as a consequence, to the problems of verification and qualification of these tools. Benchmark calculations allow these two actions to be carried out, bringing at the same time an increased level of confidence in the analysis of complex phenomena and in inelastic design calculations. With the financial and programmatic support of the Commission of the European Communities (CEC), a programme of elasto-plastic benchmark calculations relevant to the design of structural components for the LMFBR has been undertaken by those Member States which are developing a fast reactor project. Four principal progressive aims were initially pointed out, which led to the decision to subdivide the benchmark effort into a series of four sequential calculation steps: Steps 1 to 4. The present document summarizes Step 1 of the benchmark exercise, derives some conclusions on Step 1 by comparing the results obtained with the various codes, and points out some concluding comments on this first action. It should be noted that although the work was designed to test the capabilities of the computer codes, another aim was to increase the skill of the users concerned
The solution of the LEU and MOX WWER-1000 calculation benchmark with the CARATE multicell code
International Nuclear Information System (INIS)
Hordosy, G.; Maraczy, Cs.
2000-01-01
Preparations for the disposition of weapons-grade plutonium in WWER-1000 reactors are in progress. The benchmark was defined by the Kurchatov Institute (S. Bychkov, M. Kalugin, A. Lazarenko) to assess the applicability of computer codes to weapons-grade MOX assembly calculations, within the framework of the 'Task Force on Reactor-Based Plutonium Disposition' of the OECD Nuclear Energy Agency. (Authors)
D.C. Blitz (David)
2011-01-01
Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine whether current benchmark asset pricing models adequately describe the cross-section of stock returns.
Criticality calculation method for mixer-settlers
International Nuclear Information System (INIS)
Gonda, Kozo; Aoyagi, Haruki; Nakano, Ko; Kamikawa, Hiroshi.
1980-01-01
A new criticality calculation code, MACPEX, has been developed to evaluate and manage the criticality of the process in extractors of the mixer-settler type. MACPEX can perform a combined calculation with the PUREX process calculation code MIXSET to obtain the neutron flux and the effective multiplication constant in the mixer-settlers. MACPEX solves the one-dimensional diffusion equation by the explicit difference method and the standard source-iteration technique. The characteristics of MACPEX are as follows. 1) Group constants in 4 energy groups are provided for the 239Pu-H2O solution, water, polyethylene and SUS 28. 2) The group constants of the 239Pu-H2O solution are given by functional formulae of the plutonium concentration, which is less than 50 g/l. 3) Two boundary conditions, the vacuum condition and the reflective condition, are available in the code. 4) The geometrical bucklings can be calculated for a given energy group and/or region by using the three-dimensional neutron flux profiles obtained from CITATION. 5) A buckling correction search can be carried out in order to obtain a desired k_eff. (author)
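The finite-difference and source-iteration scheme described for MACPEX can be illustrated with a one-group, one-dimensional analogue. This is a sketch only: the cross-sections are made up, a single energy group replaces MACPEX's four, and zero-flux boundaries stand in for its vacuum/reflective options:

```python
import numpy as np

# One-group 1-D slab: -D phi'' + sig_a phi = (1/k) nu_sig_f phi,
# discretized on interior mesh points with zero-flux boundaries.
D, sig_a, nu_sig_f = 0.9, 0.066, 0.07   # cm, 1/cm, 1/cm (illustrative)
width, n = 60.0, 120                    # slab width [cm], mesh intervals
h = width / n

main = 2.0 * D / h**2 + sig_a           # diagonal of the loss operator
off = -D / h**2                         # coupling to neighbouring points
A = (np.diag(np.full(n - 1, main))
     + np.diag(np.full(n - 2, off), 1)
     + np.diag(np.full(n - 2, off), -1))

phi = np.ones(n - 1)                    # flat initial flux guess
k = 1.0
for _ in range(200):                    # standard source (power) iteration
    phi_new = np.linalg.solve(A, nu_sig_f * phi)  # flux from fission source
    k = phi_new.sum() / phi.sum()       # eigenvalue estimate
    phi = phi_new / phi_new.max()       # renormalise the flux shape
print(round(k, 4))
```

The converged k agrees with the analytic one-group result nu_sig_f / (sig_a + D B^2) with buckling B = pi/width, which is the kind of consistency check item 4) above supports via CITATION flux profiles.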
Validation of the criticality calculation for fuel elements using the Gamtec 2 - Keno 2 and 4
International Nuclear Information System (INIS)
Teixeira, M.C.C.; Andrade, M.C. de
1990-01-01
For criticality safety in the fabrication, storage and transportation of fuel assemblies, a subcriticality analysis must be done. The calculations are performed at CDTN with the GAMTEC computer code, which homogenizes the fuel assembly in order to create a 16-group cross-section library, and with the KENO code, which determines the multiplication factor. To validate the calculational method, suitable benchmark experiments have been analysed. The results show that the calculational model overestimates k_eff when k_eff + 2σ is considered. (author)
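The k_eff + 2σ criterion used in such validations is easy to state in code. A minimal sketch; the upper subcritical limit of 0.95 and the sample values are illustrative, not this report's criterion or results:

```python
def acceptable(k_eff, sigma, usl=0.95):
    """Conservative acceptance test for a Monte Carlo criticality result:
    the calculated k_eff plus two standard deviations must stay below
    the upper subcritical limit (USL). 0.95 is an illustrative USL."""
    return k_eff + 2.0 * sigma < usl

print(acceptable(0.9412, 0.0021))   # True : 0.9454 < 0.95
print(acceptable(0.9475, 0.0018))   # False: 0.9511 > 0.95
```

Adding 2σ covers the Monte Carlo statistical uncertainty; a validated code bias, as established by benchmark comparisons like the one above, is normally folded into the USL as well.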
CANISTER HANDLING FACILITY CRITICALITY SAFETY CALCULATIONS
Energy Technology Data Exchange (ETDEWEB)
C.E. Sanders
2005-04-07
This design calculation revises and updates the previous criticality evaluation for the canister handling, transfer and staging operations to be performed in the Canister Handling Facility (CHF) documented in BSC [Bechtel SAIC Company] 2004 [DIRS 167614]. The purpose of the calculation is to demonstrate that the handling operations of canisters performed in the CHF meet the nuclear criticality safety design criteria specified in the ''Project Design Criteria (PDC) Document'' (BSC 2004 [DIRS 171599], Section 4.9.2.2), the nuclear facility safety requirement in ''Project Requirements Document'' (Canori and Leitner 2003 [DIRS 166275], p. 4-206), the functional/operational nuclear safety requirement in the ''Project Functional and Operational Requirements'' document (Curry 2004 [DIRS 170557], p. 75), and the functional nuclear criticality safety requirements described in the ''Canister Handling Facility Description Document'' (BSC 2004 [DIRS 168992], Sections 3.1.1.3.4.13 and 3.2.3). Specific scope of work contained in this activity consists of updating the Category 1 and 2 event sequence evaluations as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2004 [DIRS 167268], Section 7). The CHF is limited in throughput capacity to handling sealed U.S. Department of Energy (DOE) spent nuclear fuel (SNF) and high-level radioactive waste (HLW) canisters, defense high-level radioactive waste (DHLW), naval canisters, multicanister overpacks (MCOs), vertical dual-purpose canisters (DPCs), and multipurpose canisters (MPCs) (if and when they become available) (BSC 2004 [DIRS 168992], p. 1-1). It should be noted that the design and safety analyses of the naval canisters are the responsibility of the U.S. Department of the Navy (Naval Nuclear Propulsion Program) and will not be included in this document. In addition, this calculation is valid for
CANISTER HANDLING FACILITY CRITICALITY SAFETY CALCULATIONS
International Nuclear Information System (INIS)
C.E. Sanders
2005-01-01
This design calculation revises and updates the previous criticality evaluation for the canister handling, transfer and staging operations to be performed in the Canister Handling Facility (CHF) documented in BSC [Bechtel SAIC Company] 2004 [DIRS 167614]. The purpose of the calculation is to demonstrate that the handling operations of canisters performed in the CHF meet the nuclear criticality safety design criteria specified in the ''Project Design Criteria (PDC) Document'' (BSC 2004 [DIRS 171599], Section 4.9.2.2), the nuclear facility safety requirement in ''Project Requirements Document'' (Canori and Leitner 2003 [DIRS 166275], p. 4-206), the functional/operational nuclear safety requirement in the ''Project Functional and Operational Requirements'' document (Curry 2004 [DIRS 170557], p. 75), and the functional nuclear criticality safety requirements described in the ''Canister Handling Facility Description Document'' (BSC 2004 [DIRS 168992], Sections 3.1.1.3.4.13 and 3.2.3). Specific scope of work contained in this activity consists of updating the Category 1 and 2 event sequence evaluations as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2004 [DIRS 167268], Section 7). The CHF is limited in throughput capacity to handling sealed U.S. Department of Energy (DOE) spent nuclear fuel (SNF) and high-level radioactive waste (HLW) canisters, defense high-level radioactive waste (DHLW), naval canisters, multicanister overpacks (MCOs), vertical dual-purpose canisters (DPCs), and multipurpose canisters (MPCs) (if and when they become available) (BSC 2004 [DIRS 168992], p. 1-1). It should be noted that the design and safety analyses of the naval canisters are the responsibility of the U.S. Department of the Navy (Naval Nuclear Propulsion Program) and will not be included in this document. In addition, this calculation is valid for the current design of the CHF and may not reflect the ongoing design evolution of the facility
An integral nodal variational method for multigroup criticality calculations
International Nuclear Information System (INIS)
Lewis, E.E.; Tsoulfanidis, N.
2003-01-01
An integral formulation of the variational nodal method is presented and applied to a series of benchmark criticality problems. The method combines an integral transport treatment of the even-parity flux within the spatial node with an odd-parity spherical harmonics expansion of the Lagrange multipliers at the node interfaces. The response matrices that result from this formulation are compatible with those in the VARIANT code at Argonne National Laboratory. Either homogeneous or heterogeneous nodes may be employed. In general, for calculations requiring higher-order angular approximations, the integral method yields solutions with comparable accuracy while requiring substantially less CPU time and memory than the standard spherical harmonics expansion using the same spatial approximations. (author)
Isopiestic density law of actinide nitrates applied to criticality calculations
International Nuclear Information System (INIS)
Leclaire, Nicolas; Anno, Jacques; Courtois, Gerard; Poullot, Gilles; Rouyer, Veronique
2003-01-01
Up to now, criticality safety experts used density laws fitted to experimental data and applied them both inside and outside the measurement range. Depending on the case, such an approach could be wrong for nitrate solutions. Seven components are concerned: UO2(NO3)2, U(NO3)4, Pu(NO3)4, Pu(NO3)3, Th(NO3)4, Am(NO3)3 and HNO3. To overcome this problem, a new methodology based on the thermodynamic concept of mixtures of binary electrolyte solutions at constant water activity, the so-called 'isopiestic' solutions, has been developed by IRSN to calculate nitrate solution densities. This article briefly presents the theoretical aspects of the method, its qualification using benchmarks and its implementation in the IRSN graphical user interface. (author)
The fifth Atomic Energy Research dynamic benchmark calculation with HEXTRAN-SMABRE
International Nuclear Information System (INIS)
Haenaelaeinen, Anitta
1998-01-01
The fifth Atomic Energy Research dynamic benchmark is the first Atomic Energy Research benchmark for the coupling of thermohydraulic codes and three-dimensional reactor dynamics core models. At VTT, HEXTRAN 2.7 is used for the core dynamics and SMABRE 4.6 as the thermohydraulic model for the primary and secondary loops. The plant model for SMABRE is based mainly on two input models: the Loviisa model and the standard WWER-440/213 plant model. The primary circuit includes six separate loops, with 505 nodes and 652 junctions in total. The reactor pressure vessel is divided into six parallel channels. In the HEXTRAN calculation, 1/6 symmetry is used in the core. In the sequence of a main steam header break at the hot standby state, the liquid temperature decreases symmetrically at the core inlet, which leads to a return to power. In the benchmark, no isolations of the steam generators are assumed, and in the HEXTRAN-SMABRE calculation the maximum core power is about 38% of the nominal power at four minutes after the break opening. Due to boric acid in the high-pressure safety injection water, the power finally starts to decrease. The break flow is pure steam in the HEXTRAN-SMABRE calculation during the whole transient, even though the swell levels in the steam generators are very high due to flashing. Because of sudden peaks in the preliminary results for the steam generator heat transfer, the SMABRE drift-flux model was modified. The new model is a simplified version of the EPRI correlation based on test data. The modified correlation behaves smoothly. In the calculations, the nuclear data are based on the ENDF/B-IV library and have been evaluated with the CASMO-HEX code. The importance of the nuclear data was illustrated by repeating the benchmark calculation with three different data sets. Optimal extensive data valid from hot to cold conditions were not available for all fuel enrichments needed in this benchmark. (Author)
US/JAERI calculational benchmarks for nuclear data and codes intercomparison. Article 8
International Nuclear Information System (INIS)
Youssef, M.Z.; Jung, J.; Sawan, M.E.; Nakagawa, M.; Mori, T.; Kosako, K.
1986-01-01
Prior to analyzing the integral experiments performed at the FNS facility at JAERI, US and JAERI analysts agreed upon four calculational benchmark problems, proposed by JAERI, to intercompare results based on the various codes and databases used independently by the two countries. To compare codes, the same database (ENDF/B-IV) is used; to compare nuclear data libraries, common codes were applied. Some of the benchmarks chosen were geometrically simple and consisted of a single material, to clearly identify the sources of discrepancies and thus help in analyzing the integral experiments
Criticality calculations with MCNP trademark: A primer
International Nuclear Information System (INIS)
Harmon, C.D. II; Busch, R.D.; Briesmeister, J.F.; Forster, R.A.
1994-01-01
With the closure of many experimental facilities, the nuclear criticality safety analyst increasingly is required to rely on computer calculations to identify safe limits for the handling and storage of fissile materials. However, in many cases, the analyst has little experience with the specific codes available at his/her facility. This primer will help you, the analyst, understand and use the MCNP Monte Carlo code for nuclear criticality safety analyses. It assumes that you have a college education in a technical field. There is no assumption of familiarity with Monte Carlo codes in general or with MCNP in particular. Appendix A gives an introduction to Monte Carlo techniques. The primer is designed to teach by example, with each example illustrating two or three features of MCNP that are useful in criticality analyses. Beginning with a Quickstart chapter, the primer gives an overview of the basic requirements for MCNP input and allows you to run a simple criticality problem with MCNP. This chapter is not designed to explain either the input or the MCNP options in detail; but rather it introduces basic concepts that are further explained in following chapters. Each chapter begins with a list of basic objectives that identify the goal of the chapter, and a list of the individual MCNP features that are covered in detail in the unique chapter example problems. It is expected that on completion of the primer you will be comfortable using MCNP in criticality calculations and will be capable of handling 80 to 90 percent of the situations that normally arise in a facility. The primer provides a set of basic input files that you can selectively modify to fit the particular problem at hand
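The primer builds toward estimating keff from a series of neutron generations (cycles). As an illustration of the statistics involved, not of MCNP input itself, here is a minimal sketch of combining per-cycle keff estimates into a mean and standard error after discarding inactive cycles; the cycle values and skip count are invented for illustration.

```python
import math

def combine_keff_cycles(keff_cycles, skip=2):
    """Average the active-cycle keff estimates and report a standard error,
    discarding the first `skip` (inactive) cycles while the fission source
    converges -- a simplified sketch of what criticality codes report."""
    active = keff_cycles[skip:]
    n = len(active)
    mean = sum(active) / n
    var = sum((k - mean) ** 2 for k in active) / (n - 1)
    return mean, math.sqrt(var / n)

# Hypothetical per-cycle estimates; the first two are discarded as inactive.
k_mean, k_err = combine_keff_cycles([0.95, 0.98, 1.001, 0.999, 1.002, 0.998])
```

Production codes additionally check fission source convergence (for example, MCNP monitors Shannon entropy) before declaring cycles active.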
Criticality Calculations with MCNP6 - Practical Lectures
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3); Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3); Alwin, Jennifer Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3)
2016-11-29
These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The following are the lecture topics: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: Develop an input model for MCNP; Describe how cross section data impact Monte Carlo and deterministic codes; Describe the importance of validation of computer codes and how it is accomplished; Describe the methodology supporting Monte Carlo codes and deterministic codes; Describe pitfalls of Monte Carlo calculations; Discuss the strengths and weaknesses of Monte Carlo and Discrete Ordinates codes. The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present. In the context of these limitations, identify a fissile system for which a diffusion theory solution would be adequate.
Criticality Calculations with MCNP6 - Practical Lectures
International Nuclear Information System (INIS)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
2016-01-01
These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The following are the lecture topics: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: Develop an input model for MCNP; Describe how cross section data impact Monte Carlo and deterministic codes; Describe the importance of validation of computer codes and how it is accomplished; Describe the methodology supporting Monte Carlo codes and deterministic codes; Describe pitfalls of Monte Carlo calculations; Discuss the strengths and weaknesses of Monte Carlo and Discrete Ordinates codes. The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present. In the context of these limitations, identify a fissile system for which a diffusion theory solution would be adequate.
Calculations of IAEA-CRP-6 Benchmark Case 1 through 7 for a TRISO-Coated Fuel Particle
International Nuclear Information System (INIS)
Kim, Young Min; Lee, Y. W.; Chang, J. H.
2005-01-01
IAEA-CRP-6 is a coordinated research program of the IAEA on advances in HTGR fuel technology. The CRP examines aspects of HTGR fuel technology, ranging from design and fabrication to characterization, irradiation testing, performance modeling, as well as licensing and quality control issues. The benchmark section of the program treats simple analytical cases, pyrocarbon layer behavior, single TRISO-coated fuel particle behavior, and benchmark calculations of some irradiation experiments performed and planned. There are seventeen benchmark cases in the program in total. Member countries are participating in the benchmark calculations of the CRP with their own fuel performance analysis computer codes. Korea is also taking part in the benchmark calculations using a fuel performance analysis code, COPA (COated PArticle), which is being developed at the Korea Atomic Energy Research Institute. The study shows the calculational results of IAEA-CRP-6 benchmark cases 1 through 7, which describe the structural behavior of a single fuel particle
OECD/NEA benchmark for time-dependent neutron transport calculations without spatial homogenization
Energy Technology Data Exchange (ETDEWEB)
Hou, Jason, E-mail: jason.hou@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Ivanov, Kostadin N. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Boyarinov, Victor F.; Fomichenko, Peter A. [National Research Centre “Kurchatov Institute”, Kurchatov Sq. 1, Moscow (Russian Federation)
2017-06-15
Highlights: • A time-dependent homogenization-free neutron transport benchmark was created. • The first phase, known as the kinetics phase, was described in this work. • Preliminary results for selected 2-D transient exercises were presented. - Abstract: A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for the time-dependent neutron transport calculations without spatial homogenization has been established in order to facilitate the development and assessment of numerical methods for solving the space-time neutron kinetics equations. The benchmark has been named the OECD/NEA C5G7-TD benchmark, and later extended with three consecutive phases each corresponding to one modelling stage of the multi-physics transient analysis of the nuclear reactor core. This paper provides a detailed introduction of the benchmark specification of Phase I, known as the “kinetics phase”, including the geometry description, supporting neutron transport data, transient scenarios in both two-dimensional (2-D) and three-dimensional (3-D) configurations, as well as the expected output parameters from the participants. Also presented are the preliminary results for the initial state 2-D core and selected transient exercises that have been obtained using the Monte Carlo method and the Surface Harmonic Method (SHM), respectively.
Energy Technology Data Exchange (ETDEWEB)
Lara, Rafael G.; Maiorino, Jose R., E-mail: rafael.lara@aluno.ufabc.edu.br, E-mail: joserubens.maiorino@ufabc.edu.br [Universidade Federal do ABC (UFABC), Santo Andre, SP (Brazil). Centro de Engenharia, Modelagem e Ciencias Sociais Aplicadas
2013-07-01
This work aimed at the implementation and qualification of the MCNP code on a supercomputer at the Universidade Federal do ABC, so that a next-generation simulation tool is available for precise calculations of nuclear reactors and systems subject to radiation. The implementation of this tool will have multidisciplinary applications, covering various areas of engineering (nuclear, aerospace, biomedical), radiation physics and others.
Some comments on cold hydrogenous moderators, simple synthetic kernels and benchmark calculations
International Nuclear Information System (INIS)
Dorning, J.
1997-09-01
The author comments on three general subjects which are not directly related but which, in his opinion, are very relevant to the objectives of the workshop. The first of these is parahydrogen moderators, about which recurring questions have been raised during the Workshop. The second topic is the use of simple synthetic scattering kernels in conjunction with the neutron transport equation to carry out elementary mathematical analyses and simple computational analyses in order to understand the gross physics of time-dependent neutron transport initiated by pulsed sources in cold moderators. The third subject is that of 'simple' benchmark calculations, by which is meant calculations that are simple compared to the very large scale combined spallation, slowing-down and thermalization calculations using MCNP and other large Monte Carlo codes. Such benchmark problems can be created so that they are closely related to both the geometric configuration and the material composition of cold moderators of interest, and can still be solved using steady-state deterministic transport codes to calculate the asymptotic time-decay constant, the time-asymptotic energy spectrum of neutrons in the cold moderator, and the spectrum of the cold neutrons leaking from it (neither of which should be expected to be Maxwellian in these small leakage-dominated systems). These would provide rather precise benchmark solutions against which the results of the large scale calculations carried out for the whole spallation, slowing-down, thermalization system -- for the same decoupled cold moderator -- could be compared.
Primer for criticality calculations with DANTSYS
International Nuclear Information System (INIS)
Busch, R.D.
1996-01-01
With the closure of many experimental facilities, the nuclear criticality safety analyst is increasingly required to rely on computer calculations to identify safe limits for the handling and storage of fissile materials. However, in many cases, the analyst has little experience with the specific codes available at his or her facility. Typically, two types of codes are available: deterministic codes such as ANISN or DANTSYS that solve an approximate model exactly, and Monte Carlo codes such as KENO or MCNP that solve an exact model approximately. Often, the analyst feels that the deterministic codes are too simple and will not provide the necessary information, so most modeling uses Monte Carlo methods. This sometimes means that hours of effort are expended to produce results available in minutes from deterministic codes. A substantial amount of reliable information on nuclear systems can be obtained using deterministic methods if the user understands their limitations. To guide criticality specialists in this area, the Nuclear Criticality Safety Group at the University of New Mexico, in cooperation with the Radiation Transport Group at Los Alamos National Laboratory, has designed a primer to help the analyst understand and use the DANTSYS deterministic transport code for nuclear criticality safety analyses. (DANTSYS is the name of a suite of codes that users more commonly know as ONEDANT, TWODANT, TWOHEX, and THREEDANT.) It assumes a college education in a technical field, but there is no assumption of familiarity with neutronics codes in general or with DANTSYS in particular. The primer is designed to teach by example, with each example illustrating two or three DANTSYS features useful in criticality analyses
Energy Technology Data Exchange (ETDEWEB)
Kahler, A.C.; Herman, M.; Kahler,A.C.; MacFarlane,R.E.; Mosteller,R.D.; Kiedrowski,B.C.; Frankle,S.C.; Chadwick,M.B.; McKnight,R.D.; Lell,R.M.; Palmiotti,G.; Hiruta,H.; Herman,M.; Arcilla,R.; Mughabghab,S.F.; Sublet,J.C.; Trkov,A.; Trumbull,T.H.; Dunn,M.
2011-12-01
The ENDF/B-VII.1 library is the latest revision to the United States Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., 'ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data,' Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected {sup 235}U and {sup 239}Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also
Energy Technology Data Exchange (ETDEWEB)
Kahler, A. [Los Alamos National Laboratory (LANL); Macfarlane, R E [Los Alamos National Laboratory (LANL); Mosteller, R D [Los Alamos National Laboratory (LANL); Kiedrowski, B C [Los Alamos National Laboratory (LANL); Frankle, S C [Los Alamos National Laboratory (LANL); Chadwick, M. B. [Los Alamos National Laboratory (LANL); Mcknight, R D [Argonne National Laboratory (ANL); Lell, R M [Argonne National Laboratory (ANL); Palmiotti, G [Idaho National Laboratory (INL); Hiruta, h [Idaho National Laboratory (INL); Herman, Micheal W [Brookhaven National Laboratory (BNL); Arcilla, r [Brookhaven National Laboratory (BNL); Mughabghab, S F [Brookhaven National Laboratory (BNL); Sublet, J C [Culham Science Center, Abington, UK; Trkov, A. [Jozef Stefan Institute, Slovenia; Trumbull, T H [Knolls Atomic Power Laboratory; Dunn, Michael E [ORNL
2011-01-01
The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected (235)U and (239)Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as (236)U, (238,242)Pu and (241,243)Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical
Proposal of a benchmark for core burnup calculations for a VVER-1000 reactor core
International Nuclear Information System (INIS)
Loetsch, T.; Khalimonchuk, V.; Kuchin, A.
2009-01-01
In the framework of a project supported by the German BMU, the code DYN3D is to be further validated and verified. During this work, the lack of a benchmark on core burnup calculations for VVER-1000 reactors was noticed. Such a benchmark is useful for validating and verifying the whole package of codes and data libraries for reactor physics calculations, including fuel assembly modelling, fuel assembly data preparation, few-group data parametrisation and reactor core modelling. The benchmark proposed specifies the core loading patterns of burnup cycles for a VVER-1000 reactor core as well as a set of operational data such as load follow, boron concentration in the coolant, cycle length, measured reactivity coefficients and power density distributions. The reactor core characteristics chosen for comparison and the first results obtained with the reactor physics code DYN3D are presented. This work continues the efforts of the projects mentioned to estimate the accuracy of calculated characteristics of VVER-1000 reactor cores. In addition, the codes used for reactor physics calculations of safety-related reactor core characteristics should be validated and verified for the cases in which they are to be used. This is significant for safety-related evaluations and assessments carried out in the framework of licensing and supervision procedures in the field of reactor physics. (authors)
EA-MC Neutronic Calculations on IAEA ADS Benchmark 3.2
Energy Technology Data Exchange (ETDEWEB)
Dahlfors, Marcus [Uppsala Univ. (Sweden). Dept. of Radiation Sciences; Kadi, Yacine [CERN, Geneva (Switzerland). Emerging Energy Technologies
2006-01-15
The neutronics and the transmutation properties of the IAEA ADS benchmark 3.2 setup, the 'Yalina' experiment or ISTC project B-70, have been studied through an extensive amount of 3-D Monte Carlo calculations at CERN. The simulations were performed with the state-of-the-art computer code package EA-MC, developed at CERN. The calculational approach is outlined and the results are presented in accordance with the guidelines given in the benchmark description. A variety of experimental conditions and parameters are examined; three different fuel rod configurations and three types of neutron sources are applied to the system. Reactivity change effects introduced by removal of fuel rods in both central and peripheral positions are also computed. Irradiation samples located in a total of 8 geometrical positions are examined. Calculations of capture reaction rates in {sup 129}I, {sup 237}Np and {sup 243}Am samples and of fission reaction rates in {sup 235}U, {sup 237}Np and {sup 243}Am samples are presented. Simulated neutron flux densities and energy spectra as well as spectral indices inside experimental channels are also given according to benchmark specifications. Two different nuclear data libraries, JAR-95 and JENDL-3.2, are applied for the calculations.
Calculational benchmark comparisons for a low sodium void worth actinide burner core design
International Nuclear Information System (INIS)
Hill, R.N.; Kawashima, M.; Arie, K.; Suzuki, M.
1992-01-01
Recently, a number of low void worth core designs with non-conventional core geometries have been proposed. Since these designs lack a good experimental and computational database, benchmark calculations are useful for identifying possible biases in predictions of performance characteristics. In this paper, a simplified benchmark model of a metal-fueled, low void worth actinide burner design is detailed, and two independent neutronic performance evaluations are compared. Calculated performance characteristics are evaluated for three spatially uniform compositions (fresh uranium/plutonium, batch-averaged uranium/transuranic, and batch-averaged uranium/transuranic with fission products) and a regional depleted distribution obtained from a benchmark depletion calculation. For each core composition, the flooded and voided multiplication factors, power peaking factor, sodium void worth (and its components), flooded Doppler coefficient and control rod worth predictions are compared. In addition, the burnup swing, average discharge burnup, peak linear power, and fresh fuel enrichment are calculated for the depletion case. In general, remarkably good agreement is observed between the evaluations. The most significant difference in predicted performance characteristics is a 0.3-0.5% Δk/(kk') bias in the sodium void worth. Significant differences in the transmutation rates of the higher actinides are also observed; however, these differences do not cause discrepancies in the performance predictions
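The void worth bias quoted above is a reactivity difference expressed in Δk/(kk') units. A small worked sketch of that arithmetic, using hypothetical multiplication factors chosen only so the numbers land near the cited range:

```python
def reactivity(k):
    """Reactivity rho = (k - 1) / k."""
    return (k - 1.0) / k

def void_worth(k_flooded, k_voided):
    """Sodium void worth as a reactivity difference, which reduces to
    (k_v - k_f) / (k_v * k_f) -- the Delta-k/(kk') unit used above."""
    return reactivity(k_voided) - reactivity(k_flooded)

# Hypothetical evaluations A and B of the same voiding scenario:
worth_a = void_worth(1.000, 1.002)   # evaluation A
worth_b = void_worth(1.000, 1.006)   # evaluation B
bias = worth_b - worth_a             # roughly 0.4% Delta-k/(kk')
```

The algebraic identity (1 - 1/k_v) - (1 - 1/k_f) = (k_v - k_f)/(k_v k_f) is why the bias between two code evaluations is naturally quoted in Δk/(kk').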
Monte Carlo method for array criticality calculations
International Nuclear Information System (INIS)
Dickinson, D.; Whitesides, G.E.
1976-01-01
The Monte Carlo method for solving neutron transport problems consists of mathematically tracing paths of individual neutrons collision by collision until they are lost by absorption or leakage. The fate of the neutron after each collision is determined by the probability distribution functions that are formed from the neutron cross-section data. These distributions are sampled statistically to establish the successive steps in the neutron's path. The resulting data, accumulated from following a large number of batches, are analyzed to give estimates of k/sub eff/ and other collision-related quantities. The use of electronic computers to produce the simulated neutron histories, initiated at Los Alamos Scientific Laboratory, made the use of the Monte Carlo method practical for many applications. In analog Monte Carlo simulation, the calculation follows the physical events of neutron scattering, absorption, and leakage. To increase calculational efficiency, modifications such as the use of statistical weights are introduced. The Monte Carlo method permits the use of a three-dimensional geometry description and a detailed cross-section representation. Some of the problems in using the method are the selection of the spatial distribution for the initial batch, the preparation of the geometry description for complex units, and the calculation of error estimates for region-dependent quantities such as fluxes. The Monte Carlo method is especially appropriate for criticality safety calculations since it permits an accurate representation of interacting units of fissile material. Dissimilar units, units of complex shape, moderators between units, and reflected arrays may be calculated. Monte Carlo results must be correlated with relevant experimental data, and caution must be used to ensure that a representative set of neutron histories is produced
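The collision-by-collision random walk described above can be sketched in a few lines. This toy model assumes an infinite homogeneous one-group medium (so histories end only by absorption, never leakage), with invented probabilities chosen so the analytic answer is k = ν · (σf/σa) = 2.5 × 0.4 = 1.0; it illustrates the sampling idea, not a production code.

```python
import random

def k_generation(n_source, p_absorb, p_fission_given_absorption, nu, rng):
    """Follow n_source neutrons collision by collision in an infinite
    one-group medium.  Each collision is an absorption with probability
    p_absorb (otherwise a scatter, and the walk continues); an absorbed
    neutron causes fission with the given probability, producing nu
    neutrons.  Returns the generation estimate k = produced / source."""
    produced = 0.0
    for _ in range(n_source):
        while True:
            if rng.random() < p_absorb:
                if rng.random() < p_fission_given_absorption:
                    produced += nu
                break  # history ends at absorption
            # scatter: in one group with no leakage, the walk just continues
    return produced / n_source

rng = random.Random(42)
# Accumulate several generations ("batches"), as the abstract describes.
ks = [k_generation(20_000, 0.3, 0.4, 2.5, rng) for _ in range(10)]
k_est = sum(ks) / len(ks)   # scatters around the analytic value 1.0
```

A real criticality code would also track position and energy, bank fission sites to start the next generation, and apply variance-reduction weights, as the abstract notes.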
International Nuclear Information System (INIS)
Carew, John F.; Finch, Stephen J.; Lois, Lambros
2003-01-01
The calculated >1-MeV pressure vessel fluence is used to determine the fracture toughness and integrity of the reactor pressure vessel. It is therefore of the utmost importance to ensure that the fluence prediction is accurate and unbiased. In practice, this assurance is provided by comparing the predictions of the calculational methodology with an extensive set of accurate benchmarks. A benchmarking database is used to provide an estimate of the overall average measurement-to-calculation (M/C) bias in the calculations. This average is used as an ad hoc multiplicative adjustment to the calculations to correct for the observed calculational bias. However, this average only provides a well-defined and valid adjustment of the fluence if the M/C data are homogeneous; i.e., the data are statistically independent and there is no correlation between subsets of M/C data. Typically, the identification of correlations between the errors in the database M/C values is difficult because the correlation is of the same magnitude as the random errors in the M/C data and varies substantially over the database. In this paper, an evaluation of a reactor dosimetry benchmark database is performed to determine the statistical validity of the adjustment to the calculated pressure vessel fluence. Physical mechanisms that could potentially introduce a correlation between the subsets of M/C ratios are identified and included in a multiple regression analysis of the M/C data. Rigorous statistical criteria are used to evaluate the homogeneity of the M/C data and determine the validity of the adjustment. For the database evaluated, the M/C data are found to be strongly correlated with dosimeter response threshold energy and dosimeter location (e.g., cavity versus in-vessel). It is shown that because of the inhomogeneity in the M/C data, for this database, the benchmark data do not provide a valid basis for adjusting the pressure vessel fluence. The statistical criteria and methods employed in
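The homogeneity question raised in this abstract can be made concrete with a toy calculation. The sketch below uses invented M/C ratios for two dosimeter locations; when the subgroup means differ by far more than the within-group scatter, a single multiplicative fluence adjustment based on the overall average is not well defined.

```python
from statistics import mean, stdev

def bias_summary(mc_by_group):
    """Overall average M/C bias plus per-subgroup (mean, stdev).
    A large spread between subgroup means relative to the within-group
    standard deviations signals inhomogeneous (correlated) M/C data."""
    all_ratios = [r for ratios in mc_by_group.values() for r in ratios]
    per_group = {g: (mean(r), stdev(r)) for g, r in mc_by_group.items()}
    return mean(all_ratios), per_group

# Hypothetical dosimetry M/C ratios, grouped by dosimeter location:
overall, groups = bias_summary({
    "cavity":    [1.05, 1.07, 1.06, 1.04],
    "in-vessel": [0.97, 0.96, 0.98, 0.97],
})
# The subgroup means (about 1.055 vs 0.97) straddle the overall average
# (about 1.01), so one global adjustment would misfit both locations.
```

The paper's actual analysis uses multiple regression against physical covariates (threshold energy, location); this sketch only shows why the overall average alone can mislead.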
2010 Criticality Accident Alarm System Benchmark Experiments At The CEA Valduc SILENE Facility
International Nuclear Information System (INIS)
Miller, Thomas Martin; Dunn, Michael E.; Wagner, John C.; McMahan, Kimberly L.; Authier, Nicolas; Jacquet, Xavier; Rousseau, Guillaume; Wolff, Herve; Piot, Jerome; Savanier, Laurence; Baclet, Nathalie; Lee, Yi-kang; Masse, Veronique; Trama, Jean-Christophe; Gagnier, Emmanuel; Naury, Sylvie; Lenain, Richard; Hunter, Richard; Kim, Soon; Dulik, George Michael; Reynolds, Kevin H.
2011-01-01
Several experiments were performed at the CEA Valduc SILENE reactor facility that are intended to be published as evaluated benchmark experiments in the ICSBEP Handbook. These evaluated benchmarks will be useful for the verification and validation of radiation transport codes and evaluated nuclear data, particularly those used in the analysis of criticality accident alarm systems (CAASs). During these experiments SILENE was operated in pulsed mode in order to be representative of a criticality accident, which is rare among shielding benchmarks. Measurements of the neutron flux were made with neutron activation foils, and measurements of photon doses were made with thermoluminescent dosimeters (TLDs). Also unique to these experiments was the presence of several detectors used in actual CAASs, which allowed observation of their behavior during an actual critical pulse. This paper presents the preliminary measurement data currently available from these experiments, along with comparisons of preliminary computational results from Scale and TRIPOLI-4 to the preliminary measurement data.
Validation of JENDL-3.3 by criticality benchmark testing
International Nuclear Information System (INIS)
Takano, Hideki; Nakagawa, Tsuneo
2001-01-01
For the thermal uranium cores of STACY, TRACY and JRR-4, the k eff values, which were overestimated with JENDL-3.2, improved significantly with JENDL-3.3, decreasing by about 0.6%. This is due to the modification of the fission spectrum and the thermal fission cross section of 235 U between JENDL-3.2 and JENDL-3.3. For the uranium fast cores, the discrepancies in k eff values between JENDL-3.2 and 3.3 were very small. For the thermal Pu cores of TCA, the k eff values calculated with JENDL-3.3 were in good agreement with the experimental values. For the Pu fuel cores of ZPPR-9 and FCA-XVII, the k eff values calculated with JENDL-3.3 became about 0.2% larger than those with JENDL-3.2. For small fast cores with U-233 fuel, the k eff values overestimated with JENDL-3.2 were improved considerably with JENDL-3.3, owing to the reevaluation of the U-233 fission cross sections in the high-energy region. (author)
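Library-to-library k_eff shifts like the 0.6% decrease quoted above are often expressed as reactivity differences in pcm. A minimal helper, with hypothetical k_eff values for illustration:

```python
def reactivity_pcm(k1: float, k2: float) -> float:
    """Reactivity change from k1 to k2, (k2 - k1)/(k1*k2), in pcm (1e-5)."""
    return (k2 - k1) / (k1 * k2) * 1e5

# A 0.6% decrease in k_eff (illustrative values, not the paper's results)
print(reactivity_pcm(1.006, 1.000))  # roughly -600 pcm
```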
VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4 - Revised Report
Energy Technology Data Exchange (ETDEWEB)
Ellis, RJ
2001-06-01
The Task Force on Reactor-Based Plutonium Disposition (TFRPD) was formed by the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) to study reactor physics, fuel performance, and fuel cycle issues related to the disposition of weapons-grade (WG) plutonium as mixed-oxide (MOX) reactor fuel. To advance the goals of the TFRPD, 10 countries and 12 institutions participated in a major TFRPD activity: a blind benchmark study to compare code calculations to experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At Oak Ridge National Laboratory, the HELIOS-1.4 code system was used to perform the comprehensive study of pin-cell and MOX core calculations for the VENUS-2 MOX core benchmark study.
Two-dimensional benchmark calculations for PNL-30 through PNL-35
International Nuclear Information System (INIS)
Mosteller, R.D.
1997-01-01
Interest in critical experiments with lattices of mixed-oxide (MOX) fuel pins has been revived by the possibility that light water reactors will be used for disposition of weapons-grade plutonium. A series of six experiments with MOX lattices, designated PNL-30 through PNL-35, was performed at Pacific Northwest Laboratories in 1975 and 1976, and a set of benchmark specifications for these experiments subsequently was adopted by the Cross Section Evaluation Working Group (CSEWG). Although there appear to be some problems with these experiments, they remain the only CSEWG benchmarks for MOX lattices. The number of fuel pins in these experiments is relatively low, corresponding to fewer than 4 typical pressurized-water-reactor fuel assemblies. Accordingly, they are more appropriate as benchmarks for lattice-physics codes than for reactor-core simulator codes. Unfortunately, the CSEWG specifications retain the full three-dimensional (3D) detail of the experiments, while lattice-physics codes almost universally are limited to two dimensions (2D). This paper proposes an extension of the benchmark specifications to include a 2D model, and it justifies that extension by comparing results from the MCNP Monte Carlo code for the 2D and 3D specifications
Criticality Analysis Of TCA Critical Lattices With MCNP-4C Monte Carlo Calculation
International Nuclear Information System (INIS)
Zuhair
2002-01-01
The use of uranium-plutonium mixed-oxide (MOX) fuel in electricity-generating light water reactors (PWR, BWR) is being planned in Japan. Therefore, accuracy evaluations of neutronic analysis codes for MOX cores have been carried out by many scientists and reactor physicists. Benchmark evaluations for TCA were done using various calculation methods. The Monte Carlo method has become the most reliable method for predicting the criticality of various reactor types. In this analysis, the MCNP-4C code was chosen for its various strengths. Overall, the MCNP-4C calculations for the TCA core with 38 MOX critical lattice configurations gave results with high accuracy. The JENDL-3.2 library showed results significantly closer to those of ENDF/B-V. The k eff values calculated with the ENDF/B-VI library were underestimated. The ENDF/B-V library gave the best estimation. It can be concluded that MCNP-4C calculation, especially with the ENDF/B-V and JENDL-3.2 libraries, is the best choice for the core design of MOX-fueled nuclear power plants
Neutron transport calculations of some fast critical assemblies
Energy Technology Data Exchange (ETDEWEB)
Martinez-Val Penalosa, J A
1976-07-01
To analyse the influence of the input variables of the transport codes on the neutronic results (eigenvalues, generation times, ...), four benchmark calculations have been performed. Sensitivity analyses have been applied to express these dependences in a useful way, and also to gain the experience needed to carry out calculations that achieve the required accuracy in practical computing times. (Author) 29 refs.
Neutron transport calculations of some fast critical assemblies
International Nuclear Information System (INIS)
Martinez-Val Penalosa, J. A.
1976-01-01
To analyse the influence of the input variables of the transport codes on the neutronic results (eigenvalues, generation times, ...), four benchmark calculations have been performed. Sensitivity analyses have been applied to express these dependences in a useful way, and also to gain the experience needed to carry out calculations that achieve the required accuracy in practical computing times. (Author) 29 refs.
International Nuclear Information System (INIS)
Mitake, Susumu
2003-01-01
Validation of the continuous-energy Monte Carlo criticality-safety analysis system, comprising the MVP code and neutron cross sections based on JENDL-3.2, was examined using benchmarks evaluated in the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'. Eight experiments (116 configurations) for the plutonium solution and plutonium-uranium mixture systems performed at Valduc, Battelle Pacific Northwest Laboratories, and other facilities were selected and used in the studies. The averaged multiplication factors calculated with MVP and MCNP-4B using the same neutron cross-section libraries based on JENDL-3.2 were in good agreement. Based on methods provided in the Japanese nuclear criticality-safety handbook, the estimated criticality lower-limit multiplication factors to be used as a subcriticality criterion for the criticality-safety evaluation of nuclear facilities were obtained. The analysis proved the applicability of the MVP code to the criticality-safety analysis of nuclear fuel facilities, particularly to the analysis of systems fueled with plutonium and in homogeneous and thermal-energy conditions
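The estimated criticality lower-limit multiplication factor mentioned above is, in essence, a lower tolerance bound on calculated k eff for critical benchmarks. A highly simplified sketch, with invented k eff values and an assumed tolerance factor (the Japanese criticality-safety handbook's actual statistical procedure is more elaborate):

```python
import statistics

# Calculated k_eff for a set of critical benchmark configurations
# (invented values for illustration; true k_eff of each experiment is ~1).
k_calc = [0.997, 1.001, 0.999, 1.002, 0.998, 1.000, 0.996, 1.003]

mean = statistics.mean(k_calc)
sdev = statistics.stdev(k_calc)       # sample standard deviation (n-1)
k_factor = 2.5                        # assumed one-sided tolerance factor

# Lower-limit multiplication factor: calculated k_eff below this bound
# would be judged subcritical with the chosen statistical confidence.
k_lower = mean - k_factor * sdev
print(k_lower)
```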
Depletion benchmarks calculation of random media using explicit modeling approach of RMC
International Nuclear Information System (INIS)
Liu, Shichang; She, Ding; Liang, Jin-gang; Wang, Kan
2016-01-01
Highlights: • Explicit modeling of RMC is applied to depletion benchmark for HTGR fuel element. • Explicit modeling can provide detailed burnup distribution and burnup heterogeneity. • The results would serve as a supplement for the HTGR fuel depletion benchmark. • The method of adjacent burnup regions combination is proposed for full-core problems. • The combination method can reduce memory footprint, keeping the computing accuracy. - Abstract: Monte Carlo method plays an important role in accurate simulation of random media, owing to its advantages of the flexible geometry modeling and the use of continuous-energy nuclear cross sections. Three stochastic geometry modeling methods including Random Lattice Method, Chord Length Sampling and explicit modeling approach with mesh acceleration technique, have been implemented in RMC to simulate the particle transport in the dispersed fuels, in which the explicit modeling method is regarded as the best choice. In this paper, the explicit modeling method is applied to the depletion benchmark for HTGR fuel element, and the method of combination of adjacent burnup regions has been proposed and investigated. The results show that the explicit modeling can provide detailed burnup distribution of individual TRISO particles, and this work would serve as a supplement for the HTGR fuel depletion benchmark calculations. The combination of adjacent burnup regions can effectively reduce the memory footprint while keeping the computational accuracy.
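The combination of adjacent burnup regions described above can be sketched as a simple grouping pass. Region layout, burnup values, and the tolerance are invented here; RMC's actual combination criterion may differ:

```python
# Sketch: neighbouring depletion regions whose burnups differ by less than a
# tolerance share one material, reducing the memory footprint while keeping
# the burnup heterogeneity that matters.
def combine_regions(burnups, tol):
    """Group consecutive region indices into shared-material groups."""
    groups = [[0]]
    for i in range(1, len(burnups)):
        # Compare against the first region of the current group so that a
        # slow drift cannot chain distant burnups into one group.
        if abs(burnups[i] - burnups[groups[-1][0]]) <= tol:
            groups[-1].append(i)
        else:
            groups.append([i])
    return groups

bu = [10.1, 10.3, 10.2, 14.8, 15.0, 20.5]  # GWd/tU, illustrative
print(combine_regions(bu, tol=0.5))
```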
International Nuclear Information System (INIS)
Peneliau, Y.; Litaize, O.; Archier, P.; De Saint Jean, C.
2014-01-01
A large set of nuclear data is being investigated to improve the predictions of the new neutron transport simulation codes. For the next generation of nuclear power plants (GEN IV projects), one expects to reduce the calculated uncertainties, which mainly come from nuclear data and are still very large before integral information is taken into account in the adjustment process. In France, future nuclear power plant concepts will probably use MOX fuel, either in Sodium Fast Reactors or in Gas Cooled Fast Reactors. Consequently, the knowledge of 239 Pu cross sections and other nuclear data is a crucial issue for reducing these sources of uncertainty. The Prompt Fission Neutron Spectra (PFNS) of 239 Pu are part of these relevant data (an IAEA working group is even dedicated to PFNS), and the work presented here deals with this particular topic. The main international data files (i.e. JEFF-3.1.1, ENDF/B-VII.0, JENDL-4.0, BRC-2009) have been considered and compared with two different spectra, coming from the works of Maslov and Kornilov respectively. The spectra are first compared by calculating their mathematical moments in order to characterize them. Then, a reference calculation using the whole JEFF-3.1.1 evaluation file is performed and compared with another calculation performed with a new evaluation file in which the data block containing the fission spectra (MF=5, MT=18) is replaced by the investigated spectra (one for each evaluation). A set of benchmarks is used to analyze the effects of the PFNS, covering criticality cases and mock-up cases in various neutron flux spectra (thermal, intermediate, and fast). Data from many ICSBEP experiments are used (PU-SOL-THERM, PU-MET-FAST, PU-MET-INTER and PU-MET-MIXED), and French mock-up experiments are also investigated (EOLE for the thermal neutron flux spectrum and MASURCA for the fast one). This study shows that many experiments and neutron parameters are very sensitive to
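The spectral moments used above to characterize a PFNS can be illustrated numerically for a Maxwellian spectrum. The temperature is an assumed value for illustration; evaluated spectra are tabulated in the MF=5, MT=18 block rather than given analytically:

```python
import math

# A Maxwellian PFNS chi(E) ~ sqrt(E) * exp(-E/T) has first moment
# (mean energy) 3T/2. Check this by direct numerical quadrature.
T = 1.42  # MeV, assumed spectrum temperature (illustrative only)
dE = 1e-3
energies = [i * dE for i in range(1, 40001)]          # 0 .. 40 MeV grid
chi = [math.sqrt(E) * math.exp(-E / T) for E in energies]

norm = sum(c * dE for c in chi)                        # 0th moment
mean_E = sum(c * E * dE for c, E in zip(chi, energies)) / norm  # 1st moment
print(mean_E)  # close to 1.5 * T = 2.13 MeV
```

Higher moments (variance, skewness) are computed the same way and give a compact fingerprint for comparing evaluated spectra.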
Lutnaes, Ola B; Teale, Andrew M; Helgaker, Trygve; Tozer, David J; Ruud, Kenneth; Gauss, Jürgen
2009-10-14
An accurate set of benchmark rotational g tensors and magnetizabilities is calculated using coupled-cluster singles-doubles (CCSD) theory and coupled-cluster singles-doubles with perturbative triples [CCSD(T)] theory, in a variety of basis sets consisting of (rotational) London atomic orbitals. The accuracy of the results obtained is established for the rotational g tensors by careful comparison with experimental data, taking into account zero-point vibrational corrections. After an analysis of the basis sets employed, extrapolation techniques are used to provide estimates of the basis-set-limit quantities, thereby establishing an accurate benchmark data set. The utility of the data set is demonstrated by examining a wide variety of density functionals for the calculation of these properties. None of the density-functional methods are competitive with the CCSD or CCSD(T) methods. The need for a careful consideration of vibrational effects is clearly illustrated. Finally, the pure coupled-cluster results are compared with the results of density-functional calculations constrained to give the same electronic density. The importance of current dependence in exchange-correlation functionals is discussed in light of this comparison.
International Nuclear Information System (INIS)
Hadek, J.
1999-01-01
The paper gives a brief survey of the results of the fifth three-dimensional dynamic Atomic Energy Research benchmark calculation, obtained with the code DYN3D/ATHLET at NRI Rez. This benchmark was defined at the seventh Atomic Energy Research Symposium (Hoernitz near Zittau, 1997). Its initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle, under hot shutdown conditions with one control rod group stuck out. The calculations were performed with the externally coupled codes ATHLET Mod.1.1 Cycle C and DYN3DH1.1/M3. The standard WWER-440/213 input deck of the ATHLET code was adapted for benchmark purposes and for coupling with the code DYN3D. The first part of the paper briefly characterizes the NPP input deck and the reactor core model. The second part shows the time dependencies of important global and local parameters. In comparison with the results published at the eighth Atomic Energy Research Symposium (Bystrice nad Pernstejnem, 1998), the results published in this paper are based on improved ATHLET descriptions of the control and safety systems. (Author)
Update of KASHIL-E6 library for shielding analysis and benchmark calculations
International Nuclear Information System (INIS)
Kim, D. H.; Kil, C. S.; Jang, J. H.
2004-01-01
For various shielding and reactor pressure vessel dosimetry applications, a pseudo-problem-independent neutron-photon coupled MATXS-format library based on the last release of ENDF/B-VI has been generated as part of the update program for KASHIL-E6, which was based on ENDF/B-VI.5. It has the VITAMIN-B6 neutron and photon energy group structures, i.e., 199 groups for neutrons and 42 groups for photons. The neutron and photon weighting functions and the Legendre order of scattering are the same as in KASHIL-E6. The library has been validated through several benchmarks: the PCA-REPLICA and NESDIP-2 experiments for the LWR pressure vessel facility benchmark, the Winfrith Iron88 experiment for validation of the iron data, and the Winfrith Graphite experiment for validation of the graphite data. These calculations were performed with the TRANSX/DANTSYS code system. In addition, the substitution of JENDL-3.3 and JEFF-3.0 data for Fe, Cr, Cu and Ni, which are very important nuclides for shielding analyses, was investigated to estimate the effects on the benchmark calculation results
Energy Technology Data Exchange (ETDEWEB)
Freudenreich, W.E.; Gruppelaar, H
1998-12-01
This report contains the results of calculations made at ECN-Petten of a benchmark to study the neutronic potential of a modular fast spectrum ADS (Accelerator-Driven System) for radiotoxic waste transmutation. The study is focused on the incineration of TRans-Uranium elements (TRU), Minor Actinides (MA) and Long-Lived Fission Products (LLFP), in this case {sup 99}Tc. The benchmark exercise is made in the framework of an IAEA Co-ordinated Research Programme. A simplified description of an ADS, restricted to the reactor part, with TRU or MA fuel (k{sub eff}=0.96) has been analysed. All spectrum calculations have been performed with the Monte Carlo code MCNP-4A. The burnup calculations have been performed with the code FISPACT coupled to MCNP-4A by means of our OCTOPUS system. The cross sections are based upon JEF-2.2 for transport calculations and supplemented with EAF-4 data for inventory calculations. The determined quantities are: core dimensions, fuel inventories, system power, sensitivity on external source spectrum and waste transmutation rates. The main conclusions are: The MA-burner requires only a small accelerator current increase during burnup, in contrast to the TRU-burner. The {sup 99} Tc-burner has a large initial loading; a more effective design may be possible. 5 refs.
International Nuclear Information System (INIS)
Freudenreich, W.E.; Gruppelaar, H.
1998-12-01
This report contains the results of calculations made at ECN-Petten of a benchmark to study the neutronic potential of a modular fast spectrum ADS (Accelerator-Driven System) for radiotoxic waste transmutation. The study is focused on the incineration of TRans-Uranium elements (TRU), Minor Actinides (MA) and Long-Lived Fission Products (LLFP), in this case 99 Tc. The benchmark exercise is made in the framework of an IAEA Co-ordinated Research Programme. A simplified description of an ADS, restricted to the reactor part, with TRU or MA fuel (k eff =0.96) has been analysed. All spectrum calculations have been performed with the Monte Carlo code MCNP-4A. The burnup calculations have been performed with the code FISPACT coupled to MCNP-4A by means of our OCTOPUS system. The cross sections are based upon JEF-2.2 for transport calculations and supplemented with EAF-4 data for inventory calculations. The determined quantities are: core dimensions, fuel inventories, system power, sensitivity on external source spectrum and waste transmutation rates. The main conclusions are: The MA-burner requires only a small accelerator current increase during burnup, in contrast to the TRU-burner. The 99 Tc-burner has a large initial loading; a more effective design may be possible. 5 refs
International Nuclear Information System (INIS)
Bock, M.; Stuke, M.; Behler, M.
2013-01-01
The validation of a code for criticality safety analysis requires the recalculation of benchmark experiments. The selected benchmark experiments are chosen such that they have properties similar to the application case to be assessed. A common source of benchmark experiments is the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) compiled by the 'International Criticality Safety Benchmark Evaluation Project' (ICSBEP). In order to take full advantage of the information provided by the individual benchmark descriptions for the application case, the recommended procedure is to perform an uncertainty analysis based on the uncertainties of the experimental results included in most of the benchmark descriptions. Such analyses can be performed by means of the Monte Carlo sampling technique. The consideration of uncertainties is also being introduced in the supplementary sheet of DIN 25478 'Application of computer codes in the assessment of criticality safety'. However, for a correct treatment of uncertainties, taking into account the individual uncertainties of the benchmark experiments is insufficient. In addition, correlations between benchmark experiments have to be handled correctly. Such correlations can arise, for example, when different cases of a benchmark experiment share the same components, such as fuel pins or fissile solutions. Manufacturing tolerances of these components (e.g. the diameter of the fuel pellets) then have to be considered in a consistent manner in all cases of the benchmark experiment. At the 2012 meeting of the Expert Group on 'Uncertainty Analysis for Criticality Safety Assessment' (UACSA) of the OECD/NEA, a benchmark proposal was outlined that aimed at determining the impact of benchmark correlations on the estimation of the computational bias of the neutron multiplication factor (k eff ). The analysis presented here is based on this proposal. (orig.)
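How a shared manufacturing tolerance induces a correlation between benchmark cases can be sketched with Monte Carlo sampling. Two benchmark configurations here are assumed to use fuel pellets from the same batch, so one sampled pellet-diameter perturbation feeds both k eff estimates; all sensitivities and uncertainties are invented for illustration:

```python
import random

random.seed(1)
n_samples = 20000
sens_a, sens_b = 0.004, 0.005   # dk per % diameter change, assumed
sigma_dia = 0.5                 # % std dev of pellet diameter, assumed

ka, kb = [], []
for _ in range(n_samples):
    d = random.gauss(0.0, sigma_dia)   # ONE shared perturbation per sample
    ka.append(1.0 + sens_a * d + random.gauss(0.0, 0.001))  # case-specific noise
    kb.append(1.0 + sens_b * d + random.gauss(0.0, 0.001))

ma = sum(ka) / n_samples
mb = sum(kb) / n_samples
cov = sum((a - ma) * (b - mb) for a, b in zip(ka, kb)) / n_samples
va = sum((a - ma) ** 2 for a in ka) / n_samples
vb = sum((b - mb) ** 2 for b in kb) / n_samples
corr = cov / (va * vb) ** 0.5
print(corr)  # strongly positive, because the perturbation is shared
```

Ignoring such correlations and treating the two cases as independent would understate the uncertainty of the estimated computational bias.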
Energy Technology Data Exchange (ETDEWEB)
Primm III, RT
2002-05-29
This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.
International Nuclear Information System (INIS)
Primm III, RT
2002-01-01
This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors
International Nuclear Information System (INIS)
Yang Wankui; Liu Yaoguang; Ma Jimin; Yang Xin; Wang Guanbo
2014-01-01
MCBMPI, a parallelized burnup calculation program, was developed. The program is modularized: the neutron transport calculation module employs the parallelized MCNP5 program MCNP5MPI, and the burnup calculation module employs ORIGEN2, with an MPI parallel zone decomposition strategy. The program system consists only of MCNP5MPI and an interface subroutine. The interface subroutine performs three main functions: zone decomposition, nuclide transfer and decay, and data exchange with MCNP5MPI. The program was verified with the Pressurized Water Reactor (PWR) cell burnup benchmark; the results showed that the program can be applied to burnup calculations with multiple zones, and that the computational efficiency can be improved significantly as computer hardware develops. (authors)
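The zone-decomposition strategy works because, once the transport step has supplied a flux per zone, each zone depletes independently. A minimal sketch with a one-step analytic burn of a single capture chain (all numbers invented; MCBMPI itself couples MCNP5MPI to ORIGEN2, which solves full transmutation chains):

```python
import math

def deplete_zone(n_a, n_b, flux, sigma_a, dt):
    """One-step analytic burn: nuclide A is consumed by capture, producing B.

    n_a, n_b  -- atom inventories of A and B
    flux      -- one-group flux from the transport step [n/cm^2/s]
    sigma_a   -- capture cross section of A [cm^2]
    dt        -- burn step length [s]
    """
    burned = n_a * (1.0 - math.exp(-sigma_a * flux * dt))
    return n_a - burned, n_b + burned

# Zones with different fluxes deplete independently (embarrassingly parallel),
# which is exactly what the MPI zone decomposition exploits.
zones = [(1.0e24, 0.0, 3.0e14), (1.0e24, 0.0, 1.0e14)]  # (n_A, n_B, flux)
results = [deplete_zone(na, nb, phi, sigma_a=5.0e-24, dt=8.64e4)
           for na, nb, phi in zones]
for na, nb in results:
    print(na + nb)  # total atom count is conserved by the step
```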
Hextran-Smabre calculation of the VVER-1000 coolant transient benchmark
Energy Technology Data Exchange (ETDEWEB)
Elina Syrjaelahti; Anitta Haemaelaeinen [VTT Processes, P.O.Box 1604, FIN-02044 VTT (Finland)
2005-07-01
Full text of publication follows: The VVER-1000 Coolant Transient benchmark is intended for validating couplings of thermal hydraulic codes and three-dimensional neutron kinetic core models. It concerns the switching on of a main coolant pump while the other three main coolant pumps are in operation. The problem is based on an experiment performed at the Kozloduy NPP in Bulgaria. In addition to the real plant transient, two extreme scenarios involving control rod ejection after switching on a main coolant pump were calculated. At VTT, the three-dimensional advanced nodal code HEXTRAN is used for the core kinetics and dynamics, and the thermohydraulic system code SMABRE as the thermal hydraulic model for the primary and secondary loops. The parallelly coupled HEXTRAN-SMABRE code has been in production use since the early 90's and has been used extensively for the analysis of VVER NPPs. The SMABRE input model is based on the standard VVER-1000 input used at VTT. The latest plant-specific modifications to the input model were made in EU projects. The whole-core calculation is performed with HEXTRAN. The core model is also based on earlier VVER-1000 models. Nuclear data for the calculation were specified in the benchmark. The paper outlines the input models used for both codes. Calculated results are presented both for the coupled core system with inlet and outlet boundary conditions and for the whole plant model. Sensitivity studies have been performed for selected parameters. (authors)
Hextran-Smabre calculation of the VVER-1000 coolant transient benchmark
International Nuclear Information System (INIS)
Elina Syrjaelahti; Anitta Haemaelaeinen
2005-01-01
Full text of publication follows: The VVER-1000 Coolant Transient benchmark is intended for validating couplings of thermal hydraulic codes and three-dimensional neutron kinetic core models. It concerns the switching on of a main coolant pump while the other three main coolant pumps are in operation. The problem is based on an experiment performed at the Kozloduy NPP in Bulgaria. In addition to the real plant transient, two extreme scenarios involving control rod ejection after switching on a main coolant pump were calculated. At VTT, the three-dimensional advanced nodal code HEXTRAN is used for the core kinetics and dynamics, and the thermohydraulic system code SMABRE as the thermal hydraulic model for the primary and secondary loops. The parallelly coupled HEXTRAN-SMABRE code has been in production use since the early 90's and has been used extensively for the analysis of VVER NPPs. The SMABRE input model is based on the standard VVER-1000 input used at VTT. The latest plant-specific modifications to the input model were made in EU projects. The whole-core calculation is performed with HEXTRAN. The core model is also based on earlier VVER-1000 models. Nuclear data for the calculation were specified in the benchmark. The paper outlines the input models used for both codes. Calculated results are presented both for the coupled core system with inlet and outlet boundary conditions and for the whole plant model. Sensitivity studies have been performed for selected parameters. (authors)
International Nuclear Information System (INIS)
Santamarina, A.
1991-01-01
A criticality-safety calculational scheme based on the automated deterministic code system APOLLO-BISTRO has been developed. The cell/assembly code APOLLO is used mainly in LWR and HCR design calculations, and its validation spans a wide range of moderation ratios, including voided configurations. Its recent 99-group library and self-shielded cross sections have been extensively qualified through critical experiments and PWR spent fuel analysis. The PIC self-shielding formalism enables a rigorous treatment of the fuel double heterogeneity in dissolver medium calculations. BISTRO is an optimized multidimensional SN code, part of the modular CCRR package used mainly in FBR calculations. The APOLLO-BISTRO scheme was applied to the 18 experimental benchmarks selected by the OECD/NEACRP Criticality Calculation Working Group. The calculation-experiment discrepancy was within ±1% in ΔK/K and always consistent with the experimental uncertainty margin. In the critical experiments corresponding to a dissolver-type benchmark, our tools computed a satisfactory Keff. In the VALDUC fuel storage experiments with hafnium plates, the computed Keff ranged between 0.994 and 1.003 for the various water gaps separating the fuel clusters from the absorber plates. The APOLLO-KENOEUR statistical calculational scheme, based on the same self-shielded multigroup library, supplied consistent results within 0.3% in ΔK/K. (Author)
Utilization of Keno system for criticality calculation
International Nuclear Information System (INIS)
Maragni, M.G.
1990-01-01
Several benchmark studies have been performed with the KENO-IV code in order to use it more efficiently at IPEN-COPESP. The influence of different cross-section libraries has been verified: the Hansen-Roach library produced better results for fast systems, while the GAMTEC-II library was more efficient for thermal systems. For reflectors, it has been shown that the differential albedo and automatic reflection options are more appropriate for infinite and finite reflectors, respectively. A number of histories greater than 30,000 did not seem to improve the results. Plutonium systems should be treated with special care. (author)
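The observation that more than 30,000 histories brought little improvement is consistent with the 1/sqrt(N) convergence of Monte Carlo statistics, illustrated here with a toy estimator (invented score distribution, not a real transport simulation):

```python
import random

def toy_keff_stderr(n_histories, seed=7):
    """Standard error of the mean of n_histories toy 'k_eff scores'."""
    random.seed(seed)
    scores = [random.gauss(1.0, 0.05) for _ in range(n_histories)]
    mean = sum(scores) / n_histories
    var = sum((s - mean) ** 2 for s in scores) / (n_histories - 1)
    return (var / n_histories) ** 0.5

# Quadrupling the number of histories only halves the error bar,
# so returns diminish quickly beyond some point.
e1 = toy_keff_stderr(30000)
e2 = toy_keff_stderr(120000)
print(e1 / e2)  # close to 2
```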
VERA Pin and Fuel Assembly Depletion Benchmark Calculations by McCARD and DeCART
Energy Technology Data Exchange (ETDEWEB)
Park, Ho Jin; Cho, Jin Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-10-15
Monte Carlo (MC) codes have been developed and used to simulate neutron transport since the MC method was devised in the Manhattan project. Solving the neutron transport problem with the MC method is simple and straightforward to understand. Because there are few essential approximations to the six-dimensional phase space of a neutron (location, energy, and direction) in MC calculations, highly accurate solutions can be obtained. In this work, the VERA pin and fuel assembly (FA) depletion benchmark calculations are performed to examine the depletion capability of the newly generated DeCART multi-group cross section library. To obtain the reference solutions, MC depletion calculations are conducted using McCARD. Moreover, to scrutinize the effect of stochastic uncertainty propagation, uncertainty propagation analyses are performed using a sensitivity and uncertainty (S/U) analysis method and a stochastic sampling (S.S.) method. It is still expensive and challenging to perform a depletion analysis with an MC code. Nevertheless, many studies on MC depletion analysis have been conducted to exploit the benefits of the MC method. In this study, McCARD MC and DeCART MOC transport calculations are performed for the VERA pin and FA depletion benchmarks. The DeCART depletion calculations, conducted to examine the depletion capability of the newly generated multi-group cross section library, give excellent agreement with the McCARD reference solutions. From the McCARD results, it is observed that the MC depletion results depend on how the burnup interval is split. To quantify the effect of stochastic uncertainty propagation at 40 DTS, the uncertainty propagation analyses are performed with both the S/U and S.S. methods.
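The two uncertainty-propagation routes mentioned above, sensitivity/uncertainty analysis and stochastic sampling, can be contrasted for a single linear parameter. All numbers are invented for illustration:

```python
import random

sens = 0.0035        # dk/dp sensitivity, assumed
sigma_p = 2.0        # parameter standard deviation, assumed

# Sandwich rule: var(k) = S * cov * S^T, which in this scalar case is
# simply |S| * sigma_p.
sigma_k_su = abs(sens) * sigma_p

# Stochastic sampling of the same linear response model
random.seed(3)
samples = [sens * random.gauss(0.0, sigma_p) for _ in range(50000)]
mean = sum(samples) / len(samples)
sigma_k_ss = (sum((s - mean) ** 2 for s in samples)
              / (len(samples) - 1)) ** 0.5

print(sigma_k_su, sigma_k_ss)  # the two estimates agree for a linear model
```

For a strictly linear response the two methods coincide; sampling becomes the more general (if costlier) route when the response is nonlinear or the perturbations are large.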
Directory of Open Access Journals (Sweden)
Kabach Ouadie
2017-12-01
To validate the new Evaluated Nuclear Data File ENDF/B-VIII.0β4 library, 31 different critical cores were selected and used for a benchmark test of the important parameter k eff. The results obtained with the ENDF/B-VIII.0β4 library were compared against those calculated with the ENDF/B-VI.8, ENDF/B-VII.0, and ENDF/B-VII.1 libraries using the Monte Carlo N-Particle MCNP(X) code. The four libraries were processed using the Nuclear Data Processing Code NJOY2016. All the MCNP(X) calculations of k eff values with these four libraries were compared with the experimentally measured results available in the International Criticality Safety Benchmark Evaluation Project. The obtained results are discussed and analyzed in this paper.
WWER-1000 Burnup Credit Benchmark (CB5)
International Nuclear Information System (INIS)
Manolova, M.A.
2002-01-01
In the paper the specification of the WWER-1000 Burnup Credit Benchmark first phase (depletion calculations) is given. The second phase, criticality calculations for the WWER-1000 fuel pin cell, will be specified after evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field (Author)
A benchmark test of computer codes for calculating average resonance parameters
International Nuclear Information System (INIS)
Ribon, P.; Thompson, A.
1983-01-01
A set of resonance parameters has been generated from known, but secret, average values; the parameters have then been adjusted to mimic experimental data by including the effects of Doppler broadening, resolution broadening and statistical fluctuations. Average parameters calculated from the dataset by various computer codes are compared with each other, and also with the true values. The benchmark test is fully described in the report NEANDC160-U (NEA Data Bank Newsletter No. 27 July 1982); the present paper is a summary of this document. (Auth.)
Calculations to an IAHR-benchmark test using the CFD-code CFX-4
Energy Technology Data Exchange (ETDEWEB)
Krepper, E
1998-10-01
The calculation concerns a test which was defined as a benchmark for 3-D codes by the working group on advanced nuclear reactor types of IAHR (International Association of Hydraulic Research). The test is well documented and detailed measuring results are available. It aims at the investigation of phenomena that are important for heat removal under natural circulation conditions in a nuclear reactor. The task for the calculation was the modelling of the forced flow field of a single-phase incompressible fluid, with consideration of heat transfer and the influence of gravity. These phenomena are typical also for other industrial processes, and the importance of modelling them correctly in other applications is a further motivation for performing these calculations. (orig.)
Validation of new 240Pu cross section and covariance data via criticality calculation
International Nuclear Information System (INIS)
Kim, Do Heon; Gil, Choong-Sup; Kim, Hyeong Il; Lee, Young-Ouk; Leal, Luiz C.; Dunn, Michael E.
2011-01-01
Recent collaboration between KAERI and ORNL has completed an evaluation of the 240 Pu neutron cross section with covariance data. The new 240 Pu cross section data have been validated through 28 criticality safety benchmark problems taken from the ICSBEP and/or CSEWG specifications with MCNP calculations. The calculation results based on the new evaluation have been compared with those based on recent evaluations such as ENDF/B-VII.0, JEFF-3.1.1, and JENDL-4.0. In addition, the new 240 Pu covariance data have been tested for some criticality benchmarks via the DANTSYS/SUSD3D-based nuclear data sensitivity and uncertainty analysis of k eff . The k eff uncertainty estimates from the new covariance data have been compared with those from JENDL-4.0, JENDL-3.3, and Low-Fidelity covariance data. (author)
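Sensitivity-and-uncertainty estimates of the kind referred to here are conventionally obtained with the "sandwich" formula, var(k) = SᵀCS, combining a vector of sensitivity coefficients S with a covariance matrix C. A minimal numerical sketch with assumed (not evaluated) values:

```python
import numpy as np

# Relative sensitivity coefficients of k-eff to three reaction-rate
# parameters (illustrative values, not evaluated data).
S = np.array([-0.12, 0.05, 0.02])

# Relative covariance matrix of the same parameters (assumed, symmetric).
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 2.5e-4, 0.0],
              [0.0,    0.0,    1.0e-4]])

# Sandwich rule: relative variance of k-eff = S^T C S
var_k = S @ C @ S
print(f"relative k-eff uncertainty = {np.sqrt(var_k) * 100:.3f} %")
```

The off-diagonal covariance terms matter: with correlated parameters the contributions do not simply add in quadrature.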
RB reactor as the U-D2O benchmark criticality system
International Nuclear Information System (INIS)
Pesic, M.
1998-01-01
From a rich and valuable database for 580 different reactor cores formed up to now in the RB nuclear reactor, a selected and well recorded set was carefully chosen and is preliminarily proposed as a new uranium-heavy water benchmark criticality system for validation of reactor design computer codes and data libraries. The first results of validation of the MCNP code and the accompanying neutron cross section libraries are presented in this paper. (author)
Benchmark evaluation of the RELAP code to calculate boiling in narrow channels
International Nuclear Information System (INIS)
Kunze, J.F.; Loyalka, S.K.; McKibben, J.C.; Hultsch, R.; Oladiran, O.
1990-01-01
The RELAP code has been tested with benchmark experiments (such as the loss-of-fluid test experiments at the Idaho National Engineering Laboratory) at high pressures and temperatures characteristic of those encountered in loss-of-coolant accidents (LOCAs) in commercial light water power reactors. Application of RELAP to the LOCA analysis of a low pressure (< 7 atm) and low temperature (< 100 degree C), plate-type research reactor, such as the University of Missouri Research Reactor (MURR), the high-flux breeder reactor, high-flux isotope reactor, and Advanced Test Reactor, requires resolution of questions involving overextrapolation to very low pressures and low temperatures, and calculations of the pulsed boiling/reflood conditions in the narrow rectangular cross-section channels (typically 2 mm thick) of the plate fuel elements. The practical concern of this problem is that plate fuel temperatures predicted by RELAP5 (MOD2, version 3) during the pulsed boiling period can reach high enough temperatures to cause plate (clad) weakening, though not melting. Since an experimental benchmark of RELAP under such LOCA conditions is not available and since such conditions present substantial challenges to the code, it is important to verify the code predictions. The comparison of the pulsed boiling experiments with the RELAP calculations involves both visual observations of void fraction versus time and measurements of temperatures near the fuel plate surface
Benchmark calculation for GT-MHR using HELIOS/MASTER code package and MCNP
International Nuclear Information System (INIS)
Lee, Kyung Hoon; Kim, Kang Seog; Noh, Jae Man; Song, Jae Seung; Zee, Sung Quun
2005-01-01
The latest research associated with the very high temperature gas-cooled reactor (VHTR) is focused on the verification of system performance and safety under operating conditions for the VHTRs. As a part of this effort, an international gas-cooled reactor program initiated by the IAEA is ongoing. The key objectives of this program are the validation of analytical computer codes and the evaluation of benchmark models for the projected and actual VHTRs. A new reactor physics analysis procedure for the prismatic VHTR is under development, adopting the conventional two-step procedure. In this procedure, few-group constants are generated through transport lattice calculations using the HELIOS code, and the core physics analysis is performed by the 3-dimensional nodal diffusion code MASTER. We evaluated the performance of the HELIOS/MASTER code package through benchmark calculations related to the GT-MHR (Gas Turbine-Modular Helium Reactor) for the disposition of weapons plutonium. In parallel, MCNP is employed as a reference code to verify the results of the HELIOS/MASTER procedure
Criticality calculation by the LTSN method
International Nuclear Information System (INIS)
Batistela, Claudia H.F.; Vilhena, Marco T. de; Borges, Volnei
1997-01-01
This work evaluates criticality parameters (multiplication factor and critical thickness) by the LTSN method in one-dimensional homogeneous and heterogeneous slabs, considering a one-group model and isotropic scattering. The LTSN method encompasses the following steps: application of the Laplace transform to a set of discrete ordinates equations, analytical solution of the algebraic linear system for the transformed angular fluxes, and their reconstruction by the Heaviside expansion technique. The novel feature of the proposed method is the determination of the criticality parameters by solving a transcendental equation. Numerical results are reported. 12 refs., 2 tabs
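The idea of extracting a criticality parameter from a transcendental equation can be illustrated outside the LTSN formalism with a simpler one-group diffusion analogue (not the transport solution of the abstract): for a reflected slab, the critical core half-thickness a solves D_c·B·tan(Ba) = D_r·κ·coth(κb). A sketch with assumed constants, solved by bisection:

```python
import math

# One-group diffusion constants (illustrative values, not evaluated data)
D_c, nu_sig_f, sig_a_c = 1.0, 0.16, 0.12    # core: D, nu*Sigma_f, Sigma_a (cm^-1)
D_r, sig_a_r, b        = 1.2, 0.01, 20.0    # reflector: D, Sigma_a, thickness (cm)

B = math.sqrt((nu_sig_f - sig_a_c) / D_c)   # core material buckling
kappa = math.sqrt(sig_a_r / D_r)            # reflector inverse diffusion length

def f(a):
    # Flux/current matching at the core-reflector interface; f(a) = 0 at criticality:
    # D_c * B * tan(B*a) - D_r * kappa * coth(kappa*b)
    return D_c * B * math.tan(B * a) - D_r * kappa / math.tanh(kappa * b)

# f is monotonically increasing on (0, pi/(2B)); bracket the root and bisect
lo, hi = 1e-9, math.pi / (2 * B) - 1e-9
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0.0:
        hi = mid
    else:
        lo = mid
a_crit = 0.5 * (lo + hi)
print(f"critical core half-thickness = {a_crit:.3f} cm")
```

The reflected critical half-thickness comes out well below the bare-slab value π/(2B), reflecting the low absorption assumed in the reflector.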
International Nuclear Information System (INIS)
Yu, Rong Mei; Zan, Li Rong; Jiao, Li Guang; Ho, Yew Kam
2017-01-01
Spatially confined atoms have been extensively investigated to model atomic systems under extreme pressures. For the simplest hydrogen-like atoms and isotropic harmonic oscillators, numerous physical quantities have been established with very high accuracy. However, the expectation value considered here, which is of practical importance in many applications, shows significant discrepancies among calculations by different methods. In this work we employed the basis expansion method with cut-off Slater-type orbitals to investigate these two confined systems. Accurate values for several low-lying bound states were obtained by carefully examining the convergence with respect to the size of the basis. A scaling law for this quantity was derived and used to verify the accuracy of the numerical results. Comparison with other calculations shows that the present results establish benchmark values for this quantity, which may be useful in future studies. (author)
Energy Technology Data Exchange (ETDEWEB)
Renner, Franziska [Physikalisch-Technische Bundesanstalt (PTB), Braunschweig (Germany)
2016-11-01
Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide.
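A comparison "with detailed consideration of measurement uncertainty" amounts to checking that the experiment-simulation deviation is covered by the combined standard uncertainty. A minimal sketch using the uncertainty magnitudes quoted in the abstract (~0.7 % experimental, ~1.0 % simulation); the dose values themselves are illustrative:

```python
import math

# Illustrative dose values (Gy) for one ionization chamber
d_exp, u_exp_rel = 1.000, 0.007   # experiment, relative standard uncertainty
d_mc,  u_mc_rel  = 0.995, 0.010   # Monte Carlo simulation, relative std. uncertainty

# Combined standard uncertainty of the difference
u_comb = math.hypot(d_exp * u_exp_rel, d_mc * u_mc_rel)

# Normalized deviation: agreement within uncertainty when this is <= ~2 (k = 2)
en = abs(d_exp - d_mc) / u_comb
print(f"deviation = {abs(d_exp - d_mc):.4f} Gy, combined u = {u_comb:.4f} Gy, |E-C|/u = {en:.2f}")
```

Here the 0.5 % deviation is well inside the combined uncertainty, i.e. experiment and simulation "coincide" in the metrological sense.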
Benchmark calculation of APOLLO-2 and SLAROM-UF in a fast reactor lattice
International Nuclear Information System (INIS)
Hazama, T.
2009-07-01
A lattice cell benchmark calculation is carried out for APOLLO2 and SLAROM-UF on an infinite lattice of a simple pin cell representative of a fast reactor. The accuracy in k-infinity and reaction rates is investigated for their reference and standard level calculations. In the first reference level calculation, APOLLO2 and SLAROM-UF agree with the reference value of k-infinity obtained by a continuous-energy Monte Carlo calculation within 50 pcm. However, larger errors are observed for a particular reaction rate and energy range. The major problem common to both codes is in the cross section library of 239 Pu in the unresolved energy range. In the second reference level calculation, which is based on the ECCO 1968-group structure, both results for k-infinity agree with the reference value within 100 pcm. The resonance overlap effect amounts to several percent in the cross sections of heavy nuclides. In the standard level calculation based on the APOLLO2 library creation methodology, a discrepancy of more than 300 pcm appears. A restriction is revealed in APOLLO2: its standard cross section library does not have a sufficiently small background cross section to evaluate the self-shielding effect on 56 Fe cross sections. The restriction can be removed by introducing the mixture self-shielding treatment recently added to APOLLO2. The original SLAROM-UF standard level calculation, based on the JFS-3 library creation methodology, is the best among the standard level calculations; its advantage comes mainly from the use of a proper weight function for light or intermediate nuclides. (author)
Critical Assessment of Metagenome Interpretation-a benchmark of metagenomics software.
Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J; Chia, Burton K H; Denis, Bertrand; Froula, Jeff L; Wang, Zhong; Egan, Robert; Don Kang, Dongwan; Cook, Jeffrey J; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael D; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z; Cuevas, Daniel A; Edwards, Robert A; Saha, Surya; Piro, Vitor C; Renard, Bernhard Y; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C; Woyke, Tanja; Vorholt, Julia A; Schulze-Lefert, Paul; Rubin, Edward M; Darling, Aaron E; Rattei, Thomas; McHardy, Alice C
2017-11-01
Methods for assembly, taxonomic profiling and binning are key to interpreting metagenome data, but a lack of consensus about benchmarking complicates performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on highly complex and realistic data sets, generated from ∼700 newly sequenced microorganisms and ∼600 novel viruses and plasmids and representing common experimental setups. Assembly and genome binning programs performed well for species represented by individual genomes but were substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below family level. Parameter settings markedly affected performance, underscoring their importance for program reproducibility. The CAMI results highlight current challenges but also provide a roadmap for software selection to answer specific research questions.
Influence of the ab initio n–d cross sections in the critical heavy-water benchmarks
International Nuclear Information System (INIS)
Morillon, B.; Lazauskas, R.; Carbonell, J.
2013-01-01
Highlights: ► We solve the three-nucleon problem using different NN potentials (MT, AV18 and INOY) to calculate the neutron-deuteron cross sections. ► These cross sections are compared to the existing experimental data and to international libraries. ► We describe the different sets of heavy water benchmarks for which the Monte Carlo simulations have been performed including our new neutron-deuteron cross sections. ► The results obtained with the ab initio INOY potential have been compared with the calculations based on the international library cross sections and are found to be of the same quality. - Abstract: The n–d elastic and breakup cross sections are computed by solving the three-body Faddeev equations for realistic and semi-realistic nucleon–nucleon potentials. These cross sections are inserted in the Monte Carlo simulation of the nuclear processes considered in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook). The results obtained using these ab initio n–d cross sections are compared with those provided by the most renowned international libraries
Criticality calculation of non-ordinary systems
Energy Technology Data Exchange (ETDEWEB)
Kalugin, A. V., E-mail: Kalugin-AV@nrcki.ru; Tebin, V. V. [National Research Centre Kurchatov Institute (Russian Federation)
2016-12-15
The specific features of calculation of the effective multiplication factor using the Monte Carlo method for weakly coupled and non-asymptotic multiplying systems are discussed. Particular examples are considered and practical recommendations on detection and Monte Carlo calculation of systems typical in numerical substantiation of nuclear safety for VVER fuel management problems are given. In particular, the problems of the choice of parameters for the batch mode and the method for normalization of the neutron batch, as well as finding and interpretation of the eigenvalue spectrum for the integral fission matrix, are discussed.
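The eigenvalue spectrum of the integral fission matrix mentioned above is diagnostic for weakly coupled systems: the dominance ratio (second-to-first eigenvalue) approaches unity, which slows fission source convergence in Monte Carlo iterations. A toy two-region sketch with assumed (illustrative) matrix entries:

```python
import numpy as np

# Toy fission matrix for two weakly coupled multiplying units:
# F[i, j] = expected fission neutrons produced in region i per
# fission neutron born in region j (illustrative numbers).
coupling = 0.01
F = np.array([[0.95, coupling],
              [coupling, 0.93]])

eigvals = np.sort(np.abs(np.linalg.eigvals(F)))[::-1]
k_eff = eigvals[0]                        # fundamental-mode multiplication factor
dominance_ratio = eigvals[1] / eigvals[0] # close to 1 for weak coupling
print(f"k_eff = {k_eff:.5f}, dominance ratio = {dominance_ratio:.5f}")
```

As the coupling term shrinks, the dominance ratio tends to 1 and the power iteration underlying the batch-mode k-eff calculation converges ever more slowly, which is why batch parameters and source normalization need special care for such systems.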
Criticality calculations for homogeneous mixtures of uranium and plutonium
International Nuclear Information System (INIS)
Spiegelberg, R. de S.H.
1981-05-01
Critical parameters were calculated using one-dimensional multigroup transport theory. Calculations were performed for water mixtures of uranium metal, uranium oxides and plutonium nitrates to determine the dimensions of simple critical geometries (sphere and cylinder). The results were plotted as critical parameters (volume, radius or critical mass) and compared with the critical values given in the Handbuch zur Kritikalitat. A sensitivity study of the influence of mesh spacing, multigroup structure and order of the S sub(n) approximation on the critical radius was carried out. The GAMTEC-II code was used to generate multigroup cross section data, and critical radii were calculated with the one-dimensional multigroup transport code DTF-IV. (Author) [pt
International Nuclear Information System (INIS)
Svarny, J.; Mikolas, P.
1999-01-01
The first stage of the ATW neutronics benchmark (without an external source), based on simple modelling of the two-component concept, is presented. A simple model of the two-component ATW concept (graphite + molten salt system) was established. The main purpose of this benchmark is not only to provide the basic characteristics of the given ADS but also to test codes in calculations of the transmutation rate of waste and to evaluate basic kinetics parameters and reactivity effects. (author)
Specification of phase 3 benchmark (Hex-Z heterogeneous and burnup calculation)
International Nuclear Information System (INIS)
Kim, Y.I.
2002-01-01
During the second RCM of the IAEA Co-ordinated Research Project "Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects" the following items were identified as important. Heterogeneity will affect absolute core reactivity. Rod worths could be considerably reduced by heterogeneity effects, depending on their detailed design. Heterogeneity effects will affect the resonance self-shielding in the treatment of fuel Doppler, steel Doppler and sodium density effects; however, it was considered more important to concentrate on the sodium density effect in order to reduce the calculational effort required. It was also recognized that burnup effects will have an influence on fuel Doppler and sodium worths. A benchmark for the assessment of the heterogeneity effect in Phase 3 was defined. It is to be performed for the Hex-Z model of the reactor only; no calculations will be performed for the R-Z model. For comparison with heterogeneous evaluations, the control rod worth will be calculated at the beginning of the equilibrium cycle, based on the homogeneous model. The definitions of rod raised and rod inserted for SHR are given, using the composition numbers
International Nuclear Information System (INIS)
Key, S.W.
1985-01-01
The results of two calculations related to the impact response of spent nuclear fuel shipping casks are compared to the benchmark results reported in a recent study by the Japan Society of Mechanical Engineers Subcommittee on Structural Analysis of Nuclear Shipping Casks. Two idealized impacts are considered. The first calculation utilizes a right circular cylinder of lead subjected to a 9.0 m free fall onto a rigid target, while the second calculation utilizes a stainless steel clad cylinder of lead subjected to the same impact conditions. For the first problem, four calculations from graphical results presented in the original study have been singled out for comparison with HONDO III. The results from DYNA3D, STEALTH, PISCES, and ABAQUS are reproduced. In the second problem, the results from four separate computer programs in the original study, ABAQUS, ANSYS, MARC, and PISCES, are used and compared with HONDO III. The current version of HONDO III contains a fully automated implementation of the explicit-explicit partitioning procedure for the central difference method time integration which results in a reduction of computational effort by a factor in excess of 5. The results reported here further support the conclusion of the original study that the explicit time integration schemes with automated time incrementation are effective and efficient techniques for computing the transient dynamic response of nuclear fuel shipping casks subject to impact loading. (orig.)
Benchmarking quantum mechanical calculations with experimental NMR chemical shifts of 2-HADNT
Liu, Yuemin; Junk, Thomas; Liu, Yucheng; Tzeng, Nianfeng; Perkins, Richard
2015-04-01
In this study, both GIAO-DFT and GIAO-MP2 calculations of nuclear magnetic resonance (NMR) spectra were benchmarked against experimental chemical shifts. The chemical shifts were determined experimentally for carbon-13 (C-13) of the seven carbon atoms of the TNT degradation product 2-hydroxylamino-4,6-dinitrotoluene (2-HADNT). GIAO quantum mechanical calculations were carried out using Becke-3-Lee-Yang-Parr (B3LYP) and six other hybrid DFT methods sharing the same correlation functional LYP: Becke-1-Lee-Yang-Parr (B1LYP), Becke-half-and-half-Lee-Yang-Parr (BHandHLYP), Cohen-Handy-3-Lee-Yang-Parr (O3LYP), Coulomb-attenuating-B3LYP (CAM-B3LYP), modified-Perdew-Wang-91-Lee-Yang-Parr (mPW1LYP), and Xu-3-Lee-Yang-Parr (X3LYP). The results showed that the GIAO-MP2 method gives the most accurate chemical shift values, and that O3LYP provides the best prediction of chemical shifts among B3LYP and the other five DFT methods. Three types of atomic partial charges, Mulliken (MK), electrostatic potential (ESP), and natural bond orbital (NBO), were also calculated at the MP2/aug-cc-pVDZ level. A reasonable correlation was found between the NBO partial charges and the experimental C-13 chemical shifts.
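A "reasonable correlation" between partial charges and experimental shifts is typically quantified with a least-squares fit and a correlation coefficient. The sketch below shows the computation only; the charge and shift values are hypothetical stand-ins, not the paper's data.

```python
import numpy as np

# Hypothetical NBO partial charges and experimental C-13 shifts (ppm)
# for seven carbons (illustrative values, not the 2-HADNT data).
q_nbo = np.array([-0.35, -0.18, 0.05, 0.21, 0.33, 0.48, 0.55])
shift = np.array([ 18.5, 112.3, 124.8, 131.6, 140.2, 147.9, 151.0])

slope, intercept = np.polyfit(q_nbo, shift, 1)   # linear least-squares fit
r = np.corrcoef(q_nbo, shift)[0, 1]              # Pearson correlation coefficient
print(f"shift ~ {slope:.1f} * q + {intercept:.1f}, r = {r:.3f}")
```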
International Nuclear Information System (INIS)
Akie, Hiroshi; Ishiguro, Yukio; Takano, Hideki
1988-10-01
The results of the NEACRP HCLWR cell burnup benchmark calculations are summarized in this report. Fifteen organizations from eight countries participated in this benchmark and submitted twenty solutions. Large differences are still observed among the calculated values of void reactivities and conversion ratios. These differences are mainly caused by discrepancies in the reaction rates of U-238, Pu-239 and fission products. The physics problems related to these results are briefly investigated in the report. In the specialists' meeting on these benchmark calculations held in April 1988, it was recommended to perform continuous-energy Monte Carlo calculations in order to obtain reference solutions for design codes. The conclusions reached at the specialists' meeting are also presented. (author)
Calculation of Upper Subcritical Limits for Nuclear Criticality in a Repository
International Nuclear Information System (INIS)
J.W. Pegram
1998-01-01
The purpose of this document is to present the methodology to be used for development of the Subcritical Limit (SL) for post-closure conditions for the Yucca Mountain repository. The SL is a value based on a set of benchmark criticality multiplication factor (k eff) results that are outputs of the MCNP calculation method. This SL accounts for calculational biases and associated uncertainties resulting from the use of MCNP as the method of assessing k eff. The context for an SL estimate includes the range of applicability (based on the set of MCNP results) and the type of SL required for the application at hand. This document includes illustrative calculations for each of three approaches. The data sets used for the example calculations are identified in Section 5.1. These represent three waste categories, and SLs for each of these sets of experiments are computed in this document. Future MCNP data sets will be analyzed using the methods discussed here. The treatment of the biases evaluated on sets of k eff results via MCNP is statistical in nature. This document does not address additional non-statistical contributions to the bias margin, acknowledging that regulatory requirements may impose additional administrative penalties. Potentially, there are other biases or margins that should be accounted for when assessing criticality (k eff). Only aspects of the bias determined using the stated assumptions and benchmark critical data sets are included in the methods and sample calculations of this document. The set of benchmark experiments used in the validation of the computational system should be representative of the composition, configuration, and nuclear characteristics of the application at hand. In this work, a range of critical experiments is the basis for establishing the SL for three categories of waste types that will be in the repository. The ultimate purpose of this document is to present methods that will effectively characterize the MCNP
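One common statistical treatment of a set of benchmark k eff results (illustrative of the general approach, not necessarily the document's exact method) is a one-sided lower tolerance limit on the benchmark population, from which an administrative margin is then subtracted. The sketch below uses the Natrella approximation for the 95 %/95 % tolerance factor and hypothetical MCNP k eff values:

```python
import statistics
import math

# Hypothetical MCNP keff results for a set of benchmark critical experiments
# (illustrative values; real validation sets hold tens of experiments).
k = [0.9962, 0.9987, 1.0004, 0.9971, 0.9993, 0.9958, 1.0011,
     0.9969, 0.9984, 0.9990, 0.9962, 0.9978, 1.0002, 0.9966, 0.9981]
n = len(k)
mean = statistics.fmean(k)
s = statistics.stdev(k)

# One-sided 95%/95% tolerance factor, Natrella approximation
z = statistics.NormalDist().inv_cdf(0.95)
a = 1 - z * z / (2 * (n - 1))
b = z * z - z * z / n
ktol = (z + math.sqrt(z * z - a * b)) / a

lower_tolerance_limit = mean - ktol * s
admin_margin = 0.05                        # administrative margin (assumed value)
usl = lower_tolerance_limit - admin_margin
print(f"bias-adjusted lower limit = {lower_tolerance_limit:.4f}, USL = {usl:.4f}")
```

The tolerance factor exceeds the plain normal quantile because it must cover both the population spread and the uncertainty in the sample mean and standard deviation.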
The University of Pisa calculations for the Phase I of the OECD/NEA UAM Benchmark
International Nuclear Information System (INIS)
Ball, M.; Parisi, C.; D'Auria, F.
2009-01-01
In this paper we present the University of Pisa preliminary results for the first exercise of Phase I of the OECD/NEA Benchmark on Uncertainty Analysis in Modeling (UAM). The scope of Exercise 1 is to address the uncertainties due to the basic nuclear data as well as the impact of processing the nuclear and covariance data, the selection of the multi-group structure, and the self-shielding treatment. The DRAGON and TSUNAMI codes were employed, using the available covariance data matrix. The execution of the DRAGON calculations required the use of the ANGELO and LAMBDA codes for the extension of the covariance matrix from the original SCALE 44-group structure to the DRAGON 69-group structure. The uncertainties for the main cross sections were evaluated and are presented here. (authors)
Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D.; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z.; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J.; Chia, Burton K. H.; Denis, Bertrand; Froula, Jeff L.; Wang, Zhong; Egan, Robert; Kang, Dongwan Don; Cook, Jeffrey J.; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W.; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z.; Cuevas, Daniel A.; Edwards, Robert A.; Saha, Surya; Piro, Vitor C.; Renard, Bernhard Y.; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C.; Woyke, Tanja; Vorholt, Julia A.; Schulze-Lefert, Paul; Rubin, Edward M.; Darling, Aaron E.; Rattei, Thomas; McHardy, Alice C.
2018-01-01
In metagenome analysis, computational methods for assembly, taxonomic profiling and binning are key components facilitating downstream biological data interpretation. However, a lack of consensus about benchmarking datasets and evaluation metrics complicates proper performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on datasets of unprecedented complexity and realism. Benchmark metagenomes were generated from ~700 newly sequenced microorganisms and ~600 novel viruses and plasmids, including genomes with varying degrees of relatedness to each other and to publicly available ones and representing common experimental setups. Across all datasets, assembly and genome binning programs performed well for species represented by individual genomes, while performance was substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below the family level. Parameter settings substantially impacted performances, underscoring the importance of program reproducibility. While highlighting current challenges in computational metagenomics, the CAMI results provide a roadmap for software selection to answer specific research questions. PMID:28967888
Evaluation of AMPX-KENO benchmark calculations for high-density spent fuel storage racks
International Nuclear Information System (INIS)
Turner, S.E.; Gurley, M.K.
1981-01-01
The AMPX-KENO computer code package is commonly used to evaluate criticality in high-density spent fuel storage rack designs. Consequently, it is important to know the reliability that can be placed on such calculations and whether or not the results are conservative. This paper evaluates a series of AMPX-KENO calculations which have been made on selected critical experiments. The results are compared with similar analyses reported in the literature by the Oak Ridge National Laboratory and B&W. 8 refs
Dose Rate Experiment at JET for Benchmarking the Calculation Direct One Step Method
International Nuclear Information System (INIS)
Angelone, M.; Petrizzi, L.; Pillon, M.; Villari, R.; Popovichev, S.
2006-01-01
Neutrons produced by D-D and D-T plasmas induce the activation of tokamak materials and components. The development of reliable methods to assess dose rates is a key issue for maintaining and operating nuclear machines, in normal and off-normal conditions. In the frame of the EFDA Fusion Technology work programme, a computational tool based on the MCNP Monte Carlo code has been developed to predict the dose rate after shutdown: it is called the Direct One Step Method (D1S). The D1S is an innovative approach in which the decay gammas are coupled to the neutrons as in the prompt case and are transported in one single step in the same run. Benchmarking this new tool with experimental data taken in a complex geometry like that of a tokamak is a fundamental step to test the reliability of the D1S method. A dedicated benchmark experiment was proposed for the 2005-2006 experimental campaign of JET. Two irradiation positions were selected for the benchmark: one inner position inside the vessel, not far from the plasma, called the 2 Upper irradiation end (IE2), where the neutron fluence is relatively high; the second position is just outside a vertical port in an external position (EX), where the neutron flux is lower and the dose rate to be measured is not very far from the residual background. Passive detectors are used for in-vessel measurements: the high-sensitivity thermoluminescent dosimeters (TLDs) GR-200A (natural LiF), which ensure measurements down to environmental dose levels. An active detector of Geiger-Muller (GM) type is used for the out-of-vessel dose rate measurement. Before their use, the detectors were calibrated in a secondary gamma-ray standard (Cs-137 and Co-60) facility in terms of air kerma. The background measurement was carried out in the period July-September 2005 in the outside position EX using the GM tube, and in September 2005 inside the vacuum vessel using TLD detectors located in the 2 Upper irradiation end IE2. In the present work
Parametric Criticality Safety Calculations for Arrays of TRU Waste Containers
Energy Technology Data Exchange (ETDEWEB)
Gough, Sean T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-26
The Nuclear Criticality Safety Division (NCSD) has performed criticality safety calculations for finite and infinite arrays of transuranic (TRU) waste containers. The results of these analyses may be applied in any technical area onsite (e.g., TA-54, TA-55, etc.), as long as the assumptions herein are met. These calculations are designed to update the existing reference calculations for waste arrays documented in Reference 1, in order to meet current guidance on calculational methodology.
Validation of the EIR LWR calculation methods for criticality assessment of storage pools
International Nuclear Information System (INIS)
Grimm, P.; Paratte, J.M.
1986-11-01
The EIR code system for the calculation of light water reactors is presented and the methods used are briefly described. The application of the system to various types of critical experiments and benchmark problems proves its good accuracy, even for heterogeneous configurations containing strong neutron absorbers such as Boral. Since the multiplication factor k eff is normally somewhat overpredicted and the spread of the results is small, this code system is validated for the calculation of storage pools, taking into account a safety margin of 1.5% on k eff. (author)
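The acceptance logic behind such a margin can be sketched in a few lines. This is an illustration only, not taken from the EIR report; only the 1.5% figure comes from the abstract above.

```python
# Illustrative only: apply a fixed safety margin to a calculated k_eff,
# accepting a configuration only if k_eff plus the margin stays below 1.
def is_acceptably_subcritical(k_calc: float, margin: float = 0.015) -> bool:
    """Return True if the calculated k_eff plus the safety margin is below 1."""
    return k_calc + margin < 1.0

print(is_acceptably_subcritical(0.93))  # True: 0.945 < 1.0
print(is_acceptably_subcritical(0.99))  # False: 1.005 >= 1.0
```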
International Nuclear Information System (INIS)
Semenov, Mikhail
2002-11-01
This report continues the study of the experiments performed on the BFS-62-3A critical assembly in Russia. The objective of the work is to determine the effect of cross-section uncertainties on reactor neutronics parameters as applied to the hybrid core of the BN-600 reactor of Beloyarskaya NPP. A two-dimensional benchmark model of BFS-62-3A was created specially for these purposes and the experimental values were reduced to it. The benchmark characteristics for this assembly are 1) criticality; 2) central fission rate ratios (spectral indices); and 3) fission rate distributions in the stainless steel reflector. The effects of nuclear data libraries have been studied by comparing the results calculated using the available modern data libraries - ENDF/B-V, ENDF/B-VI, ENDF/B-VI-PT, JENDL-3.2 and ABBN-93. All results were computed by the Monte Carlo method with continuous-energy cross-sections. The cross sections of the major isotopes were checked against a wide collection of criticality benchmarks. It was shown that the ENDF/B-V data underestimate the criticality of fast reactor systems by up to 2% Δk. For the remaining data, the differences between one another in criticality for BFS-62-3A are around 0.6% Δk. However, taking into account the results obtained for other fast reactor benchmarks (including steel-reflected ones), it may be concluded that the difference in criticality calculation results can reach 1% Δk. This value is in good agreement with the cross-section uncertainty evaluated for the BN-600 hybrid core (±0.6% Δk). This work is related to the JNC-IPPE Collaboration on Experimental Investigation of Excess Weapons Grade Pu Disposition in the BN-600 Reactor Using the BFS-2 Facility. (author)
Comparisons of the MCNP criticality benchmark suite with ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0
International Nuclear Information System (INIS)
Kim, Do Heon; Gil, Choong-Sup; Kim, Jung-Do; Chang, Jonghwa
2003-01-01
A comparative study has been performed with the latest evaluated nuclear data libraries ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0. The study has been conducted through benchmark calculations for 91 criticality problems with the libraries processed for MCNP4C. The calculation results have been compared with those of the ENDF60 library. The self-shielding effects of the unresolved-resonance (UR) probability tables have also been estimated for each library. The χ² differences between the MCNP results and the experimental data were calculated for each library. (author)
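As a sketch of the kind of figure of merit used here, a χ² difference between calculated and measured multiplication factors can be computed as the sum of squared, sigma-weighted residuals. The numbers below are hypothetical, not taken from the study.

```python
import numpy as np

def chi_square(calc, exp, sigma):
    """Chi-square difference between calculated and experimental values,
    weighted by the experimental standard deviations."""
    calc, exp, sigma = (np.asarray(v, dtype=float) for v in (calc, exp, sigma))
    return float(np.sum(((calc - exp) / sigma) ** 2))

# Hypothetical k_eff results for three benchmark cases versus experiment:
chi2 = chi_square([1.0005, 0.9980, 1.0021], [1.0, 1.0, 1.0],
                  [0.0010, 0.0015, 0.0012])
print(chi2)
```

A smaller χ² across the benchmark suite indicates a library whose calculated k_eff values sit closer to experiment relative to the experimental uncertainties.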
SIMCRI: a simple computer code for calculating nuclear criticality parameters
International Nuclear Information System (INIS)
Nakamaru, Shou-ichi; Sugawara, Nobuhiko; Naito, Yoshitaka; Katakura, Jun-ichi; Okuno, Hiroshi.
1986-03-01
This is a user's manual for the simple criticality calculation code SIMCRI. The code has been developed to facilitate criticality calculations on a single unit of nuclear fuel. SIMCRI makes an extensive survey possible with little computing time. The cross-section library MGCL used by SIMCRI is the same as that of the Monte Carlo criticality code KENOIV; it is, therefore, easy to compare the results of the two codes. SIMCRI solves eigenvalue problems and fixed source problems based on the one-space-point B1 equation. The results include the infinite and effective multiplication factors, critical buckling, migration area, diffusion coefficient and so on. SIMCRI is included in the criticality safety evaluation code system JACS. (author)
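The quantities SIMCRI reports are related by standard one-group theory; a minimal sketch of those relations (my own illustration with hypothetical numbers, not SIMCRI's actual numerics):

```python
def k_eff_one_group(k_inf, migration_area, buckling):
    """One-group relation k_eff = k_inf / (1 + M^2 * B^2)."""
    return k_inf / (1.0 + migration_area * buckling)

def critical_buckling(k_inf, migration_area):
    """Buckling at which k_eff = 1: B^2 = (k_inf - 1) / M^2."""
    return (k_inf - 1.0) / migration_area

# Hypothetical values: M^2 in cm^2, B^2 in cm^-2.
k_inf, M2 = 1.30, 60.0
B2c = critical_buckling(k_inf, M2)
print(B2c)                                  # critical buckling for this medium
print(k_eff_one_group(k_inf, M2, B2c))      # 1.0 by construction
```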
Benchmark calculations on fluid coupled co-axial cylinders typical to LMFBR structures
International Nuclear Information System (INIS)
Dostal, M.; Descleve, P.; Gantenbein, F.; Lazzeri, L.
1983-01-01
This paper describes a joint effort promoted and funded by the Commission of the European Community under the umbrella of the Fast Reactor Co-ordinating Committee and Working Group on Codes and Standards No. 2, with the purpose of testing several programs currently used for the dynamic analysis of fluid-coupled structures. The scope of the benchmark calculations is limited to beam-type modes of vibration, small displacements of the structures and small pressure variations such as encountered in seismic or flow-induced vibration problems. Five computer codes were used: ANSYS, AQUAMODE, NOVAX, MIAS/SAP4 and ZERO, each employing a different structural-fluid formulation. The calculations were performed for four different geometrical configurations of concentric cylinders, in which the effects of gap size, water level and support conditions were considered. The analytical work was accompanied by experiments carried out on a purpose-built rig. The test rig consisted of two concentric cylinders independently supported on flexible cantilevers. Geometrical simplicity and attention in the rig design to eliminating structural coupling between the cylinders led to unambiguous test results. Only the beam natural frequencies, in phase and out of phase, were measured. The comparison of the different analytical methods and experimental results is presented and discussed. The degree of agreement varied between very good and unacceptable. (orig./GL)
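The physics being benchmarked — fluid coupling splitting the beam modes of two cylinders into an in-phase and an out-of-phase pair — can be sketched as a two-degree-of-freedom eigenproblem with a fluid added-mass matrix. All numbers below are hypothetical placeholders, not the rig's actual properties.

```python
import numpy as np

# Hypothetical support stiffnesses [N/m] and cylinder masses [kg]:
K = np.diag([5.0e6, 8.0e6])
M_struct = np.diag([120.0, 200.0])
# Fluid added-mass matrix [kg]; the off-diagonal terms provide the coupling
# that splits the in-phase and out-of-phase beam modes.
M_added = np.array([[40.0, -25.0],
                    [-25.0, 60.0]])

M = M_struct + M_added
# Solve K x = w^2 M x for the coupled natural frequencies.
eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
freqs_hz = np.sort(np.sqrt(eigvals.real)) / (2.0 * np.pi)
print(freqs_hz)  # lower and higher coupled natural frequencies [Hz]
```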
Benchmark calculations on residue production within the EURISOL DS project. Part 1: thin targets
Energy Technology Data Exchange (ETDEWEB)
David, J.C.; Blideanu, V.; Boudard, A.; Dore, D.; Leray, S.; Rapp, B.; Ridikas, D.; Thiolliere, N
2006-12-15
We began this benchmark study using mass distribution data of reaction products obtained at GSI in inverse kinematics. This step allowed us to make a first selection among 10 spallation models and thus obtain a first assessment of the quality of the models. In a second part, experimental mass distributions for some elements, which either are interesting as radioactive ion beams or are important for safety and radioprotection reasons (alpha or gamma emitters), will also be compared to model calculations. These data were obtained for an equivalent 0.8 or 1.0 GeV proton beam, which is approximately the proposed projectile energy. We note that in realistic thick targets the proton beam will be slowed down and some secondary particles will be produced. Therefore, residual nuclei production at lower energies is also important. For this reason, in the third part of this work we also performed some excitation function calculations and compared them with the associated data obtained by gamma spectroscopy, in order to test the models over a wide projectile energy range. We conclude that INCL4/Abla and Isabel/Abla are the best model combinations, and we recommend them. We also note that the agreement between models and data is better with 1 GeV protons than with 100-200 MeV protons.
International Nuclear Information System (INIS)
Gallmeier, F.X.; Glasgow, D.C.; Jerde, E.A.; Johnson, J.O.; Yugo, J.J.
1999-01-01
The Spallation Neutron Source (SNS) will provide an intense source of low-energy neutrons for experimental use. The low-energy neutrons are produced by the interaction of a high-energy (1.0 GeV) proton beam with a mercury (Hg) target and slowed down in liquid hydrogen or light water moderators. Computer codes and computational techniques are being benchmarked against relevant experimental data to validate and verify the tools being used to predict the performance of the SNS. The LAHET Code System (LCS), which includes LAHET, HTAPE and HMCNP (a modified version of MCNP version 3b), has been applied to the analysis of experiments that were conducted at the Alternating Gradient Synchrotron (AGS) facility at Brookhaven National Laboratory (BNL). In the AGS experiments, foils of various materials were placed around a mercury-filled stainless steel cylinder, which was bombarded with protons at 1.6 GeV. Neutrons created in the mercury target activated the foils. Activities of the relevant isotopes were accurately measured and compared with calculated predictions. Measurements at BNL were provided in part by collaborating scientists from JAERI as part of the AGS Spallation Target Experiment (ASTE) collaboration. To date, calculations have shown good agreement with measurements
Application of MCNP in the criticality calculation for reactors
International Nuclear Information System (INIS)
Zhong Zhaopeng; Shi Gong; Hu Yongming
2003-01-01
The criticality calculation is carried out with the 3-D Monte Carlo code MCNP. The author focuses on the modelling of the core and reflector. The core description is simplified by using the repeated-structure capability of MCNP. Values of k eff at different control rod positions are calculated for the case of JRR3, and the results are consistent with those of the reference. This work shows that MCNP is applicable to reactor criticality calculations
International Nuclear Information System (INIS)
Amin, E.; Hathout, A.M.; Shouman, S.
1997-01-01
The Kyoto University reactor physics experiments on the university critical assembly are used to validate the NCNSRC calculation methodology. This methodology has two lines, diffusion and Monte Carlo. The diffusion line includes the code WIMSD4 for cell calculations and the two-dimensional diffusion code DIXY2 for core calculations. The transport line uses the MULTIKENO code, VAX version. Analysis is performed for the criticality and the temperature coefficients of reactivity (TCR) of the different light water moderated and reflected cores utilized in the experiments. The results for both the eigenvalue and the TCR approximately reproduce the experimental and theoretical Kyoto results. However, some conclusions are drawn about the adequacy of the standard WIMSD4 library. This paper is an extension of the NCNSRC efforts to assess and validate computer tools and methods for both the ET-RR-1 and ET-MMPR-2 research reactors. 7 figs., 1 tab
Attila calculations for the 3-D C5G7 benchmark extension
International Nuclear Information System (INIS)
Wareing, T.A.; McGhee, J.M.; Barnett, D.A.; Failla, G.A.
2005-01-01
The performance of the Attila radiation transport software was evaluated for the 3-D C5G7 MOX benchmark extension, a follow-on study to the MOX benchmark developed by the 'OECD/NEA Expert Group on 3-D Radiation Transport Benchmarks'. These benchmarks were designed to test the ability of modern deterministic transport methods to model reactor problems without spatial homogenization. Attila is a general purpose radiation transport software package with an integrated graphical user interface (GUI) for analysis, set-up and postprocessing. Attila provides solutions to the discrete-ordinates form of the linear Boltzmann transport equation on a fully unstructured, tetrahedral mesh using linear discontinuous finite-element spatial differencing in conjunction with diffusion synthetic acceleration of inner iterations. The results obtained indicate that Attila can accurately solve the benchmark problem without spatial homogenization. (authors)
RELAP5/MOD2 benchmarking study: Critical heat flux under low-flow conditions
International Nuclear Information System (INIS)
Ruggles, E.; Williams, P.T.
1990-01-01
Experimental studies by Mishima and Ishii performed at Argonne National Laboratory, and subsequent experimental studies performed by Mishima and Nishihara, have investigated the critical heat flux (CHF) for low-pressure, low-mass-flux situations where low-quality burnout may occur. These flow situations are relevant to long-term decay heat removal after a loss of forced flow. The transition from burnout at high quality to burnout at low quality causes very low burnout heat flux values. Mishima and Ishii postulated a model for low-quality burnout based on the flow regime transition from churn-turbulent to annular flow. This model was validated by both flow visualization and burnout measurements. Griffith et al. also studied CHF in low-mass-flux, low-pressure situations and correlated data for upflows, counter-current flows, and downflows with the local fluid conditions. A RELAP5/MOD2 CHF benchmarking study was carried out investigating the performance of the code for low-flow conditions. Data from the experimental study by Mishima and Ishii were the basis for the benchmark comparisons
International Nuclear Information System (INIS)
Caldeira, A.D.
1987-05-01
The theoretical and adjusted Watt spectrum representations for 235 U are used as weighting functions to calculate k eff and the 238 U-to-235 U fission rate ratio (σf28/σf25) for the Godiva benchmark. The results obtained show that the values of k eff and σf28/σf25 are not affected by the change in spectrum form. (author) [pt
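For reference, the Watt spectrum has the form χ(E) ∝ exp(−E/a)·sinh(√(bE)). The sketch below uses commonly quoted 235 U thermal-fission parameters (a ≈ 0.988 MeV, b ≈ 2.249 MeV⁻¹), which are not necessarily the values used in the paper.

```python
import math

def watt_spectrum(E, a=0.988, b=2.249):
    """Unnormalised Watt fission spectrum chi(E) ~ exp(-E/a) * sinh(sqrt(b*E)).
    E in MeV; default a, b are commonly quoted 235U thermal-fission values."""
    return math.exp(-E / a) * math.sinh(math.sqrt(b * E))

# The spectrum peaks below 1 MeV and falls off steeply at high energies:
print(watt_spectrum(0.7), watt_spectrum(10.0))
```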
International Nuclear Information System (INIS)
Chen Fubing; Dong Yujie; Zheng Yanhua; Shi Lei; Zhang Zuoyi
2009-01-01
Within the framework of a Coordinated Research Project on Evaluation of High Temperature Gas-Cooled Reactor Performance (CRP-5) initiated by the International Atomic Energy Agency (IAEA), the calculation of the steady-state temperature distribution of the 10 MW High Temperature Gas-Cooled Reactor-Test Module (HTR-10) under its initial full power experimental operation has been defined as one of the benchmark problems. This paper gives the investigation results obtained by the different countries participating in solving this benchmark problem. The validation work on the THERMIX code used by the Institute of Nuclear and New Energy Technology (INET) is also presented. For the benchmark items defined in this CRP, the various calculation results correspond well with each other and basically agree with the experimental results. Discrepancies existing among the various code results are preliminarily attributed to the different methods, models, material properties, and so on used in the computations. Temperatures calculated by THERMIX for the measuring points in the reactor internals agree well with the experimental values. The maximum fuel center temperatures calculated by the participants are much lower than the limit value of 1,230 deg C. According to the comparison of the code-to-code as well as code-to-experiment results, THERMIX is considered to reproduce relatively satisfactory results for the CRP-5 benchmark problem. (author)
Electron-helium S-wave model benchmark calculations. I. Single ionization and single excitation
Bartlett, Philip L.; Stelbovics, Andris T.
2010-02-01
A full four-body implementation of the propagating exterior complex scaling (PECS) method [J. Phys. B 37, L69 (2004)] is developed and applied to electron impact on helium in an S-wave model. Time-independent solutions to the Schrödinger equation are found numerically in coordinate space over a wide range of energies and used to evaluate total and differential cross sections for a complete set of three- and four-body processes with benchmark precision. With this model we demonstrate the suitability of the PECS method for the complete solution of the full electron-helium system. Here we detail the theoretical and computational development of the four-body PECS method and present results for three-body channels: single excitation and single ionization. Four-body cross sections are presented in the sequel to this article [Phys. Rev. A 81, 022716 (2010)]. The calculations reveal structure in the total and energy-differential single-ionization cross sections for excited-state targets that is due to interference from autoionization channels and is evident over a wide range of incident electron energies.
Quantum computing applied to calculations of molecular energies: CH2 benchmark.
Veis, Libor; Pittner, Jiří
2010-11-21
Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that, if available, they would be able to perform full configuration interaction (FCI) energy calculations with a polynomial scaling. This is in contrast to conventional computers, where FCI scales exponentially. We have developed a code for the simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest-lying electronic states of the CH2 molecule. This molecule was chosen as a benchmark since its two lowest-lying ¹A₁ states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.
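The readout statistics of phase estimation, on which such algorithms rely, can be simulated classically for a known eigenphase. This is a toy sketch of textbook quantum phase estimation, unrelated to the authors' simulation code.

```python
import numpy as np

def qpe_probabilities(phi: float, m: int) -> np.ndarray:
    """Outcome probabilities of textbook quantum phase estimation with an
    m-qubit register, for an eigenstate with eigenvalue exp(2*pi*i*phi)."""
    N = 2 ** m
    x = np.arange(N)
    # Amplitude for reading out integer y:
    # (1/N) * sum_x exp(2*pi*i * x * (phi - y/N))
    amps = np.array([np.mean(np.exp(2j * np.pi * x * (phi - y / N)))
                     for y in range(N)])
    return np.abs(amps) ** 2

# A phase exactly representable in 4 bits (13/16) is read out deterministically:
p = qpe_probabilities(13 / 16, 4)
print(np.argmax(p))  # 13
```

Phases not exactly representable in m bits spread the probability over neighbouring outcomes, which is why a well-chosen initial state matters for the iterative variant discussed above.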
International Nuclear Information System (INIS)
Suyama, K.; Uchida, Y.; Kashima, T.; Ito, T.; Miyaji, T.
2016-01-01
Criticality control of damaged nuclear fuel is one of the key issues in the decommissioning operation following the Fukushima Daiichi Nuclear Power Station accident. The average isotopic composition of spent nuclear fuel as a function of burn-up is required in order to evaluate the criticality parameters of mixtures of damaged nuclear fuel with other materials. The NEA Expert Group on Burn-up Credit Criticality (EGBUC) has organised several international benchmarks to assess the accuracy of burn-up calculation methodologies. For BWR fuel, the Phase III-B benchmark, published in 2002, was a remarkable landmark that provided general information on the burn-up properties of BWR spent fuel based on the 8x8 type fuel assembly. Since the publication of the Phase III-B benchmark, all major nuclear data libraries have been revised: in Japan from JENDL-3.2 to JENDL-4, in Europe from JEF-2.2 to JEFF-3.1, and in the US from ENDF/B-VI to ENDF/B-VII.1. Burn-up calculation methodologies have been improved by adopting continuous-energy Monte Carlo codes and modern neutronics calculation methods. Considering the importance of the criticality control of damaged fuel in the Fukushima Daiichi Nuclear Power Station accident, a new international burn-up calculation benchmark for 9x9 STEP-3 BWR fuel assemblies was organised to carry out an inter-comparison of the averaged isotopic compositions, in the interest of the burn-up credit criticality safety community. Benchmark specifications were proposed and approved at the EGBUC meeting in September 2012 and distributed in October 2012. The deadline for submitting results was set at the end of February 2013. The basic model for the benchmark problem is an infinite two-dimensional array of BWR fuel assemblies consisting of a 9x9 fuel rod array with a water channel in the centre. The initial uranium enrichment of the fuel rods without gadolinium is 4.9, 4.4, 3.9, 3.4 and 2.1 wt%, and 3.4 wt% for the rods using gadolinium. The burn-up conditions are
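At the core of any burn-up calculation is the depletion equation dN/dt = −σₐφN (decay and production terms omitted here). A minimal single-nuclide sketch with hypothetical numbers, not taken from the benchmark specification:

```python
import math

def depleted_density(n0, sigma_a_barn, flux, t_seconds):
    """Single-nuclide burn-up under constant flux: dN/dt = -sigma_a * phi * N,
    so N(t) = N0 * exp(-sigma_a * phi * t). Cross-section in barns (1e-24 cm^2),
    flux in n/cm^2/s, number density in atoms/cm^3."""
    sigma_cm2 = sigma_a_barn * 1.0e-24
    return n0 * math.exp(-sigma_cm2 * flux * t_seconds)

# Hypothetical thermal absorber, one year at a flux of 3e13 n/cm^2/s:
n = depleted_density(1.0e21, 600.0, 3.0e13, 3.156e7)
print(n)  # noticeably below the initial 1e21
```

Real burn-up codes solve coupled systems of such equations for hundreds of nuclides, with flux and spectrum updated by transport calculations at each burn-up step.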
Energy Technology Data Exchange (ETDEWEB)
Kim, Hyoung Tae; Park, Joo Hwan; Rhee, Bo Wook
2006-07-15
To justify the use of a commercial Computational Fluid Dynamics (CFD) code for CANDU fuel channel analysis, especially for radiation heat transfer dominant conditions, the CFX-10 code is tested against three benchmark problems which were used for the validation of radiation heat transfer in the CANDU analysis code CATHENA. These three benchmark problems are representative of CANDU fuel channel configurations, from a simple geometry to the whole fuel channel geometry. Under the assumptions of a non-participating medium completely enclosed by diffuse, gray and opaque surfaces, the solutions of the benchmark problems are obtained through the concept of surface resistance to radiation, accounting for the view factors and the emissivities. The view factors are calculated by the program MATRIX version 1.0, avoiding the difficulty of hand calculation for the complex geometries. For the solutions of the benchmark problems, the temperature or the net radiation heat flux boundary conditions are prescribed for each radiating surface to determine the radiation heat transfer rate or the surface temperature, respectively, using the network method. The Discrete Transfer Model (DTM) is used as the CFX-10 radiation model and its calculation results are compared with the solutions of the benchmark problems. The CFX-10 results for the three benchmark problems are in close agreement with these solutions, so it is concluded that CFX-10 with the DTM radiation model can be applied to CANDU fuel channel analysis where surface radiation heat transfer is a dominant mode of heat transfer.
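For a two-surface enclosure, the surface-resistance network method mentioned above reduces to a single series-resistance formula. The sketch below uses hypothetical surface data, not the benchmark's actual geometry.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W/(m^2 K^4)]

def radiation_exchange(T1, T2, A1, A2, eps1, eps2, F12):
    """Net radiative heat transfer [W] between two gray, diffuse, opaque
    surfaces forming an enclosure, via the network method:
    Q = sigma*(T1^4 - T2^4) / (surface resistance 1 + space resistance
    + surface resistance 2)."""
    R = (1 - eps1) / (eps1 * A1) + 1.0 / (A1 * F12) + (1 - eps2) / (eps2 * A2)
    return SIGMA * (T1 ** 4 - T2 ** 4) / R

# Hypothetical concentric surfaces (inner surface sees only the outer one,
# so F12 = 1); temperatures in K, areas in m^2:
q = radiation_exchange(T1=1200.0, T2=600.0, A1=1.0, A2=1.5,
                       eps1=0.8, eps2=0.7, F12=1.0)
print(q)
```

For enclosures with more surfaces, the same idea yields a linear system in the surface radiosities, with the view factors supplying the space resistances.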
Verification of HELIOS-MASTER system through benchmark of critical experiments
International Nuclear Information System (INIS)
Kim, H. Y.; Kim, K. Y.; Cho, B. O.; Lee, C. C.; Zee, S. O.
1999-01-01
The HELIOS-MASTER code system is verified through benchmarks of critical experiments that were performed by the RRC 'Kurchatov Institute' with water-moderated, hexagonally pitched lattices of highly enriched uranium fuel rods (80 w/o). We also used the same input with the MCNP code as described in the evaluation report, and compared our results with those of the evaluation report. HELIOS, developed by Scandpower A/S, is a two-dimensional transport program for the generation of group cross-sections, and MASTER, developed by KAERI, is a three-dimensional nuclear design and analysis code based on two-group diffusion theory. It solves the neutronics model with the AFEN (Analytic Function Expansion Nodal) method for hexagonal geometry. The results show that the HELIOS-MASTER code system is fast and accurate enough to be used as a nuclear core analysis tool for hexagonal geometry
Calculation of the minimum critical mass of fissile nuclides
International Nuclear Information System (INIS)
Wright, R.Q.; Hopper, Calvin Mitchell
2008-01-01
The OB-1 method for the calculation of the minimum critical mass of fissile actinides in metal/water systems was described in a previous paper. A fit to the calculated minimum critical mass data using the extended criticality parameter is the basis of the revised method. The solution density (grams/liter) at the minimum critical mass is also obtained by a fit to calculated values. Input to the calculation consists of the Maxwellian-averaged fission and absorption cross sections and the thermal values of nubar. The revised method gives more accurate values than the original method for both the minimum critical mass and the solution densities. The OB-1 method has been extended to calculate the uncertainties in the minimum critical mass for 12 different fissile nuclides. The uncertainties in the fission and capture cross sections and the estimated nubar uncertainties are used to determine the uncertainties in the minimum critical mass, either in percent or in grams. Results have been obtained for U-233, U-235, Pu-236, Pu-239, Pu-241, Am-242m, Cm-243, Cm-245, Cf-249, Cf-251, Cf-253, and Es-254. Eight of these 12 nuclides are included in the ANS-8.15 standard.
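The abstract does not give the propagation formula; assuming the component uncertainties are independent, a first-order quadrature combination would look like the following (illustrative values only, not those of the OB-1 method):

```python
import math

def relative_uncertainty(d_sigma_f, d_sigma_c, d_nubar):
    """Combine independent relative uncertainties (fission cross section,
    capture cross section, nubar) in quadrature - a common first-order
    propagation model, assumed here for illustration."""
    return math.sqrt(d_sigma_f ** 2 + d_sigma_c ** 2 + d_nubar ** 2)

# Hypothetical 1%, 2% and 0.5% component uncertainties:
u = relative_uncertainty(0.01, 0.02, 0.005)
mass_uncertainty_g = u * 520.0  # for an assumed 520 g minimum critical mass
print(u, mass_uncertainty_g)
```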
International Nuclear Information System (INIS)
Aydogan, F.; Hochreiter, L.; Ivanov, K.; Martin, M.; Utsuno, H.; Sartori, E.
2010-01-01
This report provides the specification for the uncertainty exercises of the international OECD/NEA, NRC and NUPEC BFBT benchmark problem including the elemental task. The specification was prepared jointly by Pennsylvania State University (PSU), USA and the Japan Nuclear Energy Safety (JNES) Organisation, in cooperation with the OECD/NEA and the Commissariat a l'energie atomique (CEA Saclay, France). The work is sponsored by the US NRC, METI-Japan, the OECD/NEA and the Nuclear Engineering Program (NEP) of Pennsylvania State University. This uncertainty specification covers the fourth exercise of Phase I (Exercise-I-4), and the third exercise of Phase II (Exercise II-3) as well as the elemental task. The OECD/NRC BFBT benchmark provides a very good opportunity to apply uncertainty analysis (UA) and sensitivity analysis (SA) techniques and to assess the accuracy of thermal-hydraulic models for two-phase flows in rod bundles. During the previous OECD benchmarks, participants usually carried out sensitivity analysis on their models for the specification (initial conditions, boundary conditions, etc.) to identify the most sensitive models or/and to improve the computed results. The comprehensive BFBT experimental database (NEA, 2006) leads us one step further in investigating modelling capabilities by taking into account the uncertainty analysis in the benchmark. The uncertainties in input data (boundary conditions) and geometry (provided in the benchmark specification) as well as the uncertainties in code models can be accounted for to produce results with calculational uncertainties and compare them with the measurement uncertainties. Therefore, uncertainty analysis exercises were defined for the void distribution and critical power phases of the BFBT benchmark. This specification is intended to provide definitions related to UA/SA methods, sensitivity/ uncertainty parameters, suggested probability distribution functions (PDF) of sensitivity parameters, and selected
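The brute-force form of the uncertainty analysis described — sample the uncertain inputs from their PDFs, run the model, and report output statistics — can be sketched generically. The model and distributions below are placeholders, not part of the BFBT specification.

```python
import random

def propagate(model, param_dists, n_samples=10000, seed=1):
    """Monte Carlo uncertainty propagation: draw each input parameter from
    its distribution, evaluate the model, and return the output mean and
    standard deviation."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        sample = {name: dist(rng) for name, dist in param_dists.items()}
        outputs.append(model(**sample))
    mean = sum(outputs) / n_samples
    var = sum((y - mean) ** 2 for y in outputs) / (n_samples - 1)
    return mean, var ** 0.5

# Toy example: an output that depends linearly on an uncertain 'power' input
# distributed as N(10, 1):
mean, std = propagate(lambda power: 0.4 + 0.01 * power,
                      {"power": lambda rng: rng.gauss(10.0, 1.0)})
print(mean, std)
```

The computed output spread can then be compared against the measurement uncertainties, which is the comparison the benchmark exercises call for.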
Development of M3C code for Monte Carlo reactor physics criticality calculations
International Nuclear Information System (INIS)
Kumar, Anek; Kannan, Umasankari; Krishanani, P.D.
2015-06-01
BARC has started a concentrated effort to indigenously develop a new general geometry, continuous-energy Monte Carlo code (M3C) for reactor physics calculations. The development of a Monte Carlo code for reactor design entails the use of continuous-energy nuclear data and Monte Carlo simulation of each of the neutron interaction processes, and required a comprehensive understanding of the basic continuous-energy cross-section sets. The important features of this code are the treatment of heterogeneous lattices by general geometry, the use of point cross sections along with a unionized energy grid approach, a thermal scattering model for the low-energy treatment, and the capability of handling randomly dispersed microscopic fuel particles. This last capability is very useful for modelling High-Temperature Gas-Cooled Reactor fuels, which are composed of thousands of microscopic (TRISO) fuel particles randomly dispersed in a graphite matrix. The Monte Carlo code for criticality calculations is a pioneering effort and has been used to study several types of lattices, including cluster geometries. The code has been verified for accuracy against more than 60 sample problems covering a wide range from simple (spherical) to complex geometries (such as the PHWR lattice). Benchmark results show that the code performs quite well for criticality calculations. In this report, the current status of the code, its features, some benchmark results for the testing of the code, input preparation, etc. are discussed. (author)
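The unionized energy grid approach stores every nuclide's point cross-sections on one shared energy grid, so a single binary search locates the interpolation interval for all of them. A minimal sketch with hypothetical data:

```python
import bisect

def lookup(grid, values, energy):
    """Linearly interpolate a point cross-section on a shared (unionized)
    energy grid. One index search serves every nuclide stored on the grid."""
    i = bisect.bisect_right(grid, energy) - 1
    i = max(0, min(i, len(grid) - 2))  # clamp to the grid's interior intervals
    frac = (energy - grid[i]) / (grid[i + 1] - grid[i])
    return values[i] + frac * (values[i + 1] - values[i])

# Hypothetical 5-point grid (eV) and cross-section values (barns):
grid = [1.0, 10.0, 100.0, 1000.0, 10000.0]
xs = [50.0, 20.0, 8.0, 5.0, 4.0]
print(lookup(grid, xs, 55.0))  # 14.0: halfway between the 10 eV and 100 eV points
```

In a real code the grid is the union of all nuclides' grids (often searched with hashing or binning rather than bisection), trading memory for the per-collision cost of separate searches.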
A thermomechanical benchmark calculation of a hexagonal can in the BTI accident with the INCA code
International Nuclear Information System (INIS)
Zucchini, A.
1988-01-01
The thermomechanical behaviour of a hexagonal can in a benchmark problem (simulating the conditions of a BTI accident in a fuel assembly) is examined by means of the INCA code, and the results are systematically compared with those of ADINA
MCNP Perturbation Capability for Monte Carlo Criticality Calculations
International Nuclear Information System (INIS)
Hendricks, J.S.; Carter, L.L.; McKinney, G.W.
1999-01-01
The differential operator perturbation capability in MCNP4B has been extended to automatically calculate perturbation estimates for the track length estimate of k eff in MCNP4B. The additional corrections required in certain cases for MCNP4B are no longer needed. Calculating the effect of small design changes on the criticality of nuclear systems with MCNP is now straightforward
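The differential operator technique estimates the response of k to a cross-section change from derivative information rather than from a second full run. A toy sketch of the first-order idea on an analytic infinite-medium model (not MCNP's actual track-length estimator):

```python
def k_inf(sigma_f, sigma_a, nubar=2.43):
    """Infinite-medium multiplication factor: k_inf = nubar * sigma_f / sigma_a."""
    return nubar * sigma_f / sigma_a

def first_order_dk(sigma_f, sigma_a, d_sigma_f, h=1e-6):
    """First-order (differential operator style) estimate of the change in k
    for a small cross-section perturbation: dk ~ (dk/dsigma_f) * d_sigma_f."""
    dk_dsig = (k_inf(sigma_f + h, sigma_a) - k_inf(sigma_f - h, sigma_a)) / (2 * h)
    return dk_dsig * d_sigma_f

# In this linear toy model a 1% increase in sigma_f raises k by exactly 1%:
base = k_inf(1.2, 3.0)
est = first_order_dk(1.2, 3.0, 0.012)
assert abs(est - 0.01 * base) < 1e-6
```

In the Monte Carlo setting the derivative is accumulated along particle tracks during the unperturbed run, so many small design changes can be evaluated from a single simulation.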
International Nuclear Information System (INIS)
Smolen, G.R.; Funabashi, H.
1987-01-01
Critical experiments have been conducted with organically moderated mixed-oxide (MOX) fuel pin assemblies at the Pacific Northwest Laboratory Critical Mass Laboratory. These experiments are part of a joint exchange program between the US Department of Energy and the Power Reactor and Nuclear Fuel Development Corporation of Japan in the area of criticality data development. The purpose of these experiments is to benchmark computer codes and cross-section libraries and to assess the reactivity difference between systems moderated by water and those moderated by an organic solution. Past studies have indicated that some organic mixtures may be better moderators than water. This topic is of particular importance to the criticality safety of fuel processing plants, where fissile material is dissolved in organic solutions during the solvent extraction process. In the past, it has been assumed that the codes and libraries benchmarked with water-moderated experiments were adequate when performing design and licensing studies of organically moderated systems. Calculations presented in this paper indicate that the SCALE code system and the 27-energy-group cross-section library accurately compute k eff for organically moderated MOX fuel pin assemblies. Furthermore, the reactivity of an organic solution with a 32 vol % TBP/68 vol % NPH mixture in a heterogeneous configuration is, for practical purposes, the same as that of water
International Nuclear Information System (INIS)
Smolen, G.R.
1987-01-01
Critical experiments have been conducted with organic-moderated mixed oxide (MOX) fuel pin assemblies at the Pacific Northwest Laboratory (PNL) Critical Mass Laboratory (CML). These experiments are part of a joint exchange program between the United States Department of Energy (USDOE) and the Power Reactor and Nuclear Fuel Development Corporation (PNC) of Japan in the area of criticality data development. The purpose of these experiments is to benchmark computer codes and cross-section libraries and to assess the reactivity difference between systems moderated by water and those moderated by an organic solution. Past studies have indicated that some organic mixtures may be better moderators than water. This topic is of particular importance to the criticality safety of fuel processing plants, where fissile material is dissolved in organic solutions during the solvent extraction process. In the past, it has been assumed that the codes and libraries benchmarked with water-moderated experiments were adequate when performing design and licensing studies of organic-moderated systems. Calculations presented in this paper indicated that the SCALE code system and the 27-energy-group cross-section library accurately compute k-effectives for organic-moderated MOX fuel-pin assemblies. Furthermore, the reactivity of an organic solution with a 32-vol-% TBP/68-vol-% NPH mixture in a heterogeneous configuration is, for practical purposes, the same as that of water. 5 refs
International Nuclear Information System (INIS)
Whitesides, G.H.; Stephens, M.E.
1984-01-01
During the past two years, a Working Group established by the Organization for Economic Co-operation and Development's Nuclear Energy Agency (OECD-NEA) has been developing a set of criticality benchmark problems which could be used to help establish the validity of criticality safety computer programs and their associated nuclear data for calculation of k-eff for spent light water reactor (LWR) fuel transport containers. The basic goal of this effort was to identify a set of actual critical experiments which would contain the various material and geometric properties present in spent LWR transport containers. These data, when used by the various computational methods, are intended to demonstrate the ability of each method to accurately reproduce the experimentally measured k-eff for the parameters under consideration
Calculational study for criticality safety data of fissionable actinides
International Nuclear Information System (INIS)
Nojiri, Ichiro; Fukasaku, Yasuhiro.
1997-01-01
This study has been carried out to obtain basic criticality safety characteristics of minor actinide nuclides. Criticality safety data for minor actinide nuclides have been surveyed through the open literature. Critical masses of seven nuclides, Np-237, Am-241, Am-242m, Am-243, Cm-243, Cm-244 and Cm-245, have been calculated using two criticality safety analysis code systems, SCALE-4 and MCNP4A, under several material and reflector conditions. Several applicable cross-section libraries have been used for each code system. The calculated data have been compared with each other and with published data. The results of this comparison show that there is no discrepancy between the computational codes and that the calculated data depend strongly on the cross-section library. (author)
Validation of calculational methods for nuclear criticality safety - approved 1975
International Nuclear Information System (INIS)
Anon.
1977-01-01
The American National Standard for Nuclear Criticality Safety in Operations with Fissionable Materials Outside Reactors, N16.1-1975, states in 4.2.5: In the absence of directly applicable experimental measurements, the limits may be derived from calculations made by a method shown to be valid by comparison with experimental data, provided sufficient allowances are made for uncertainties in the data and in the calculations. There are many methods of calculation which vary widely in basis and form. Each has its place in the broad spectrum of problems encountered in the nuclear criticality safety field; however, the general procedure to be followed in establishing validity is common to all. The standard states the requirements for establishing the validity and area(s) of applicability of any calculational method used in assessing nuclear criticality safety
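The validation procedure the standard describes is often reduced to simple bookkeeping: compute a calculational bias from benchmark k-eff results, then subtract allowances for its uncertainty and an administrative margin. The sketch below is one hedged illustration of that arithmetic; the benchmark values and the 0.05 margin are invented, and real validations use more elaborate statistical treatments.

```python
from statistics import mean, stdev

# Illustrative validation bookkeeping: derive a calculational bias from
# benchmark k-eff results and fold in allowances.  The benchmark values
# and the 0.05 margin are invented, not taken from any real validation.

k_calc = [0.9968, 0.9991, 1.0012, 0.9975, 0.9983, 1.0004]  # criticals, k_true = 1

bias = mean(k_calc) - 1.0        # negative bias means the method under-predicts
bias_unc = stdev(k_calc)         # crude uncertainty allowance from the spread
margin = 0.05                    # administrative subcritical margin (assumed)

# Positive bias is conventionally not credited toward the limit.
usl = 1.0 + min(bias, 0.0) - bias_unc - margin
print(f"bias = {bias:+.4f}, upper subcritical limit = {usl:.4f}")
```

A calculated k-eff below this limit would then be judged acceptably subcritical for the area of applicability spanned by the benchmarks.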
Spectral measurements in critical assemblies: MCNP specifications and calculated results
Energy Technology Data Exchange (ETDEWEB)
Stephanie C. Frankle; Judith F. Briesmeister
1999-12-01
Recently, a suite of 86 criticality benchmarks for the Monte Carlo N-Particle (MCNP) transport code was developed, and the results of testing the ENDF/B-V and ENDF/B-VI data (through Release 2) were published. In addition to the standard k{sub eff} measurements, other experimental measurements were performed on a number of these benchmark assemblies. In particular, the Cross Section Evaluation Working Group (CSEWG) specifications contain experimental data for neutron leakage and central-flux measurements, central-fission ratio measurements, and activation ratio measurements. Additionally, there exists another set of fission reaction-rate measurements performed at the National Institute of Standards and Technology (NIST) utilizing a {sup 252}Cf source. This report will describe the leakage and central-flux measurements and show a comparison of experimental data to MCNP simulations performed using the ENDF/B-V and B-VI (Release 2) data libraries. Central-fission and activation reaction-rate measurements will be described, and the comparison of experimental data to MCNP simulations using available data libraries for each reaction of interest will be presented. Finally, the NIST fission reaction-rate measurements will be described. A comparison of MCNP results published previously with the current MCNP simulations will be presented for the NIST measurements, and a comparison of the current MCNP simulations to the experimental measurements will be presented.
Spectral measurements in critical assemblies: MCNP specifications and calculated results
International Nuclear Information System (INIS)
Frankle, Stephanie C.; Briesmeister, Judith F.
1999-01-01
Recently, a suite of 86 criticality benchmarks for the Monte Carlo N-Particle (MCNP) transport code was developed, and the results of testing the ENDF/B-V and ENDF/B-VI data (through Release 2) were published. In addition to the standard k eff measurements, other experimental measurements were performed on a number of these benchmark assemblies. In particular, the Cross Section Evaluation Working Group (CSEWG) specifications contain experimental data for neutron leakage and central-flux measurements, central-fission ratio measurements, and activation ratio measurements. Additionally, there exists another set of fission reaction-rate measurements performed at the National Institute of Standards and Technology (NIST) utilizing a 252 Cf source. This report will describe the leakage and central-flux measurements and show a comparison of experimental data to MCNP simulations performed using the ENDF/B-V and B-VI (Release 2) data libraries. Central-fission and activation reaction-rate measurements will be described, and the comparison of experimental data to MCNP simulations using available data libraries for each reaction of interest will be presented. Finally, the NIST fission reaction-rate measurements will be described. A comparison of MCNP results published previously with the current MCNP simulations will be presented for the NIST measurements, and a comparison of the current MCNP simulations to the experimental measurements will be presented
Criticality benchmark results for the ENDF60 library with MCNP trademark
International Nuclear Information System (INIS)
Keen, N.D.; Frankle, S.C.; MacFarlane, R.E.
1995-01-01
The continuous-energy neutron data library ENDF60, for use with the Monte Carlo N-Particle radiation transport code MCNP4A, was released in the fall of 1994. The ENDF60 library is comprised of 124 nuclide data files based on the ENDF/B-VI (B-VI) evaluations through Release 2. Fifty-two percent of these B-VI evaluations are translations from ENDF/B-V (B-V). The remaining forty-eight percent are new evaluations which have sometimes changed significantly. Among these changes are greatly increased use of isotopic evaluations, more extensive resonance-parameter evaluations, and energy-angle correlated distributions for secondary particles. In particular, the upper energy limit for the resolved resonance region of 235U, 238U and 239Pu has been extended from 0.082, 4.0, and 0.301 keV to 2.25, 10.0, and 2.5 keV, respectively. As regulatory oversight has advanced and performing critical experiments has become more difficult, there has been an increased reliance on computational methods. For the criticality safety community, the performance of the combined transport code and data library is of interest. The purpose of this abstract is to provide benchmarking results to aid the user in determining the best data library for their application
The calculational VVER burnup Credit Benchmark No.3 results with the ENDF/B-VI rev.5 (1999)
Energy Technology Data Exchange (ETDEWEB)
Rodriguez Gual, Maritza [Centro de Tecnologia Nuclear, La Habana (Cuba). E-mail: mrgual@ctn.isctn.edu.cu
2000-07-01
The purpose of this paper is to present the results of the CB3 phase of the VVER calculational benchmark with the recently evaluated nuclear data library ENDF/B-VI Rev. 5 (1999). These results are compared with those obtained by the other participants in the calculations (Czech Republic, Finland, Hungary, Slovakia, Spain and the United Kingdom). The CB3 phase of the VVER calculational benchmark is similar to Phase II-A of the OECD/NEA/INSC BUC Working Group benchmark for PWR. The cases without burnup profile (BP) were performed with the WIMS/D-4 code. The rest of the cases were carried out with the DOT-III discrete ordinates code. The neutron library used was ENDF/B-VI Rev. 5 (1999). WIMS/D-4 (69 groups) is used to collapse cross sections from ENDF/B-VI Rev. 5 (1999) to a 36-group working library for 2-D calculations. This work also comprises the results of CB1 (also obtained with ENDF/B-VI Rev. 5 (1999)) and of CB3 for the cases with a burnup of 30 MWd/TU and cooling times of 1 and 5 years, and for the case with a burnup of 40 MWd/TU and a cooling time of 1 year. (author)
International Nuclear Information System (INIS)
Blum, P.; Cagnon, R.; Nimal, J.C.
1982-01-01
This report gives the results of a campaign of gamma dose rate measurements in the vicinity of a transport package loaded with 12 PWR spent fuel assemblies, together with the characteristics of the package and the fuel. It describes the measuring methods and gives the accuracy of the data, which will be useful as benchmarks for checking the calculation methods used to verify the gamma shielding of the packages. It shows how to calculate gamma dose rates from the data given on the package and the fuel, gives the results of a calculation with the Mercure IV code, and compares them to the measurements
Energy Technology Data Exchange (ETDEWEB)
Lee, Yi-Kang, E-mail: yi-kang.lee@cea.fr
2016-11-01
Highlights: • Verification and validation of TRIPOLI-4 radiation transport calculations for ITER shielding benchmark. • Evaluation of CEA-V5.1.1 and FENDL-3.0 nuclear data libraries on D–T fusion neutron continuous energy transport calculations. • Advances in nuclear analyses for nuclear heating and radiation damage in iron. • This work also demonstrates that the “safety factors” concept is necessary in the nuclear analyses of ITER. - Abstract: With the growing interest in using the continuous-energy TRIPOLI-4{sup ®} Monte Carlo radiation transport code for ITER applications, a key issue that arises is whether or not the released TRIPOLI-4 code and its associated nuclear data libraries are verified and validated for the D–T fusion neutronics calculations. Previous published benchmark results of TRIPOLI-4 code on the ITER related activities have concentrated on the first wall loading, the reactor dosimetry, the nuclear heating, and the tritium breeding ratio. To enhance the TRIPOLI-4 verification and validation on neutron-gamma coupled calculations for fusion device application, the computational ITER shielding benchmark of M. E. Sawan was performed in this work by using the 2013 released TRIPOLI-4.9S code and the associated CEA-V5.1.1 data library. First wall, blanket, vacuum vessel and toroidal field magnet of the inboard and outboard components were fully modelled in this 1-D toroidal cylindrical benchmark. The 14.1 MeV source neutrons were sampled from a uniform isotropic distribution in the plasma zone. Nuclear responses including neutron and gamma fluxes, nuclear heating, and material damage indicator were benchmarked against previous published results. The capabilities of the TRIPOLI-4 code on the evaluation of above physics parameters were presented. The nuclear data library from the new FENDL-3.0 evaluation was also benchmarked against the CEA-V5.1.1 results for the neutron transport calculations. The results show that both data libraries
International Nuclear Information System (INIS)
Lee, Yi-Kang
2016-01-01
Highlights: • Verification and validation of TRIPOLI-4 radiation transport calculations for ITER shielding benchmark. • Evaluation of CEA-V5.1.1 and FENDL-3.0 nuclear data libraries on D–T fusion neutron continuous energy transport calculations. • Advances in nuclear analyses for nuclear heating and radiation damage in iron. • This work also demonstrates that the “safety factors” concept is necessary in the nuclear analyses of ITER. - Abstract: With the growing interest in using the continuous-energy TRIPOLI-4 ® Monte Carlo radiation transport code for ITER applications, a key issue that arises is whether or not the released TRIPOLI-4 code and its associated nuclear data libraries are verified and validated for the D–T fusion neutronics calculations. Previous published benchmark results of TRIPOLI-4 code on the ITER related activities have concentrated on the first wall loading, the reactor dosimetry, the nuclear heating, and the tritium breeding ratio. To enhance the TRIPOLI-4 verification and validation on neutron-gamma coupled calculations for fusion device application, the computational ITER shielding benchmark of M. E. Sawan was performed in this work by using the 2013 released TRIPOLI-4.9S code and the associated CEA-V5.1.1 data library. First wall, blanket, vacuum vessel and toroidal field magnet of the inboard and outboard components were fully modelled in this 1-D toroidal cylindrical benchmark. The 14.1 MeV source neutrons were sampled from a uniform isotropic distribution in the plasma zone. Nuclear responses including neutron and gamma fluxes, nuclear heating, and material damage indicator were benchmarked against previous published results. The capabilities of the TRIPOLI-4 code on the evaluation of above physics parameters were presented. The nuclear data library from the new FENDL-3.0 evaluation was also benchmarked against the CEA-V5.1.1 results for the neutron transport calculations. The results show that both data libraries can be
DEFF Research Database (Denmark)
Mitzel, Jens; Gülzow, Erich; Kabza, Alexander
2016-01-01
This paper is focused on the identification of critical parameters and on the development of reliable methodologies to achieve comparable benchmark results. Possibilities for control sensor positioning and for parameter variation in sensitivity tests are discussed and recommended options for the ...
Variational Variance Reduction for Monte Carlo Criticality Calculations
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Larsen, Edward W.
2001-01-01
A new variational variance reduction (VVR) method for Monte Carlo criticality calculations was developed. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with high scattering ratios, and (c) estimates of the forward flux obtained by Monte Carlo. The VVR method requires no nonanalog Monte Carlo biasing, but it may be used in conjunction with Monte Carlo biasing schemes. Some results are presented from a class of criticality calculations involving alternating arrays of fuel and moderator regions
International report to validate criticality safety calculations for fissile material transport
International Nuclear Information System (INIS)
Whitesides, G.E.
1984-01-01
During the past three years a Working Group established by the Organization for Economic Co-operation and Development's Nuclear Energy Agency (OECD-NEA) in Paris, France, has been studying the validity and applicability of a variety of criticality safety computer programs and their associated nuclear data for the computation of the neutron multiplication factor, k-eff, for various transport packages used in the fuel cycle. The principal objective of this work has been to provide an internationally acceptable basis for the licensing authorities in a country to honor licensing approvals granted by other participating countries. Eleven countries participated in the initial study, which consisted of examining criticality safety calculations for packages designed for spent light water reactor fuel transport. This paper presents a summary of this study, which has been completed and reported in OECD-NEA Report No. CSNI-71. The basic goal of this study was to outline a satisfactory validation procedure for this particular application. First, a set of actual critical experiments was chosen which contained the various material and geometric properties present in typical LWR transport containers. Secondly, calculations were made by each of the methods in order to determine how accurately each method reproduced the experimental values. This successful effort in developing a benchmark procedure for validating criticality calculations for spent LWR transport packages, along with the successful intercomparison of a number of methods, should provide increased confidence by licensing authorities in the use of these methods for this area of application. 4 references, 2 figures
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
International Nuclear Information System (INIS)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
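The fission matrix idea in the abstract above can be sketched compactly: tally a matrix F[i][j] of expected fission neutrons produced in region i per source neutron born in region j, then take its dominant eigenvector as the converged fission source. The 4-region matrix below is made up for illustration, not output from MCNP or any real tally.

```python
# Fission-matrix sketch: F[i][j] = expected fission neutrons produced in
# region i per fission source neutron started in region j.  This 4-region
# matrix is invented; in practice it is tallied during the Monte Carlo run.
F = [[0.60, 0.20, 0.05, 0.01],
     [0.20, 0.55, 0.20, 0.05],
     [0.05, 0.20, 0.55, 0.20],
     [0.01, 0.05, 0.20, 0.60]]

s = [0.25] * 4                    # flat initial fission source guess
for _ in range(500):              # power iteration on the small matrix is cheap
    s_new = [sum(F[i][j] * s[j] for j in range(4)) for i in range(4)]
    k = sum(s_new)                # k estimate: production per source neutron
    s = [x / k for x in s_new]    # renormalized source shape

# At convergence the source reproduces itself: F s = k s.
residual = max(abs(sum(F[i][j] * s[j] for j in range(4)) - k * s[i])
               for i in range(4))
print(round(k, 6), residual)
```

Because the deterministic eigenproblem converges regardless of the dominance ratio, feeding its eigenvector back as the Monte Carlo source guess is what accelerates the slowly converging cases described in the thesis.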
Review of criticality safety benchmark data of plutonium solution in ICSBEP handbook
International Nuclear Information System (INIS)
Yamamoto, Toshihiro; Miyoshi, Yoshinori; Okubo, Kiyoshi
2003-01-01
The criticality data of plutonium solutions published in the ICSBEP Handbook were reviewed. Criticality data for lower plutonium concentrations and higher 240Pu contents, which correspond to reprocessing process conditions, are very scarce, and hence criticality data in this area are desired. While the k-eff values calculated with ENDF/B-V show a dependence on the plutonium concentration, this dependence has been corrected in JENDL-3.3 through the energy distribution of the capture cross section of 239Pu. Based on generalized perturbation theory, the sensitivity coefficients of k-eff with respect to the fission and capture cross sections in plutonium solutions were obtained. In a plutonium solution with a lower concentration, cross sections at thermal energies below 0.1 eV have significant effects on the criticality. On the other hand, the criticality of higher-concentration plutonium solutions is mostly dominated by cross sections in the energy range above 0.1 eV. Regarding the effect of 240Pu on criticality, the capture cross section of 240Pu around the resonance peak near 1 eV is dominant regardless of the concentration. (author)
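The sensitivity coefficients discussed here have the form S = (σ/k)(dk/dσ). As a minimal illustration of the definition (the paper itself applies generalized perturbation theory to real plutonium-solution cross sections), one can evaluate it by finite difference on a one-group infinite-medium model with made-up constants:

```python
# Sensitivity coefficient S = (sigma / k) * (dk / dsigma), evaluated by a
# central finite difference on a one-group infinite-medium model
# k = nu * sig_f / (sig_f + sig_c).  The constants are invented; the paper
# itself uses generalized perturbation theory on real cross sections.

nu, sig_f, sig_c = 2.9, 1.8, 0.6       # assumed one-group constants

def k_inf(sf, sc):
    return nu * sf / (sf + sc)

h = 1e-6
dk_dsc = (k_inf(sig_f, sig_c + h) - k_inf(sig_f, sig_c - h)) / (2 * h)
S_capture = (sig_c / k_inf(sig_f, sig_c)) * dk_dsc

print(round(S_capture, 4))             # negative: more capture lowers k
```

For this toy model the result is analytic, S = -σc/(σf + σc) = -0.25, which the finite difference reproduces; real sensitivity profiles are energy-dependent, which is exactly what distinguishes the low- and high-concentration solutions in the review.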
Acceleration and increased control of convergence in criticality calculations
International Nuclear Information System (INIS)
Jinaphanh, Alexis
2014-01-01
IRSN is developing a numerical simulation code called MORET to assess the nuclear criticality risk. This tool is designed to perform 3D simulations of neutron transport in a given system. It does so with a probabilistic approach known as Monte Carlo, in which the transport of several successive generations of neutrons is calculated from an initial neutron distribution in the system under study. These generations are simulated until convergence of the effective neutron multiplication factor (k-eff), which characterizes how far the system is from the critical state, is considered to have been reached. Insufficient convergence can lead to underestimation of both k-eff and the criticality risk. During this thesis work, A. Jinaphanh sought to improve the reliability of the calculated values by developing a new method for initializing calculations, together with a criterion used to reliably determine whether or not convergence has been reached. (author)
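A widely used convergence diagnostic of the kind this thesis addresses (not claimed to be MORET's own criterion) is the Shannon entropy of the binned fission source, which stabilizes once the source shape has converged, even while k-eff still fluctuates statistically. A minimal sketch with synthetic generations:

```python
import math, random

# Shannon-entropy source convergence diagnostic (a common choice in the
# literature; not claimed to be MORET's own criterion).  The "generations"
# below are synthetic: a point source guess relaxing toward a flat shape.

def shannon_entropy(source_sites, n_bins=16, length=100.0):
    counts = [0] * n_bins
    for x in source_sites:
        counts[min(int(x / length * n_bins), n_bins - 1)] += 1
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

random.seed(2)
for gen in range(0, 50, 10):
    spread = min(1.0, gen / 30)   # 0 = concentrated guess, 1 = fully spread
    sites = [50.0 + random.uniform(-50.0, 50.0) * spread for _ in range(5000)]
    print(gen, round(shannon_entropy(sites), 3))   # rises, then plateaus
```

Generations tallied before the entropy plateaus are discarded from the k-eff average; keeping them is precisely the underestimation risk the abstract warns about.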
TRIGA criticality experiment for testing burn-up calculations
International Nuclear Information System (INIS)
Persic, Andreja; Ravnik, Matjaz; Zagar, Tomaz
1999-01-01
A criticality experiment with partly burned TRIGA fuel is described. Standard TRIGA fuel elements of 20 wt % enrichment, initially containing 12 wt % U, are used. Their average burn-up is 1.4 MWd. Fuel element burn-up is calculated in a 2-D four-group diffusion approximation using the TRIGLAV code. The burn-up of several fuel elements is also measured by the reactivity method. The excess reactivity of several critical and subcritical core configurations is measured. Two core configurations contain the same fuel elements in the same arrangement as were used in the fresh TRIGA fuel criticality experiment performed in 1991. The results of the experiment may be applied for testing the computer codes used for fuel burn-up calculations. (author)
Presentation and qualification of criticality calculation in fuel element storage
International Nuclear Information System (INIS)
Ermumcu, G.; Gonnord, J.; Monnier, A.; Nimal, J.C.
Faced with the growing number of criticality calculation requests, a fast and slightly conservative method has been developed for evaluating the effective multiplication constant of sites containing PWR-type elements. This method is based on the use of the DOT 3.5 code, which requires a two-dimensional modelling of the geometry of the problem and the condensation into groups of the macroscopic cross sections of the various materials. This preliminary work is carried out with various APOLLO calculations. The scheme is qualified by comparison with the results obtained with the Monte Carlo TRIPOLI code. The values obtained with MORET and with APOLLO-DOT for the criticality of transport flasks are in good agreement. For parametric studies, a large number of calculations can be necessary, and analytical methods cost little for simple geometries. This scheme can be used for studying small transport flasks, but it is particularly advantageous for storages [fr
International Nuclear Information System (INIS)
Joo, Hyung Kook; Noh, Jae Man; Lee, Hyung Chul; Yoo, Jae Woon
2006-01-01
In this report, we verified the NUREC code transient calculation capability using the OECD NEA/US NRC PWR MOX/UO2 Core Transient Benchmark Problem. The benchmark problem consists of Part 1, a 2-D problem with given T/H conditions; Part 2, a 3-D problem at HFP condition; Part 3, a 3-D problem at HZP condition; and Part 4, a transient state initiated by a control rod ejection at HZP condition in Part 3. In Part 1, the results of the NUREC code agreed well with the reference solution obtained from the DeCART calculation except for the pin power distributions at the rodded assemblies. In Part 2, the results of the NUREC code agreed well with the reference DeCART solutions. In Part 3, some results of the NUREC code, such as the critical boron concentration and the core-averaged delayed neutron fraction, agreed well with the reference PARCS 2G solutions, but the error of the assembly power at the core center was quite large. The pin power errors of the NUREC code at the rodded assemblies were much smaller than those of the PARCS code. The axial power distribution also agreed well with the reference solution. In Part 4, the results of the NUREC code agreed well with those of the PARCS 2G code, which was taken as the reference solution. From the above results we can conclude that the results of the NUREC code for steady states and transient states of the MOX-loaded LWR core agree well with those of the other codes
International Nuclear Information System (INIS)
Gillete, V.H.; Patino, N.E.; Granada, J.E.; Mayer, R.E.
1988-01-01
Using a synthetic scattering function which describes the interaction of neutrons with molecular gases, we provide analytical expressions for the zero- and first-order scattering kernels, σ0(E0→E) and σ1(E0→E), and the total cross section σ0(E0). Based on these quantities, we have performed calculations of thermalization parameters and transport coefficients for H2O, D2O, C6H6 and (CH2)n at room temperature. Comparison of these values with available experimental data and other calculations is satisfactory. We also generated nuclear data libraries for H2O with 47 thermal groups at 300 K and performed some benchmark calculations (235U, 239Pu, PWR cell and typical APWR cell); the resulting reactivities are compared with experimental data and ENDF/B-IV calculations. (author) [pt
Analysis of benchmark critical experiments with ENDF/B-VI data sets
International Nuclear Information System (INIS)
Hardy, J. Jr.; Kahler, A.C.
1991-01-01
Several clean critical experiments were analyzed with ENDF/B-VI data to assess the adequacy of the data for 235U, 238U and oxygen. These experiments were (1) a set of homogeneous 235U-H2O assemblies spanning a wide range of hydrogen/uranium ratio, and (2) TRX-1, a simple, H2O-moderated Bettis lattice of slightly enriched uranium metal rods. The analyses used the Monte Carlo program RCP01, with explicit three-dimensional geometry and detailed representation of cross sections. For the homogeneous criticals, calculated k-crit values for large, thermal assemblies show good agreement with experiment. This supports the evaluated thermal criticality parameters for 235U. However, for assemblies with smaller H/U ratios, k-crit values increase significantly with increasing leakage and flux-spectrum hardness. These trends suggest that leakage is underpredicted and that the resonance eta of the ENDF/B-VI 235U is too large. For TRX-1, reasonably good agreement is found with measured lattice parameters (reaction-rate ratios). Of primary interest is rho28, the ratio of above-thermal to thermal 238U capture. Calculated rho28 is 2.3 (± 1.7) % above measurement, suggesting that 238U resonance capture remains slightly overpredicted with ENDF/B-VI. However, agreement is better than observed with earlier versions of ENDF/B
International Nuclear Information System (INIS)
Davis, I.M.; Palmer, T.S.
2005-01-01
Benchmark calculations are performed for neutron transport in a two material (binary) stochastic multiplying medium. Spatial, angular, and energy dependence are included. The problem considered is based on a fuel assembly of a common pressurized water reactor. The mean chord length through the assembly is determined and used as the planar geometry system length. According to assumed or calculated material distributions, this system length is populated with alternating fuel and moderator segments of random size. Neutron flux distributions are numerically computed using a discretized form of the Boltzmann transport equation employing diffusion synthetic acceleration. Average quantities (group fluxes and k-eigenvalue) and variances are calculated from an ensemble of realizations of the mixing statistics. The effects of varying two parameters in the fuel, two different boundary conditions, and three different sets of mixing statistics are assessed. A probability distribution function (PDF) of the k-eigenvalue is generated and compared with previous research. Atomic mix solutions are compared with these benchmark ensemble average flux and k-eigenvalue solutions. Mixing statistics with large standard deviations give the most widely varying ensemble solutions of the flux and k-eigenvalue. The shape of the k-eigenvalue PDF qualitatively agrees with previous work. Its overall shape is independent of variations in fuel cross-sections for the problems considered, but its width is impacted by these variations. Statistical distributions with smaller standard deviations alter the shape of this PDF toward a normal distribution. The atomic mix approximation yields large over-predictions of the ensemble average k-eigenvalue and under-predictions of the flux. Qualitatively correct flux shapes are obtained in some cases. These benchmark calculations indicate that a model which includes higher statistical moments of the mixing statistics is needed for accurate predictions of binary
Linear filtering applied to Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Morrison, G.W.; Pike, D.H.; Petrie, L.M.
1975-01-01
A significant improvement in the acceleration of the convergence of the eigenvalue computed by Monte Carlo techniques has been developed by applying linear filtering theory to Monte Carlo calculations for multiplying systems. A Kalman filter was applied to a KENO Monte Carlo calculation of an experimental critical system consisting of eight interacting units of fissile material. A comparison of the filter estimate and the Monte Carlo realization was made. The Kalman filter converged in five iterations to 0.9977. After 95 iterations, the average k-eff from the Monte Carlo calculation was 0.9981. This demonstrates that the Kalman filter has the potential of reducing the calculational effort of multiplying systems. Other examples and results are discussed
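The idea of filtering noisy per-generation k-eff estimates can be sketched with a scalar Kalman filter, treating the true multiplication factor as a (nearly) constant hidden state observed through noisy Monte Carlo generation estimates. The process and observation variances below, and the synthetic noisy data, are assumptions for illustration, not the KENO setup of the paper.

```python
import random

def kalman_keff(observations, q=1e-8, r_obs=1e-4):
    """Scalar Kalman filter: hidden state = true k-eff (assumed constant),
    measurements = noisy per-generation k-eff estimates with variance r_obs."""
    x, p = observations[0], 1.0     # initial state estimate and variance
    estimates = []
    for z in observations[1:]:
        p += q                      # predict: state assumed (nearly) constant
        k_gain = p / (p + r_obs)    # Kalman gain
        x += k_gain * (z - x)       # update with measurement z
        p *= (1.0 - k_gain)
        estimates.append(x)
    return estimates

rng = random.Random(1)
true_k = 0.998
noisy = [true_k + rng.gauss(0.0, 0.01) for _ in range(100)]   # synthetic data
filtered = kalman_keff(noisy)
# the filtered estimate smooths the per-generation noise and settles
# close to the underlying k-eff
```

The gain shrinks as the state variance collapses, so early iterations move the estimate quickly while later ones refine it, which is the convergence-acceleration behaviour the paper reports.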
Calculation of HTR-10 first criticality with MVP
International Nuclear Information System (INIS)
Xie Jiachun; Yao Lianying
2015-01-01
The first criticality of the 10 MW pebble-bed high temperature gas-cooled reactor-test module (HTR-10) was calculated with MVP. According to the characteristics of HTR-10, the Statistical Geometry Model of MVP was employed to describe the random arrangement of coated fuel particles in the fuel pebbles and the random distribution of the fuel and dummy pebbles in the core. Compared with previous results from VSOP and MCNP, the MVP results with the JENDL-3.3 library differed somewhat, while the results with the ENDF/B-VI.8 library were very close. The relative errors were less than 0.7% compared with the first criticality experimental results. The study shows that MVP can be used in the physics calculations for pebble-bed high temperature gas-cooled reactors. (authors)
Cluster Monte Carlo method for nuclear criticality safety calculation
International Nuclear Information System (INIS)
Pei Lucheng
1984-01-01
One of the most important applications of the Monte Carlo method is the calculation of nuclear criticality safety. The fair source game problem was posed at almost the same time as the Monte Carlo method was first applied to nuclear criticality safety calculations: the cost of source iteration should be reduced as much as possible, or source iteration avoided altogether. Such problems all belong to the class of fair source game problems, among which the optimal source game requires no source iteration at all. Although the single-neutron Monte Carlo method solves the problem without source iteration, it still has an apparent shortcoming: it does so only in the asymptotic sense. In this work, a new Monte Carlo method, called the cluster Monte Carlo method, is given to solve the problem further
International Nuclear Information System (INIS)
Preumont, A.; Shilab, S.; Cornaggia, L.; Reale, M.; Labbe, P.; Noe, H.
1992-01-01
This benchmark exercise is the continuation of the state-of-the-art review (EUR 11369 EN) which concluded that the random vibration approach could be an effective tool in seismic analysis of nuclear power plants, with potential advantages on time history and response spectrum techniques. As compared to the latter, the random vibration method provides an accurate treatment of multisupport excitations, non classical damping as well as the combination of high-frequency modal components. With respect to the former, the random vibration method offers direct information on statistical variability (probability distribution) and cheaper computations. The disadvantages of the random vibration method are that it is based on stationary results, and requires a power spectral density input instead of a response spectrum. A benchmark exercise to compare the three methods from the various aspects mentioned above, on one or several simple structures has been made. The following aspects have been covered with the simplest possible models: (i) statistical variability, (ii) multisupport excitation, (iii) non-classical damping. The random vibration method is therefore concluded to be a reliable method of analysis. Its use is recommended, particularly for preliminary design, owing to its computational advantage on multiple time history analysis
International Nuclear Information System (INIS)
Nojiri, I.; Fukasaku, Y.; Narita, O.
1994-01-01
A general purpose user's version of the EGS4 code system has been developed to make EGS4 easily applicable to the safety analysis of nuclear fuel cycle facilities. One such application involves the determination of skyshine dose for a variety of photon sources. To verify the accuracy of the code, it was benchmarked against the Kansas State University (KSU) photon skyshine experiment of 1977. The results of the simulation showed that this version of EGS4 is applicable to skyshine calculations. (author)
Recent R and D around the Monte-Carlo code Tripoli-4 for criticality calculation
International Nuclear Information System (INIS)
Hugot, F.X.; Lee, Y.K.; Malvagi, F.
2008-01-01
TRIPOLI-4 [1] is the fourth generation of the TRIPOLI family of Monte Carlo codes developed by CEA since the 1960s. It simulates the 3D transport of neutrons, photons, electrons and positrons, as well as coupled neutron-photon propagation and electron-photon cascade showers. The code addresses radiation protection and shielding problems, as well as criticality and reactor physics problems, through both critical and subcritical neutronics calculations. It uses full pointwise as well as multigroup cross-sections. The code has been validated through several hundred benchmarks as well as measurement campaigns. It is used as a reference tool by CEA as well as its industrial and institutional partners, and in the NURESIM [2] European project. Section 2 reviews its main features, with emphasis on the latest developments. Section 3 presents some recent R and D for criticality calculations: fission matrix, eigenvalue, and eigenvector computations are presented, and corrections to the standard deviation estimator in the case of correlations between generation steps are detailed. Section 4 presents some preliminary results obtained with the new mesh tally feature. The last section discusses the benefits of XML-format output files. (authors)
Energy Technology Data Exchange (ETDEWEB)
Chiang, Min-Han; Wang, Jui-Yu [Institute of Nuclear Engineering and Science, National Tsing Hua University, 101, Section 2, Kung-Fu Road, Hsinchu 30013, Taiwan (China); Sheu, Rong-Jiun, E-mail: rjsheu@mx.nthu.edu.tw [Institute of Nuclear Engineering and Science, National Tsing Hua University, 101, Section 2, Kung-Fu Road, Hsinchu 30013, Taiwan (China); Department of Engineering System and Science, National Tsing Hua University, 101, Section 2, Kung-Fu Road, Hsinchu 30013, Taiwan (China); Liu, Yen-Wan Hsueh [Institute of Nuclear Engineering and Science, National Tsing Hua University, 101, Section 2, Kung-Fu Road, Hsinchu 30013, Taiwan (China); Department of Engineering System and Science, National Tsing Hua University, 101, Section 2, Kung-Fu Road, Hsinchu 30013, Taiwan (China)
2014-05-01
The High Temperature Engineering Test Reactor (HTTR) in Japan is a helium-cooled graphite-moderated reactor designed and operated for the future development of high-temperature gas-cooled reactors. Two detailed full-core models of HTTR have been established by using SCALE6 and MCNP5/X, respectively, to study its neutronic properties. Several benchmark problems were repeated first to validate the calculation models. Careful code-to-code comparisons were made to ensure that two calculation models are both correct and equivalent. Compared with experimental data, the two models show a consistent bias of approximately 20–30 mk overestimation in effective multiplication factor for a wide range of core states. Most of the bias could be related to the ENDF/B-VII.0 cross-section library or incomplete modeling of impurities in graphite. After that, a series of systematic analyses was performed to investigate the effects of cross sections on the HTTR criticality and burnup calculations, with special interest in the comparison between continuous-energy and multigroup results. Multigroup calculations in this study were carried out in 238-group structure and adopted the SCALE double-heterogeneity treatment for resonance self-shielding. The results show that multigroup calculations tend to underestimate the system eigenvalue by a constant amount of ∼5 mk compared to their continuous-energy counterparts. Further sensitivity studies suggest the differences between multigroup and continuous-energy results appear to be temperature independent and also insensitive to burnup effects.
International Nuclear Information System (INIS)
Chiang, Min-Han; Wang, Jui-Yu; Sheu, Rong-Jiun; Liu, Yen-Wan Hsueh
2014-01-01
The High Temperature Engineering Test Reactor (HTTR) in Japan is a helium-cooled graphite-moderated reactor designed and operated for the future development of high-temperature gas-cooled reactors. Two detailed full-core models of HTTR have been established by using SCALE6 and MCNP5/X, respectively, to study its neutronic properties. Several benchmark problems were repeated first to validate the calculation models. Careful code-to-code comparisons were made to ensure that two calculation models are both correct and equivalent. Compared with experimental data, the two models show a consistent bias of approximately 20–30 mk overestimation in effective multiplication factor for a wide range of core states. Most of the bias could be related to the ENDF/B-VII.0 cross-section library or incomplete modeling of impurities in graphite. After that, a series of systematic analyses was performed to investigate the effects of cross sections on the HTTR criticality and burnup calculations, with special interest in the comparison between continuous-energy and multigroup results. Multigroup calculations in this study were carried out in 238-group structure and adopted the SCALE double-heterogeneity treatment for resonance self-shielding. The results show that multigroup calculations tend to underestimate the system eigenvalue by a constant amount of ∼5 mk compared to their continuous-energy counterparts. Further sensitivity studies suggest the differences between multigroup and continuous-energy results appear to be temperature independent and also insensitive to burnup effects
International Nuclear Information System (INIS)
Mahalakshmi, B.; Mohanakrishnan, P.
1993-01-01
Investigations were performed on the ZPPR-13A critical assembly to determine the cause of the radial variation of the calculated-to-experimental (C/E) ratio for control rod worth in large heterogeneous cores. The effects of errors in cross sections, mesh size, group condensation, transport, and modeling were studied by using two- and three-dimensional diffusion calculations and three-dimensional transport calculations. In that process, the cross-section set and the calculation scheme being used for fast reactor design in India have been revalidated. The cross-section set was found to yield satisfactory results. Three-dimensional calculations with adjusted and unadjusted cross sections confirmed that the error in cross sections was largely responsible for the radial dependence of the C/E ratios. The contributions from group condensation and mesh size errors were < 2%, and from modeling errors and transport correction, < 1%. The effect of these errors is insignificant when compared with the effect of the cross-section error. The analysis also showed that even without the adjustment in diffusion coefficient suggested in earlier studies, a satisfactory prediction is obtained, at least for this benchmark. The diffusion-to-transport correction for control rod worth was found to be -7%
Determining the nuclear data uncertainty on MONK10 and WIMS10 criticality calculations
Directory of Open Access Journals (Sweden)
Ware Tim
2017-01-01
Full Text Available The ANSWERS Software Service is developing a number of techniques to better understand and quantify uncertainty on calculations of the neutron multiplication factor, k-effective, in nuclear fuel and other systems containing fissile material. The uncertainty on the calculated k-effective arises from a number of sources, including nuclear data uncertainties, manufacturing tolerances, modelling approximations and, for Monte Carlo simulation, stochastic uncertainty. For determining the uncertainties due to nuclear data, a set of application libraries have been generated for use with the MONK10 Monte Carlo and the WIMS10 deterministic criticality and reactor physics codes. This paper overviews the generation of these nuclear data libraries by Latin hypercube sampling of JEFF-3.1.2 evaluated data based upon a library of covariance data taken from JEFF, ENDF/B, JENDL and TENDL evaluations. Criticality calculations have been performed with MONK10 and WIMS10 using these sampled libraries for a number of benchmark models of fissile systems. Results are presented which show the uncertainty on k-effective for these systems arising from the uncertainty on the input nuclear data.
Determining the nuclear data uncertainty on MONK10 and WIMS10 criticality calculations
Ware, Tim; Dobson, Geoff; Hanlon, David; Hiles, Richard; Mason, Robert; Perry, Ray
2017-09-01
The ANSWERS Software Service is developing a number of techniques to better understand and quantify uncertainty on calculations of the neutron multiplication factor, k-effective, in nuclear fuel and other systems containing fissile material. The uncertainty on the calculated k-effective arises from a number of sources, including nuclear data uncertainties, manufacturing tolerances, modelling approximations and, for Monte Carlo simulation, stochastic uncertainty. For determining the uncertainties due to nuclear data, a set of application libraries have been generated for use with the MONK10 Monte Carlo and the WIMS10 deterministic criticality and reactor physics codes. This paper overviews the generation of these nuclear data libraries by Latin hypercube sampling of JEFF-3.1.2 evaluated data based upon a library of covariance data taken from JEFF, ENDF/B, JENDL and TENDL evaluations. Criticality calculations have been performed with MONK10 and WIMS10 using these sampled libraries for a number of benchmark models of fissile systems. Results are presented which show the uncertainty on k-effective for these systems arising from the uncertainty on the input nuclear data.
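The Latin hypercube sampling step described above can be sketched in a simplified form: each uncertain parameter's range is split into equal-probability strata, with exactly one sample drawn per stratum. The single perturbed cross-section, its nominal value, and the 1% relative uncertainty below are hypothetical placeholders; the actual libraries are sampled from full covariance data across many nuclides and reactions.

```python
import random
from statistics import NormalDist

def latin_hypercube(n_samples, n_dims, rng):
    """n_samples x n_dims LHS design on (0,1): each dimension is stratified
    into n_samples equal-probability bins, one point per bin, bins shuffled
    independently per dimension."""
    cols = []
    for _ in range(n_dims):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        cols.append(col)
    return [list(row) for row in zip(*cols)]   # transpose to per-sample rows

rng = random.Random(7)
us = latin_hypercube(100, 1, rng)

# Map the stratified uniforms through an assumed Gaussian uncertainty:
# nominal cross-section 1.0 with 1% relative standard deviation.
nd = NormalDist(mu=1.0, sigma=0.01)
factors = [nd.inv_cdf(u[0]) for u in us]
perturbed = [1.0 * f for f in factors]        # sampled cross-section values
mean = sum(perturbed) / len(perturbed)
# stratification makes the sample mean reproduce the nominal value closely
```

Each sampled library would then drive one MONK10 or WIMS10 calculation, and the spread of the resulting k-effective values gives the nuclear-data uncertainty.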
Riding Bare-Back on unstructured meshes for 21st century criticality calculations - 244
International Nuclear Information System (INIS)
Kelley, K.C.; Martz, R.L.; Crane, D.L.
2010-01-01
MCNP has a new capability that permits tracking of neutrons and photons on an unstructured mesh which is embedded as a mesh universe within its legacy geometry capability. The mesh geometry is created through Abaqus/CAE using its solid modeling capabilities. Transport results are calculated for mesh elements through a path length estimator while element to element tracking is performed on the mesh. The results from MCNP can be exported to Abaqus/CAE for visualization or other-physics analysis. The simple Godiva criticality benchmark problem was tested with this new mesh capability. Computer run time is proportional to the number of mesh elements used. Both first and second order polyhedrons are used. Models that used second order polyhedrons produced slightly better results without significantly increasing computer run time. Models that used first order hexahedrons had shorter runtimes than models that used first order tetrahedrons. (authors)
Criticality calculations for a graphite-moderated critical assembly using 20% enriched uranium
International Nuclear Information System (INIS)
Almeida Ferreira, A.C. de; Hukai, R.Y.
1975-01-01
The construction of a Zero Power Reactor (ZPR) at the Instituto de Energia Atomica, in order to measure the neutron characteristics (parameters) of HTGR reactors, is proposed. The quantity of fissile uranium necessary for these measurements has been calculated. Criticality studies of graphite-moderated critical assemblies containing thorium have been made, and the critical mass of each of several typical commercial HTGR compositions has been calculated using the computer codes HAMMER and CITATION. The assemblies investigated contained a central cylindrical core region simulating a typical commercial HTGR composition, a uranium-graphite driver region, and an outer pure-graphite reflector region. It is concluded that a 10 kg inventory of fissile uranium will be required for a program of measurements utilizing each of the several calculated assemblies
Report on the benchmark of products & processes and ranking of cruciality and criticity
DEFF Research Database (Denmark)
Islam, Aminul
The objective of this deliverables is to present the results of benchmarking activities for each COTECH demonstrator and their planned production process. Each section is dedicated to a demonstrator mentioned below: Section 1 Innovative accommodable intra-ocular lens (BI) Section 2 Cheap substrat...... Micro socket for signal carriage of a hearing aid instruments (SONION) Section 8 Micro-cooling of electronic components (ATHERM)...
Accuracy of WWR-M criticality calculations with code MCU-RFFI
International Nuclear Information System (INIS)
Petrov, Yu.V.; Erykalov, A.N.; Onegin, M.S.
1999-01-01
Scatter and deviations in fuel element parameters due to manufacturing, approximations of the reactor structure in the computer model, the partly inadequate neutron cross sections in the computer codes, etc., lead to a discrepancy between reactivity computations and data. We have compared reactivity calculations using the MCU-RFFI Monte Carlo code for critical assemblies containing WWR-M2 (36% enriched) and WWR-M5 (90% enriched) fuel elements with benchmark experiments. The agreement was about Δρ ≅ ±0.3%. A strong influence of the water ratio on reactivity was shown and a significant heterogeneous effect was found. We have also investigated, by full-scale reactor calculations for the RETR program, the contribution to the reactivity of the main reactor structure elements: beryllium reflector, experimental channels, irradiation devices inside the core, etc. Calculations show the importance of a more thorough study of the contributions of products of the (n, α) reaction in the Be reflector to the reactivity. Ways of improving the accuracy of the calculations are discussed. (author)
Accuracy of WWR-M criticality calculations with code MCU-RFFI
Energy Technology Data Exchange (ETDEWEB)
Petrov, Yu V [Petersburg Nuclear Physics Institute RAS, 188350 Gatchina, St. Petersburg (Russian Federation); Erykalov, A N; Onegin, M S [Petersburg Nuclear Physics Institute RAS, 188350 Gatchina, St. Petersburg (Russian Federation)
1999-10-01
Scatter and deviations in fuel element parameters due to manufacturing, approximations of the reactor structure in the computer model, the partly inadequate neutron cross sections in the computer codes, etc., lead to a discrepancy between reactivity computations and data. We have compared reactivity calculations using the MCU-RFFI Monte Carlo code for critical assemblies containing WWR-M2 (36% enriched) and WWR-M5 (90% enriched) fuel elements with benchmark experiments. The agreement was about Δρ ≅ ±0.3%. A strong influence of the water ratio on reactivity was shown and a significant heterogeneous effect was found. We have also investigated, by full-scale reactor calculations for the RETR program, the contribution to the reactivity of the main reactor structure elements: beryllium reflector, experimental channels, irradiation devices inside the core, etc. Calculations show the importance of a more thorough study of the contributions of products of the (n, α) reaction in the Be reflector to the reactivity. Ways of improving the accuracy of the calculations are discussed. (author)
Directory of Open Access Journals (Sweden)
Tanaka Ken-ichi
2016-01-01
Full Text Available We performed a benchmark calculation for radioactivity activated in the Primary Containment Vessel (PCV) of a Boiling Water Reactor (BWR) by using the MAXS library, which was developed by collapsing with neutron energy spectra in the PCV of the BWR. Radioactivities due to neutron irradiation were measured using gold (Au) and nickel (Ni) activation foil detectors at thirty locations in the PCV. As the benchmark calculation, we performed activation calculations of the foils with the SCALE5.1/ORIGEN-S code using the irradiation conditions of each foil location. We compared calculations and measurements to assess the effectiveness of the MAXS library.
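The activation-foil principle behind such measurements can be sketched with the standard saturation-activity formula, A(t) = N·σ·φ·(1 − e^(−λt)). The foil mass, flux, and irradiation time below are illustrative assumptions, not the conditions of this benchmark; the cross-section and half-life are the usual thermal values for Au-197(n,γ)Au-198.

```python
from math import exp, log

def foil_activity(n_atoms, sigma_cm2, flux, half_life_s, t_irr_s):
    """Activity of an activation foil at end of irradiation [Bq]:
    A(t) = N * sigma * phi * (1 - exp(-lambda * t))."""
    lam = log(2.0) / half_life_s
    return n_atoms * sigma_cm2 * flux * (1.0 - exp(-lam * t_irr_s))

# Illustrative (assumed) conditions: a 10 mg gold foil, thermal (n,gamma)
# cross-section ~98.7 b, Au-198 half-life 2.695 d, thermal flux
# 1e8 n/cm2/s, 24 h irradiation.
N_A = 6.022e23
n_au = 0.010 / 196.97 * N_A       # atoms of Au-197 in a 10 mg foil
sigma = 98.7e-24                  # cm^2 (98.7 barns)
phi = 1.0e8                       # n/cm^2/s
t_half = 2.695 * 86400.0          # s
activity = foil_activity(n_au, sigma, phi, t_half, 86400.0)
# roughly 7e4 Bq for these assumed conditions
```

A code such as ORIGEN-S performs the same bookkeeping with full decay chains and spectrum-weighted cross sections rather than a single reaction.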
Criticality calculations in reactor accelerator coupling experiment (Race)
International Nuclear Information System (INIS)
Reda, M.A.; Spaulding, R.; Hunt, A.; Harmon, J.F.; Beller, D.E.
2005-01-01
A Reactor Accelerator Coupling Experiment (RACE) is to be performed at the Idaho State University Idaho Accelerator Center (IAC). The electron accelerator is used to generate neutrons by inducing Bremsstrahlung photon-neutron reactions in a tungsten-copper target. This accelerator/target system produces a source of ∼10¹² n/s, which can initiate fission reactions in the subcritical system. This coupling experiment between a 40-MeV electron accelerator and a subcritical system will allow us to predict and measure coupling efficiency, reactivity, and multiplication. In this paper, the results of the criticality and multiplication calculations, which were carried out using the Monte Carlo radiation transport code MCNPX for different coupling design options, are presented. The fuel plate arrangements and the surrounding tank dimensions have been optimized. Criticality using graphite instead of water for the reflector/moderator outside of the core region has been studied. The RACE configuration at the IAC will have a criticality (k-effective) of about 0.92 and a multiplication of about 10. (authors)
Han, Jeong-Hwan; Oda, Takuji
2018-04-01
The performance of exchange-correlation functionals in density-functional theory (DFT) calculations for liquid metals has not been sufficiently examined. In the present study, benchmark tests of Perdew-Burke-Ernzerhof (PBE), Armiento-Mattsson 2005 (AM05), PBE re-parameterized for solids, and local density approximation (LDA) functionals are conducted for liquid sodium. The pair correlation function, equilibrium atomic volume, bulk modulus, and relative enthalpy are evaluated at 600 K and 1000 K. Compared with the available experimental data, the errors range from -11.2% to 0.0% for the atomic volume, from -5.2% to 22.0% for the bulk modulus, and from -3.5% to 2.5% for the relative enthalpy, depending on the DFT functional. The generalized gradient approximation functionals are superior to the LDA functional, and the PBE and AM05 functionals exhibit the best performance. In addition, we assess whether the error tendency in liquid simulations is comparable to that in solid simulations; the results suggest that the atomic volume and relative enthalpy performances are comparable between solid and liquid states, but that the bulk modulus performance is not. These benchmark test results indicate that the results of liquid simulations are significantly dependent on the exchange-correlation functional and that the DFT functional performance in solid simulations can be used to roughly estimate the performance in liquid simulations.
Optical rotation calculated with time-dependent density functional theory: the OR45 benchmark.
Srebro, Monika; Govind, Niranjan; de Jong, Wibe A; Autschbach, Jochen
2011-10-13
Time-dependent density functional theory (TDDFT) computations are performed for 42 organic molecules and three transition metal complexes, with experimental molar optical rotations ranging from 2 to 2 × 10⁴ deg cm² dmol⁻¹. The performances of the global hybrid functionals B3LYP, PBE0, and BHLYP, and of the range-separated functionals CAM-B3LYP and LC-PBE0 (the latter being fully long-range corrected), are investigated. The performance of different basis sets is studied. When compared to liquid-phase experimental data, the range-separated functionals do, on average, not perform better than B3LYP and PBE0. Median relative deviations between calculations and experiment range from 25 to 29%. A basis set recently proposed for optical rotation calculations (LPol-ds) on average does not give improved results compared to aug-cc-pVDZ in TDDFT calculations with B3LYP. Individual cases are discussed in some detail, among them norbornenone, for which the LC-PBE0 functional produced an optical rotation that is close to available data from coupled-cluster calculations, but significantly smaller in magnitude than the liquid-phase experimental value. Range-separated functionals and BHLYP perform well for helicenes and helicene derivatives. Metal complexes pose a challenge to first-principles calculations of optical rotation.
Energy Technology Data Exchange (ETDEWEB)
Li, M [Wayne State Univeristy, Detroit, MI (United States); Chetty, I [Henry Ford Health System, Detroit, MI (United States); Zhong, H [Henry Ford Hospital System, Detroit, MI (United States)
2014-06-01
Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a “demons” algorithm. The demons DVFs were corrected by an FEM model to get realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVF formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans with 3 and 5 mm margins were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to get the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to get the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. Their relative dose errors in PTV were between 0.28% and 6.8% for 3mm margin plans, and between 0.29% and 6.3% for 5mm-margin plans. As the PTV margin reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean NTCP errors decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients.
International Nuclear Information System (INIS)
Li, M; Chetty, I; Zhong, H
2014-01-01
Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a “demons” algorithm. The demons DVFs were corrected by an FEM model to get realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVF formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans with 3 and 5 mm margins were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to get the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to get the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. Their relative dose errors in PTV were between 0.28% and 6.8% for 3mm margin plans, and between 0.29% and 6.3% for 5mm-margin plans. As the PTV margin reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean NTCP errors decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients
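The sensitivity of TCP to voxel-level dose errors can be illustrated with the standard Poisson TCP model built on linear-quadratic cell survival. This is a generic textbook model, not the specific TCP formalism of the study, and the radiobiological parameters and clonogen density below are assumed for illustration only.

```python
from math import exp

def tcp_poisson(dose_per_voxel, n0_per_voxel, alpha, beta, n_fractions):
    """Poisson TCP with linear-quadratic cell survival:
    SF(D) = exp(-alpha*D - beta*D*d), with d = D/n_fractions;
    TCP = prod_i exp(-N0_i * SF(D_i)) over tumour voxels."""
    tcp = 1.0
    for D, n0 in zip(dose_per_voxel, n0_per_voxel):
        d = D / n_fractions
        sf = exp(-alpha * D - beta * D * d)
        tcp *= exp(-n0 * sf)
    return tcp

# Illustrative: uniform 60 Gy in 3 fractions vs. one voxel under-dosed by
# 10%, with assumed alpha = 0.1 /Gy, beta = 0.01 /Gy^2 and 1e7 clonogens
# per voxel (hypothetical values).
uniform = tcp_poisson([60.0] * 10, [1e7] * 10, 0.1, 0.01, 3)
cold    = tcp_poisson([60.0] * 9 + [54.0], [1e7] * 10, 0.1, 0.01, 3)
# a single 10% cold voxel cuts TCP sharply, which is why dose-accumulation
# (registration) errors can noticeably perturb the computed TCP
```

The steep dependence on local dose is the reason registration-induced dose errors of a few percent translate into TCP errors of a similar or larger magnitude.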
International Nuclear Information System (INIS)
Paratte, J.M.; Frueh, R.; Kasemeyer, U.; Kalugin, M.A.; Timm, W.; Chawla, R.
2006-01-01
Measurements in the CROCUS reactor at EPFL, Lausanne, are reported for the critical water level and the inverse reactor period for several different sets of delayed supercritical conditions. The experimental configurations were also calculated by four different calculation methods. For each of the supercritical configurations, the absolute reactivity value has been determined in two different ways, viz.: (i) through direct comparison of the multiplication factor obtained employing a given calculation method with the corresponding value for the critical case (calculated reactivity: ρ_calc); (ii) by application of the inhour equation using the kinetic parameters obtained for the critical configuration and the measured inverse reactor period (measured reactivity: ρ_meas). The calculated multiplication factors for the reference critical configuration, as well as ρ_calc for the supercritical cases, are found to be in good agreement. However, the values of ρ_meas produced by two of the applied calculation methods differ appreciably from the corresponding ρ_calc values, clearly indicating deficiencies in the kinetic parameters obtained from these methods
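The inhour-equation step, mapping a measured stable reactor period to reactivity via delayed-neutron kinetic parameters, can be sketched as follows. The six-group data are the common textbook set for thermal fission of U-235, and the neutron generation time is an assumed placeholder, not the CROCUS values.

```python
def reactivity_from_period(period_s, betas, lambdas, gen_time=2e-5):
    """Inhour equation for a stable period T:
    rho = Lambda/T + sum_i beta_i / (1 + lambda_i * T)."""
    rho = gen_time / period_s
    for beta_i, lam_i in zip(betas, lambdas):
        rho += beta_i / (1.0 + lam_i * period_s)
    return rho

# Common six-group delayed-neutron data for thermal fission of U-235
# (textbook values, used here for illustration).
betas   = [2.47e-4, 1.3845e-3, 1.222e-3, 2.6455e-3, 8.32e-4, 1.69e-4]
lambdas = [0.0127, 0.0317, 0.115, 0.311, 1.40, 3.87]   # decay constants, 1/s

rho = reactivity_from_period(100.0, betas, lambdas)
# a stable 100 s positive period corresponds to roughly +60 pcm here
```

Deficient kinetic parameters (betas, lambdas, or the generation time) shift the value of ρ_meas obtained this way, which is the discrepancy mechanism the paper identifies.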
Automatic fission source convergence criteria for Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Shim, Hyung Jin; Kim, Chang Hyo
2005-01-01
The Monte Carlo criticality calculations for the multiplication factor and the power distribution in a nuclear system require knowledge of stationary or fundamental-mode fission source distribution (FSD) in the system. Because it is a priori unknown, so-called inactive cycle Monte Carlo (MC) runs are performed to determine it. The inactive cycle MC runs should be continued until the FSD converges to the stationary FSD. Obviously, if one stops them prematurely, the MC calculation results may have biases because the followup active cycles may be run with the non-stationary FSD. Conversely, if one performs the inactive cycle MC runs more than necessary, one is apt to waste computing time because inactive cycle MC runs are used to elicit the fundamental-mode FSD only. In the absence of suitable criteria for terminating the inactive cycle MC runs, one cannot but rely on empiricism in deciding how many inactive cycles one should conduct for a given problem. Depending on the problem, this may introduce biases into Monte Carlo estimates of the parameters one tries to calculate. The purpose of this paper is to present new fission source convergence criteria designed for the automatic termination of inactive cycle MC runs
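A widely used diagnostic for fission source convergence is the Shannon entropy of the binned FSD, which stabilizes once the fundamental mode is reached. The sketch below uses a simple plateau heuristic and synthetic entropy data for illustration; it is not the specific criteria proposed in the paper.

```python
from math import log2

def shannon_entropy(source_counts):
    """Shannon entropy (bits) of a binned fission-source distribution;
    stationarity of H over successive cycles indicates FSD convergence."""
    total = sum(source_counts)
    h = 0.0
    for c in source_counts:
        if c > 0:
            p = c / total
            h -= p * log2(p)
    return h

def looks_converged(entropies, window=10, tol=0.01):
    """Heuristic: declare convergence when the last `window` cycle entropies
    all lie within `tol` of their window mean."""
    if len(entropies) < window:
        return False
    recent = entropies[-window:]
    mean = sum(recent) / window
    return all(abs(h - mean) <= tol for h in recent)

# Synthetic entropy history rising toward a 3.0-bit plateau
history = [2.0 + 1.0 * (1 - 0.7 ** n) for n in range(30)]
# early cycles are not converged; the tail of the history is
```

With such a test, the number of inactive cycles can be terminated automatically per problem instead of being fixed by empiricism.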
International Nuclear Information System (INIS)
Liem, Peng Hong; Sembiring, Tagor Malem
2012-01-01
Highlights: ► Benchmark calculations of the new JENDL-4.0 library. ► Thermal research reactor with oxide LEU fuel, H 2 O moderator and Be reflector. ► JENDL-4.0 library shows better C/E values for criticality evaluations. - Abstract: Benchmark calculations with the new JENDL-4.0 library on the criticality experiments of a thermal research reactor with oxide low-enriched uranium (LEU, 20 w/o) fuel, light water moderator and beryllium reflector (RSG GAS) have been conducted using a continuous-energy Monte Carlo code, MVP-II. The JENDL-4.0 library shows better C/E values than the former library JENDL-3.3 and other widely used recent libraries (ENDF/B-VII.0 and JEFF-3.1).
Some benchmark calculations for VVER-1000 assemblies by WIMS-7B code
International Nuclear Information System (INIS)
Sultanov, N.V.
2001-01-01
Our aim in this report is to compare calculation results obtained with the different libraries available in the WIMS-7B code. Three libraries were used: a 1986 library based on the UKNDL files, and two 1996 libraries based on the JEF-2.2 files, one in the 69-group approximation and the other in the 172-group approximation. We also wanted to gain some experience with CACTUS, a new option of WIMS-7B. The WIMS-7B variant was placed at our disposal by the code authors for temporary use for nine months. It was natural to make comparisons with analogous values from the TVS-M, MCU, Apollo-2, Casmo-4, Conkemo, MCNP and HELIOS codes, where other libraries were used. In accordance with our aims, calculations of unprofiled and profiled assemblies of the VVER-1000 reactor have been carried out with the CACTUS option, which performs calculations by the method of characteristics. The calculation results have been compared with the K ∞ values obtained by the other codes. The conclusion from this analysis is that the methodical parts of the errors of these codes have nearly the same values; the spread in K eff values can be explained mainly by differences in the library microscopic cross sections. Nevertheless, a more detailed analysis of the results is required. In conclusion, a depletion calculation of a VVER-1000 cell has been carried out, and the dependence of the multiplication factor on depletion obtained by WIMS-7B with the different libraries has been compared with that from the TVS-M, MCU, HELIOS and WIMS-ABBN codes. (orig.)
HEXTRAN-SMABRE calculation of the 6th AER Benchmark, main steam line break in a WWER-440 NPP
International Nuclear Information System (INIS)
Haemaelaeinen, A.; Kyrki-Rajamaeki, R.
2003-01-01
The sixth AER benchmark is the second AER benchmark for couplings of thermal hydraulic codes and three-dimensional neutron kinetic core models. It concerns a double-ended break of one main steam line in a WWER-440 plant. The core is at the end of its first cycle in full power conditions. At VTT, HEXTRAN2.9 is used for the core kinetics and dynamics, and SMABRE4.8 as a thermal hydraulic model for the primary and secondary loops. The plant model for SMABRE consists mainly of two input models, the Loviisa model and a standard WWER-440/213 plant model. The primary circuit includes six separate loops, the pressure vessel is divided into six parallel channels in SMABRE, and the whole-core calculation is performed with HEXTRAN. The horizontal steam generators are modelled with heat transfer tubes on five levels and vertically in two parts, riser and downcomer. With this kind of detailed modelling of the steam generators, strong flashing occurs after the break opens. As a consequence of the main steam line break at nominal power level, the reactor trip follows quite soon. The liquid temperature continues to decrease in one core inlet sector, which may lead to recriticality and a neutron power increase. The situation is very sensitive to small changes in the steam generator and break flow modelling, and therefore several sensitivity calculations have been done. Two stuck control rods have also been assumed. Due to the boric acid concentration in the high-pressure safety injection, subcriticality is finally guaranteed in the transient. (Authors)
Energy Technology Data Exchange (ETDEWEB)
Epifanovsky, Evgeny [Department of Chemistry, University of Southern California, Los Angeles, California 90089-0482 (United States); Department of Chemistry, University of California, Berkeley, California 94720 (United States); Q-Chem Inc., 6601 Owens Drive, Suite 105, Pleasanton, California 94588 (United States); Klein, Kerstin; Gauss, Jürgen [Institut für Physikalische Chemie, Universität Mainz, D-55099 Mainz (Germany); Stopkowicz, Stella [Department of Chemistry, Centre for Theoretical and Computational Chemistry, University of Oslo, N-0315 Oslo (Norway); Krylov, Anna I. [Department of Chemistry, University of Southern California, Los Angeles, California 90089-0482 (United States)
2015-08-14
We present a formalism and an implementation for calculating spin-orbit couplings (SOCs) within the EOM-CCSD (equation-of-motion coupled-cluster with single and double substitutions) approach. The following variants of EOM-CCSD are considered: EOM-CCSD for excitation energies (EOM-EE-CCSD), EOM-CCSD with spin-flip (EOM-SF-CCSD), EOM-CCSD for ionization potentials (EOM-IP-CCSD) and electron attachment (EOM-EA-CCSD). We employ a perturbative approach in which the SOCs are computed as matrix elements of the respective part of the Breit-Pauli Hamiltonian using zeroth-order non-relativistic wave functions. We follow the expectation-value approach rather than the response-theory formulation for property calculations. Both the full two-electron treatment and the mean-field approximation (a partial account of the two-electron contributions) have been implemented and benchmarked using several small molecules containing elements up to the fourth row of the periodic table. The benchmark results show the excellent performance of the perturbative treatment and the mean-field approximation. When used with an appropriate basis set, the errors with respect to experiment are below 5% for the considered examples. The findings regarding basis-set requirements are in agreement with previous studies. The impact of different correlation treatment in zeroth-order wave functions is analyzed. Overall, the EOM-IP-CCSD, EOM-EA-CCSD, EOM-EE-CCSD, and EOM-SF-CCSD wave functions yield SOCs that agree well with each other (and with the experimental values when available). Using an EOM-CCSD approach that provides a more balanced description of the target states yields more accurate results.
Benchmarking of Touschek Beam Lifetime Calculations for the Advanced Photon Source
Energy Technology Data Exchange (ETDEWEB)
Xiao, A.; Yang, B.
2017-06-25
Particle loss from Touschek scattering is one of the most significant issues faced by present and future synchrotron light source storage rings. For example, the predicted, Touschek-dominated beam lifetime for the Advanced Photon Source (APS) Upgrade lattice in 48-bunch, 200-mA timing mode is only ~ 2 h. In order to understand the reliability of the predicted lifetime, a series of measurements with various beam parameters was performed on the present APS storage ring. This paper first describes the entire process of beam lifetime measurement, then compares measured lifetime with the calculated one by applying the measured beam parameters. The results show very good agreement.
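A standard way to separate the Touschek contribution from a measured total lifetime, sketched here with invented numbers rather than APS data, uses the fact that the Touschek loss rate scales with bunch current while residual-gas losses do not, so the total loss rate 1/τ is linear in bunch current:

```python
def fit_loss_rates(currents, lifetimes):
    """Least-squares fit of 1/tau = a + b*I: `a` is the current-independent
    (gas-scattering) loss rate and b*I the Touschek loss rate (valid at
    fixed bunch length and emittance)."""
    ys = [1.0 / t for t in lifetimes]
    n = len(currents)
    sx = sum(currents); sy = sum(ys)
    sxx = sum(x * x for x in currents)
    sxy = sum(x * y for x, y in zip(currents, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Hypothetical measurements: lifetime (h) versus bunch current (mA)
gas_rate, touschek_slope = 0.02, 0.10  # assumed "true" values, 1/h and 1/(h*mA)
I = [1.0, 2.0, 3.0, 4.0]
tau = [1.0 / (gas_rate + touschek_slope * i) for i in I]
a, b = fit_loss_rates(I, tau)
print(f"gas loss rate ~ {a:.3f} 1/h, Touschek slope ~ {b:.3f} 1/(h*mA)")
```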
Beridze, George; Kowalski, Piotr M
2014-12-18
The ability to perform feasible and reliable computations of the thermochemical properties of chemically complex actinide-bearing materials would be of great importance for nuclear engineering. Unfortunately, density functional theory (DFT), which in many instances is the only affordable ab initio method, often fails for actinides. Among various shortcomings, it leads to wrong estimates of the enthalpies of reactions between actinide-bearing compounds, putting the applicability of the DFT approach to the modeling of thermochemical properties of actinide-bearing materials into question. Here we test the performance of the DFT+U method, a computationally affordable extension of DFT that explicitly accounts for the correlations between f-electrons, for prediction of the thermochemical properties of simple uranium-bearing molecular compounds and solids. We demonstrate that the DFT+U approach significantly improves the description of reaction enthalpies for the uranium-bearing gas-phase molecular compounds and solids, and the deviations from the experimental values are comparable to those obtained with much more computationally demanding methods. Good results are obtained with Hubbard U parameter values derived using the linear response method of Cococcioni and de Gironcoli. We found that the value of the Coulomb on-site repulsion, represented by the Hubbard U parameter, strongly depends on the oxidation state of the uranium atom. Last, but not least, we demonstrate that thermochemistry data can be successfully used to estimate the value of the Hubbard U parameter needed for DFT+U calculations.
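In the linear response scheme of Cococcioni and de Gironcoli mentioned above, U is obtained from the difference of the inverse bare and self-consistent occupation responses to a local potential shift, U = 1/χ0 - 1/χ. A single-site sketch with invented response values:

```python
def hubbard_u(chi0, chi):
    """Linear-response Hubbard U (eV): U = chi0**-1 - chi**-1, where chi0 and
    chi are the bare (non-self-consistent) and fully screened responses
    d(n_f)/d(alpha) of the f-shell occupation to a perturbing potential alpha."""
    return 1.0 / chi0 - 1.0 / chi

# Invented single-site responses (eV**-1); in practice these come from finite
# differences of DFT occupations under an applied potential shift alpha.
chi0 = -0.50  # bare response
chi = -0.20   # self-consistent (screened) response
print(f"U = {hubbard_u(chi0, chi):.2f} eV")
```

The oxidation-state dependence reported in the abstract corresponds to these responses, and hence U, changing with the uranium charge state.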
Nuclear criticality safety calculational analysis for small-diameter containers
International Nuclear Information System (INIS)
LeTellier, M.S.; Smallwood, D.J.; Henkel, J.A.
1995-11-01
This report documents calculations performed to establish a technical basis for the nuclear criticality safety of favorable geometry containers, sometimes referred to as 5-inch containers, in use at the Portsmouth Gaseous Diffusion Plant. A list of containers currently used in the plant is shown in Table 1.0-1. These containers are currently used throughout the plant with no mass limits. The use of containers with geometries or material types other than those addressed in this evaluation must be bounded by this analysis or have an additional analysis performed. The following five basic container geometries were modeled and bound all container geometries in Table 1.0-1: (1) 4.32-inch-diameter by 50-inch-high polyethylene bottle; (2) 5.0-inch-diameter by 24-inch-high polyethylene bottle; (3) 5.25-inch-diameter by 24-inch-high steel can ("F-can"); (4) 5.25-inch-diameter by 15-inch-high steel can ("Z-can"); and (5) 5.0-inch-diameter by 9-inch-high polybottle ("CO-4"). Each container type is evaluated using five basic reflection and interaction models that include single containers and multiple containers in normal and in credible abnormal conditions. The uranium materials evaluated are UO 2 F 2 +H 2 O and UF 4 +oil at 100% and 10% enrichments, and U 3 O 8 +H 2 O at 100% enrichment. The design basis safe criticality limit for the Portsmouth facility is k eff + 2σ < 0.95. The KENO study results may be used as the basis for evaluating general use of these containers in the plant.
DEFF Research Database (Denmark)
Cai, Xiao-Xiao; Llamas-Jansa, Isabel; Mullet, Steven
2013-01-01
Geant4 is an open source general purpose simulation toolkit for particle transportation in matter. Since the extension of the thermal scattering model in Geant4.9.5 and the availability of the IAEA HP model cross section libraries, it is now possible to extend the application area of Geant4 […] U and O in uranium dioxide, Al metal, Be metal, and Fe metal. The native HP cross section library G4NDL does not include data for elements with atomic number larger than 92. Therefore, transuranic elements, which have an impact in a realistic reactor, cannot be simulated by the combination of the HP […] models and the G4NDL library. However, cross sections of those missing isotopes were made available recently through the IAEA project “new evaluated neutron cross section libraries for Geant4”.
International Nuclear Information System (INIS)
Peterson, K.A.; Dunning, T.H. Jr.
1995-01-01
The hydrogen bond energy and geometry of the HF dimer have been investigated using the series of correlation consistent basis sets from aug-cc-pVDZ to aug-cc-pVQZ and several theoretical methods including Moller-Plesset perturbation and coupled cluster theories. Estimates of the complete basis set (CBS) limit have been derived for the binding energy of (HF) 2 at each level of theory by utilizing the regular convergence characteristics of the correlation consistent basis sets. CBS limit hydrogen bond energies of 3.72, 4.53, 4.55, and 4.60 kcal/mol are estimated at the SCF, MP2, MP4, and CCSD(T) levels of theory, respectively. CBS limits for the intermolecular F-F distance are estimated to be 2.82, 2.74, 2.73, and 2.73 Å, respectively, for the same correlation methods. The effects of basis set superposition error (BSSE) on both the binding energies and structures have also been investigated for each basis set using the standard function counterpoise (CP) method. While BSSE has a negligible effect on the intramolecular geometries, the CP-corrected F-F distance and binding energy differ significantly from the uncorrected values for the aug-cc-pVDZ basis set; these differences decrease regularly with increasing basis set size, yielding the same values in the CBS limit. Best estimates for the equilibrium properties of the HF dimer from CCSD(T) calculations are D e = 4.60 kcal/mol, R FF = 2.73 Å, r 1 = 0.922 Å, r 2 = 0.920 Å, Θ 1 = 7°, and Θ 2 = 111°.
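The "regular convergence characteristics" invoked here are often modeled by an exponential in the basis-set cardinal number, E(n) = E_CBS + B·exp(-C·n), which admits a closed-form three-point extrapolation. A sketch with synthetic energies (the paper's raw DZ/TZ/QZ values are not reproduced here):

```python
import math

def cbs_exponential(e2, e3, e4):
    """Three-point CBS extrapolation assuming E(n) = E_cbs + B*exp(-C*n)
    for consecutive cardinal numbers n = 2, 3, 4 (aug-cc-pVDZ/TZ/QZ).
    From the successive differences d1 = E2-E3 and d2 = E3-E4 one gets
    exp(C) = d1/d2 and E_cbs = E4 - d2/(exp(C) - 1)."""
    d1, d2 = e2 - e3, e3 - e4
    expc = d1 / d2
    return e4 - d2 / (expc - 1.0)

# Synthetic binding energies (kcal/mol) generated from the model itself:
E_CBS, B, C = -4.60, 2.0, 1.0
energies = [E_CBS + B * math.exp(-C * n) for n in (2, 3, 4)]
print(f"extrapolated: {cbs_exponential(*energies):.3f} kcal/mol")
```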
International Nuclear Information System (INIS)
Okumura, Keisuke; Nagaya, Yasunobu
2011-09-01
In May 2010, JENDL-4.0 was released by the Japan Atomic Energy Agency as the updated Japanese nuclear data library. It was processed with the nuclear data processing system LICEM, and an arbitrary-temperature neutron cross section library, MVPlib-nJ40, was produced for the neutron and photon transport code MVP, which is based on the continuous-energy Monte Carlo method. The library contains neutron cross sections for 406 nuclides on the free gas model, thermal scattering cross sections, and cross sections of pseudo fission products for burn-up calculations with MVP. Criticality benchmark calculations were carried out with MVP and MVPlib-nJ40 for about 1,000 cases of critical experiments stored in the handbook of the International Criticality Safety Benchmark Evaluation Project (ICSBEP), which covers a wide variety of fuel materials, fuel forms, and neutron spectra. We report all comparison results (C/E values) of effective neutron multiplication factors between calculations and experiments to give validation data for the prediction accuracy of JENDL-4.0 for criticality. (author)
Improved estimation of the variance in Monte Carlo criticality calculations
International Nuclear Information System (INIS)
Hoogenboom, J. Eduard
2008-01-01
Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation, even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)
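For contrast with the improved estimator, the conventional cycle-average statistics that the paper identifies as unreliable at small cycle counts can be sketched as follows (synthetic k eff values, for illustration only):

```python
import math

def keff_statistics(keff_cycles):
    """Conventional estimators from active-cycle k_eff values: sample mean,
    unbiased sample variance, and standard deviation of the mean. For small
    cycle counts the variance estimate itself is noisy, which is what
    motivates history-based estimators."""
    n = len(keff_cycles)
    mean = sum(keff_cycles) / n
    var = sum((k - mean) ** 2 for k in keff_cycles) / (n - 1)
    return mean, var, math.sqrt(var / n)

# Invented per-cycle k_eff estimates from a converged source:
cycles = [1.0012, 0.9998, 1.0005, 1.0001, 0.9994, 1.0009, 0.9997, 1.0003]
mean, var, sigma_mean = keff_statistics(cycles)
print(f"k_eff = {mean:.5f} +/- {sigma_mean:.5f}")
```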
Improved estimation of the variance in Monte Carlo criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J. Eduard [Delft University of Technology, Delft (Netherlands)
2008-07-01
Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k{sub eff} results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k{sub eff} will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k{sub eff} are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation, even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)
Cornelius, Iwan; Guatelli, Susanna; Fournier, Pauline; Crosbie, Jeffrey C; Sanchez Del Rio, Manuel; Bräuer-Krisch, Elke; Rosenfeld, Anatoly; Lerch, Michael
2014-05-01
Microbeam radiation therapy (MRT) is a synchrotron-based radiotherapy modality that uses high-intensity beams of spatially fractionated radiation to treat tumours. The rapid evolution of MRT towards clinical trials demands accurate treatment planning systems (TPS), as well as independent tools for the verification of TPS calculated dose distributions in order to ensure patient safety and treatment efficacy. Monte Carlo computer simulation represents the most accurate method of dose calculation in patient geometries and is best suited for the purpose of TPS verification. A Monte Carlo model of the ID17 biomedical beamline at the European Synchrotron Radiation Facility has been developed, including recent modifications, using the Geant4 Monte Carlo toolkit interfaced with the SHADOW X-ray optics and ray-tracing libraries. The code was benchmarked by simulating dose profiles in water-equivalent phantoms subject to irradiation by broad-beam (without spatial fractionation) and microbeam (with spatial fractionation) fields, and comparing against those calculated with a previous model of the beamline developed using the PENELOPE code. Validation against additional experimental dose profiles in water-equivalent phantoms subject to broad-beam irradiation was also performed. Good agreement between codes was observed, with the exception of out-of-field doses and toward the field edge for larger field sizes. Microbeam results showed good agreement between both codes and experimental results within uncertainties. Results of the experimental validation showed agreement for different beamline configurations. The asymmetry in the out-of-field dose profiles due to polarization effects was also investigated, yielding important information for the treatment planning process in MRT. This work represents an important step in the development of a Monte Carlo-based independent verification tool for treatment planning in MRT.
International Nuclear Information System (INIS)
Kooyman, Timothee; Messaoudia, Nadia
2014-01-01
A sensitivity study on a set of evaluated criticality benchmarks with two versions of the JEFF nuclear data library, namely JEFF-3.1.2 and JEFF-3.2T, and with ENDF/B-VII.1 was performed using MCNP(X) 2.6.0. As these benchmarks serve to estimate the upper safety limit for criticality risk analysis at SCK·CEN, the sensitivity of their results to nuclear data is an important parameter to assess. Several nuclides were identified as being responsible for an evident change in the effective multiplication factor k eff : 235 U, 239 Pu, 240 Pu, 54 Fe, 56 Fe, 57 Fe and 208 Pb. A high sensitivity was found to the fission cross-section of all the fissile material in the study. Additionally, a smaller sensitivity to the inelastic and capture cross-sections of 235 U and 240 Pu was also found. Sensitivity to the scattering law for non-fissile material was postulated. The biggest change in k eff due to non-fissile material was due to the 208 Pb evaluation (±700 pcm), followed by 56 Fe (±360 pcm), for both versions of the JEFF library. Changes due to 235 U (±300 pcm) and Pu isotopes (±120 pcm for 239 Pu and ±80 pcm for 240 Pu) were found only with JEFF-3.1.2. 238 U was found to have no effect on k eff . Significant improvements were identified between the two versions of the JEFF library. No further differences were found between the JEFF-3.2T and ENDF/B-VII.1 calculations involving 235 U or Pu. (authors)
Energy Technology Data Exchange (ETDEWEB)
Bess, J. D.; Briggs, J. B.; Gulliford, J.; Ivanova, T.; Rozhikhin, E. V.; Semenov, M. Yu.; Tsibulya, A. M.; Koscheev, V. N.
2017-07-01
Of interest are the critical experiments with fast reactor fuel rods in water, relevant to the justification of nuclear safety during transportation and storage of fresh and spent fuel. These reports provide a detailed review of the experiments, designate their area of application, and include results of calculations with modern constants systems in comparison with the evaluated experimental data.
International Nuclear Information System (INIS)
Cho, Moon-Sung; Kim, Y. M.; Lee, Y. W.; Jeong, K. C.; Kim, Y. K.; Oh, S. C.
2006-01-01
The fundamental design for a gas-cooled reactor relies on an understanding of the behavior of coated particle fuel. KAERI, which has been carrying out the Korean VHTR (Very High Temperature modular gas cooled Reactor) Project since 2004, is developing a fuel performance analysis code for a VHTR named COPA (COated Particle fuel Analysis). COPA predicts temperatures, stresses, fission gas release and failure probabilities of coated particle fuel in normal operating conditions. Validation of COPA during its development is realized partly by participation in the benchmark section of the international CRP-6 program led by the IAEA, which provides comprehensive benchmark problems and analysis results obtained from the CRP-6 member countries. Apart from the validation effort through the CRP-6, a validation of COPA was attempted by comparing its benchmark results with the visco-elastic solutions obtained from ABAQUS code calculations for the same CRP-6 TRISO coated particle benchmark problems involving creep, swelling, and pressure. The study presents the calculation results of the IAEA-CRP-6 benchmark cases 5 through 7 using the ABAQUS FE model for comparison with the COPA results.
Five radionuclide vadose zone models with different degrees of complexity (CHAIN, MULTIMED_DP, FECTUZ, HYDRUS, and CHAIN 2D) were selected for use in soil screening level (SSL) calculations. A benchmarking analysis between the models was conducted for a radionuclide (99Tc) rele...
International Nuclear Information System (INIS)
Svarny, J.; Mikolas, P.
1999-01-01
A simple model of the two-component ATW concept (graphite + molten salt system) was established. The main purpose of this benchmark is not only to provide the basic characteristics of the given ADS, but also to test codes in calculating the transmutation rate of the waste and to evaluate basic kinetics parameters and reactivity effects. (Authors)
Monte Carlo benchmark calculations of energy deposition by electron/photon showers up to 1 GeV
International Nuclear Information System (INIS)
Mehlhorn, T.A.; Halbleib, J.A.
1983-01-01
Over the past several years the TIGER series of coupled electron/photon Monte Carlo transport codes has been applied to a variety of problems involving nuclear and space radiations, electron accelerators, and radioactive sources. In particular, they have been used at Sandia to simulate the interaction of electron beams, generated by pulsed-power accelerators, with various target materials for weapons effect simulation, and electron beam fusion. These codes are based on the ETRAN system which was developed for an energy range from about 10 keV up to a few tens of MeV. In this paper we will discuss the modifications that were made to the TIGER series of codes in order to extend their applicability to energies of interest to the high energy physics community (up to 1 GeV). We report the results of a series of benchmark calculations of the energy deposition by high energy electron beams in various materials using the modified codes. These results are then compared with the published results of various experimental measurements and other computational models
MCNP benchmark analyses of critical experiments for space nuclear thermal propulsion
International Nuclear Information System (INIS)
Selcow, E.C.; Cerbone, R.J.; Ludewig, H.
1993-01-01
The particle-bed reactor (PBR) system is being developed for use in the Space Nuclear Thermal Propulsion (SNTP) Program. This reactor system is characterized by a highly heterogeneous, compact configuration with many streaming pathways. The neutronics analyses performed for this system must be able to accurately predict reactor criticality, kinetics parameters, material worths at various temperatures, feedback coefficients, and detailed fission power and heating distributions. The latter includes coupled axial, radial, and azimuthal profiles. These responses constitute critical inputs and interfaces with the thermal-hydraulics design and safety analyses of the system
Benchmarks of subcriticality in accelerator-driven system at Kyoto University Critical Assembly
Directory of Open Access Journals (Sweden)
Cheol Ho Pyeon
2017-09-01
Basic research on the accelerator-driven system is conducted by combining 235U-fueled and 232Th-loaded cores in the Kyoto University Critical Assembly with the pulsed neutron generator (14 MeV neutrons) and the proton beam accelerator (100 MeV protons) with a heavy metal target. The results of experimental subcriticality are presented over a wide range of subcriticality levels between near-critical and 10,000 pcm, as obtained by the pulsed neutron source method, the Feynman-α method, and the neutron source multiplication method.
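The Feynman-α method cited here extracts the prompt neutron decay constant α from the gate-width dependence of the variance-to-mean ratio, Y(T) = Y∞[1 - (1 - e^(-αT))/(αT)]. A fitting sketch on synthetic data (parameter values invented, not KUCA measurements):

```python
import math

def feynman_y(T, alpha, y_inf):
    """Feynman Y (variance-to-mean minus one) for gate width T (s)."""
    return y_inf * (1.0 - (1.0 - math.exp(-alpha * T)) / (alpha * T))

def fit_alpha(gates, ys, y_inf, lo=50.0, hi=500.0, steps=4500):
    """Crude grid-search least-squares fit of alpha (1/s); a real analysis
    would fit alpha and Y_inf jointly with a nonlinear solver."""
    best, best_sse = lo, float("inf")
    for i in range(steps):
        a = lo + (hi - lo) * i / (steps - 1)
        sse = sum((feynman_y(T, a, y_inf) - y) ** 2 for T, y in zip(gates, ys))
        if sse < best_sse:
            best, best_sse = a, sse
    return best

# Synthetic "measurement": alpha = 150 1/s, Y_inf = 2.5
gates = [0.001 * k for k in range(1, 30)]  # gate widths, 1-29 ms
ys = [feynman_y(T, 150.0, 2.5) for T in gates]
print(f"fitted alpha ~ {fit_alpha(gates, ys, 2.5):.1f} 1/s")
```

With α in hand, subcriticality in dollars follows from α = (β eff - ρ)/Λ once the kinetic parameters are known.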
Measurements in Los Alamos benchmark criticals and the central reactivity discrepancy
International Nuclear Information System (INIS)
Davey, W.G.; Hansen, G.E.; Koelling, J.J.; McLaughlin, T.P.
1978-01-01
Measurements in seven Los Alamos fast critical facilities are described; all are related to elucidating the causes of the central reactivity discrepancy in fast reactors. Specific capabilities of these specialized assemblies permit measurements well above delayed critical, and these confirm the validity of the delayed neutron data used for calibration; there is therefore no reactivity-scale error. Reactivity measurements in these homogeneous assemblies exhibit no discrepancy. It is concluded that nuclear data should not be adjusted to eliminate the discrepancy found in other, heterogeneous assemblies.
International Nuclear Information System (INIS)
Hossny, K.
2015-01-01
The purpose of this work is to validate MCNP5 libraries by simulating four detailed benchmark experiments and comparing the MCNP5 results for each library with the experimental results, and also with previously validated codes for the same experiments: MORET 4.A coupled with APOLLO2 (France), and MONK8 (UK). The reasons for the differences between libraries are also investigated, by specifying a different library for a specific part (clad, fuel, light water) and checking the deviation of the result from the previously calculated one (with all parts from the same library). The investigated benchmark experiments are single arrays of fuel rods that are water-moderated and water-reflected. Rods containing low-enriched (4.738 wt.% 235 U) uranium dioxide (UO 2 ) fuel were clad with aluminum alloy AGS. These experiments were subcritical approaches extrapolated to critical, with the multiplication factor reached being very close to 1.000 (within 0.1%); the subcritical approach parameter was the water level. The four studied cases differ from each other in pitch, number of fuel rods and, of course, critical height of water. The results show that although the ENDF/B-IV library lacks a light water treatment card, its results can be considered reliable, since the light water treatment does not differ significantly from one library to another, so it is not necessary to specify a light water treatment card. The main reason for differences between ENDF/B-V and ENDF/B-VI is the light water material, especially the hydrogen element. Specifying the library for uranium is necessary when using ENDF/B-IV. On the other hand, it is not necessary to specify the library of the cladding material, whatever library is used. The validated libraries are ENDF/B-IV, ENDF/B-V and ENDF/B-VI, with the MCNP suffixes 42C, 50C and 60C respectively. The presentation slides have been added to the article.
Energy Technology Data Exchange (ETDEWEB)
Gruppelaar, H. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Klippel, H.T. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Kloosterman, J.L. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Hoogenboom, J.E. [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Leege, P.F.A. de [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Verhagen, F.C.M. [Keuring van Electrotechnische Materialen NV, Arnhem (Netherlands); Bruggink, J.C. [Gemeenschappelijke Kernenergiecentrale Nederland N.V., Dodewaard (Netherlands)
1993-11-01
Benchmark results of the Dutch PINK working group on calculational benchmarks for single pin cell and multipin assemblies as defined by EPRI are presented and evaluated. First, a short update of the methods used by the various institutes involved is given, as well as an update of the status with respect to previously performed pin-cell calculations. Problems detected in previous pin-cell calculations are inspected more closely. A detailed discussion of the results of the multipin assembly calculations is given. The assembly consists of 9 pins in a multicell square lattice in which the central pin is filled differently, i.e. a Gd pin for the BWR assembly and a control rod/guide tube for the PWR assembly. The results for pin cells showed rather good overall agreement between the four participants, although BWR pins with high void fraction turned out to be difficult to calculate. With respect to burnup calculations, good overall agreement for the reactivity swing was obtained, provided that a fine time grid is used. (orig.)
International Nuclear Information System (INIS)
Gruppelaar, H.; Klippel, H.T.; Kloosterman, J.L.; Hoogenboom, J.E.; Bruggink, J.C.
1993-11-01
Benchmark results of the Dutch PINK working group on calculational benchmarks for single pin cell and multipin assemblies as defined by EPRI are presented and evaluated. First, a short update of the methods used by the various institutes involved is given, as well as an update of the status with respect to previously performed pin-cell calculations. Problems detected in previous pin-cell calculations are inspected more closely. A detailed discussion of the results of the multipin assembly calculations is given. The assembly consists of 9 pins in a multicell square lattice in which the central pin is filled differently, i.e. a Gd pin for the BWR assembly and a control rod/guide tube for the PWR assembly. The results for pin cells showed rather good overall agreement between the four participants, although BWR pins with high void fraction turned out to be difficult to calculate. With respect to burnup calculations, good overall agreement for the reactivity swing was obtained, provided that a fine time grid is used. (orig.)
Subcriticality calculations for the FFTF reverse approach to critical experiment
International Nuclear Information System (INIS)
Selby, D.L.; Flanagan, G.F.
1975-01-01
The reverse approach to critical (RAC) experiments were performed in the ZPR-IX critical facility at Argonne National Laboratory. One of the major objectives of this project is to determine the adequacy of the low-level flux monitor (LLFM) detectors for initial loading of the Fast Flux Test Facility (FFTF). 5 references
Energy Technology Data Exchange (ETDEWEB)
Kotsarev, Alexander; Lizorkin, Mikhail [National Research Centre 'Kurchatov Institute', Moscow (Russian Federation); Bencik, Marek; Hadek, Jan [UJV Rez, a.s., Rez (Czech Republic); Kozmenkov, Yaroslav; Kliem, Soeren [Helmholtz-Zentrum Dresden-Rossendorf (HZDR) e.V., Dresden (Germany)
2016-09-15
The 7th AER dynamic benchmark is a continuation of the efforts to validate the codes systematically for the estimation of the transient behavior of VVER type nuclear power plants. The main part of the benchmark is the simulation of the re-connection of an isolated circulation loop with low temperature in a VVER-440 plant. This benchmark was calculated by the National Research Centre 'Kurchatov Institute' (with the code ATHLET/BIPR-VVER), UJV Rez (with the code RELAP5-3D©) and HZDR (with the code DYN3D/ATHLET). The paper gives an overview of the behavior of the main thermal-hydraulic and neutron kinetic parameters in the provided solutions.
Start-up of a cold loop in a VVER-440, the 7th AER benchmark calculation with HEXTRAN-SMABRE-PORFLO
International Nuclear Information System (INIS)
Hovi, Ville; Taivassalo, Veikko; Haemaelaeinen, Anitta; Raety, Hanna; Syrjaelahti, Elina
2017-01-01
The 7th dynamic AER benchmark is the first in which three-dimensional thermal hydraulics codes are supposed to be applied. The aim is to get a more precise core inlet temperature profile than the sector temperatures typically available with system codes. The benchmark consists of a start-up of the sixth, isolated loop in a VVER-440 plant. The isolated loop initially contains cold water without boric acid, and the start-up leads to a somewhat asymmetrical core power increase due to feedbacks in the core. In this study, the 7th AER benchmark is calculated with the three-dimensional nodal reactor dynamics code HEXTRAN-SMABRE coupled with the porous computational fluid dynamics code PORFLO. These three codes are developed at VTT. A novel two-way coupled simulation of the 7th AER benchmark was performed successfully, demonstrating the feasibility and advantages of the new reactor analysis framework. The modelling issues for this benchmark are reported and some evaluation against the previously reported comparisons between the system codes is provided.
Energy Technology Data Exchange (ETDEWEB)
Hovi, Ville; Taivassalo, Veikko; Haemaelaeinen, Anitta; Raety, Hanna; Syrjaelahti, Elina [VTT Technical Research Centre of Finland Ltd, VTT (Finland)
2017-09-15
The 7th dynamic AER benchmark is the first in which three-dimensional thermal hydraulics codes are supposed to be applied. The aim is to get a more precise core inlet temperature profile than the sector temperatures typically available with system codes. The benchmark consists of a start-up of the sixth, isolated loop in a VVER-440 plant. The isolated loop initially contains cold water without boric acid, and the start-up leads to a somewhat asymmetrical core power increase due to feedbacks in the core. In this study, the 7th AER benchmark is calculated with the three-dimensional nodal reactor dynamics code HEXTRAN-SMABRE coupled with the porous computational fluid dynamics code PORFLO. These three codes are developed at VTT. A novel two-way coupled simulation of the 7th AER benchmark was performed successfully, demonstrating the feasibility and advantages of the new reactor analysis framework. The modelling issues for this benchmark are reported and some evaluation against the previously reported comparisons between the system codes is provided.
Benchmark critical experiments on low-enriched uranium oxide systems with H/U = 0.77
International Nuclear Information System (INIS)
Tuck, G.; Oh, I.
1979-08-01
Ten benchmark experiments were performed at the Critical Mass Laboratory at Rockwell International's Rocky Flats Plant, Golden, Colorado, for the US Nuclear Regulatory Commission. They provide accurate criticality data for low-enriched damp uranium oxide (U3O8) systems. The core studied consisted of 152 mm cubical aluminum cans containing an average of 15,129 g of low-enriched (4.46% 235U) uranium oxide compacted to a density of 4.68 g/cm3 and with an H/U atomic ratio of 0.77. One hundred twenty-five (125) of these cans were arranged in an approx. 770 mm cubical array. Since the oxide alone cannot be made critical in an array of this size, an enriched (approx. 93% 235U) metal or solution driver was used to achieve criticality. Measurements are reported for systems having the least practical reflection and for systems reflected by approx. 254-mm-thick concrete or plastic. Under the three reflection conditions, the mass of the uranium metal driver ranged from 29.87 kg to 33.54 kg for an oxide core of 1864.6 kg. For an oxide core of 1824.9 kg, the weight of the high-concentration (351.2 kg U/m3) solution driver varied from 14.07 kg to 16.14 kg, and the weight of the low-concentration (86.4 kg U/m3) solution driver from 12.4 kg to 14.0 kg.
Merger of Nuclear Data with Criticality Safety Calculations
Energy Technology Data Exchange (ETDEWEB)
Derrien, H.; Larson, N.M.; Leal, L.C.
1999-09-20
In this paper we report on current activities related to the merger of differential/integral data (especially in the resolved-resonance region) with nuclear criticality safety computations. Techniques are outlined for closer coupling of many processes (measurement, data reduction, differential-data analysis, integral-data analysis, generating multigroup cross sections, data-testing, criticality computations) which in the past have been treated independently.
Feasibility study on heterogeneous method in criticality calculations
International Nuclear Information System (INIS)
Prati, A.
1977-01-01
The criticality of finite heterogeneous assemblies is analysed by heterogeneous methods employing eigenfunction analysis. Moderation is treated by Fermi age theory. The system is analysed in two-dimensional rectangular coordinates. The criticality and the fluxes are determined for systems with small and large numbers of fuel rods. The convergence and the residual error of the modal analysis are discussed. (author)
Merger of Nuclear Data with Criticality Safety Calculations
International Nuclear Information System (INIS)
Derrien, H.; Larson, N.M.; Leal, L.C.
1999-01-01
In this paper we report on current activities related to the merger of differential/integral data (especially in the resolved-resonance region) with nuclear criticality safety computations. Techniques are outlined for closer coupling of many processes (measurement, data reduction, differential-data analysis, integral-data analysis, generating multigroup cross sections, data-testing, criticality computations) which in the past have been treated independently.
International Nuclear Information System (INIS)
Lopez Aldama, D.; Rodriguez Gual, R.
1998-01-01
The present work validates the models and programs used at the Nuclear Technology Center for calculating the critical position of control rods through analysis of measurements performed at the critical facility IPEN/MB-01. The lattice calculations were carried out with the WIMS/D4 code, and the global calculations used the diffusion code SNAP-3D.
International Nuclear Information System (INIS)
Nguyen Kien Cuong; Vo Doan Hai Dang; Luong Ba Vien; Le Vinh Vinh; Huynh Ton Nghiem; Nguyen Minh Tuan; Nguyen Manh Hung; Pham Quang Huy; Tran Quoc Duong; Tran Tri Vien
2015-01-01
Based on the idea of using the fuel of nuclear power plants such as PWR (AP-1000) and VVER-1000 with light water as moderator, design calculations for a critical assembly were performed to confirm the possibility of using these fuels. The designed critical assembly has a simple structure consisting of low-enriched fuel (1.6% to 5% U-235); water serves for cooling, biological protection and control. The critical assembly is operated at a nominal power of 100 W with a fuel pitch of about 2.0 cm. Applications of a critical assembly are quite abundant in basic research, education and training, with a low investment cost compared with a research reactor and easy operation, so a critical assembly can be used by universities or training centres for nuclear engineering training. The main objectives of the project are: design calculations in neutronics, thermal hydraulics and safety analysis for critical configuration benchmarks using low-enriched fuel; design of the mechanical and auxiliary systems of the critical assembly; and determination of technical specifications and estimation of the construction and installation cost of the critical assembly. The process of design, fabrication, installation and construction of the critical assembly will be considered in different implementation phases, and localization of the installation of the critical assembly is highly feasible. Cost estimation of the construction and installation of the critical assembly showed that the investment cost is much lower than for a research reactor and that most components and systems of the critical assembly can be localized with the current technical capabilities of the country. (author)
International Nuclear Information System (INIS)
Mark Dennis Usang; Mohd Hairie Rabir; Mohd Amin Sharifuldin Salleh; Mohamad Puad Abu
2012-01-01
MPI parallelism is implemented on a SUN workstation for running MCNPX and on the High Performance Computing Facility (HPC) for running MCNP5. 23 input decks obtained from the MCNP Criticality Validation Suite are utilized for the purpose of evaluating the amount of speed-up achievable by using the parallel capabilities of MPI. More importantly, we study the economics of using more processors and the types of problem where the performance gains are obvious. This is important to enable better practices of resource sharing, especially of the HPC facility's processing time. Future endeavours in this direction might even reveal clues for best MCNP5/MCNPX coding practices for optimum performance of MPI parallelism. (author)
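The economics of adding processors that the abstract refers to is commonly framed with Amdahl's law. The sketch below is illustrative only; the 5% serial fraction is a hypothetical figure, not a measurement from the MCNP5/MCNPX runs described.

```python
def amdahl_speedup(serial_fraction: float, n_procs: int) -> float:
    """Ideal speed-up when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# A job with 5% serial work (e.g. input processing and fission-bank
# synchronization between cycles) has a hard ceiling of 1/0.05 = 20x,
# so doubling the processor count soon stops paying for itself:
for n in (2, 8, 32, 128):
    print(f"{n:4d} procs -> speed-up {amdahl_speedup(0.05, n):5.2f}")
```

Plotting speed-up per processor against the processor count makes the economic break-even point explicit for a given facility's accounting of processing time.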
Swedish analysis of NEA/CSNI benchmark problems for criticality codes
International Nuclear Information System (INIS)
Mennerdahl, D.
1984-08-01
The Monte Carlo methods used by the members of the working group are adequate for calculations on large arrays. The differences in the results from different codes are probably caused by the differences in cross sections. The previous difficulties in obtaining good results for bare arrays of UNH solution are, at least to some extent, explained by incomplete information: the neutron reflection from walls, ceiling and floor had earlier been neglected. The inclusion of these in the input to the Monte Carlo codes appears to lead to adequate results. The basis for the IAEA rules for calculating allowable numbers of fissile packages mixed with other packages (fissile or not) during transport does not seem justified. This has been demonstrated for theoretical package designs. It has not been confirmed by the other members of the working group, and no conclusion was drawn by the group. It is very likely that a mix of real packages can be found that supports the mentioned theoretical demonstration. (author)
International Nuclear Information System (INIS)
Vasil'ev, A.P.; Krepkij, A.S.; Lukin, A.V.; Mikhal'kova, A.G.; Orlov, A.I.; Perezhogin, V.D.; Samojlova, L.Yu.; Sokolov, Yu.A.; Terekhin, V.A.; Chernukhin, Yu.I.
1991-01-01
Critical mass experiments were performed using assemblies which simulated one-dimensional lattice consisting of shielding containers with metal fissile materials. Calculations of the criticality of the above assemblies were carried out using the KLAN program with the BAS neutron constants. Errors in the calculations of the criticality for one-, two-, and three-dimensional lattices are estimated. 3 refs.; 1 tab
International Nuclear Information System (INIS)
Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.
1991-01-01
Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems
Calculating the critical temperature for Coleman-Weinberg GUTS
International Nuclear Information System (INIS)
Easther, R.; Moreau, W.
1992-01-01
We study the finite-temperature effective potential of the Higgs scalar in GUTs with Coleman-Weinberg symmetry breaking. The critical temperature is derived without employing a high-temperature approximation to the effective potential, and the limitations of such approximations are discussed. (author)
Reactor group constants and benchmark test
Energy Technology Data Exchange (ETDEWEB)
Takano, Hideki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2001-08-01
The evaluated nuclear data files such as JENDL, ENDF/B-VI and JEF-2 are validated by analyzing critical mock-up experiments for various type reactors and assessing applicability for nuclear characteristics such as criticality, reaction rates, reactivities, etc. This is called Benchmark Testing. In the nuclear calculations, the diffusion and transport codes use the group constant library which is generated by processing the nuclear data files. In this paper, the calculation methods of the reactor group constants and benchmark test are described. Finally, a new group constants scheme is proposed. (author)
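The group-constant generation step described above amounts to flux-weighted condensation of fine-group cross sections into broad groups. A minimal sketch, with made-up cross sections and weighting spectrum rather than data processed from JENDL or ENDF/B:

```python
def collapse(xs_fine, flux_fine, groups):
    """Flux-weighted collapse of fine-group cross sections into broad groups.

    groups -- list of (start, stop) index ranges over the fine-group structure.
    """
    broad = []
    for start, stop in groups:
        phi = sum(flux_fine[start:stop])                  # broad-group flux
        rate = sum(x * f for x, f in zip(xs_fine[start:stop],
                                         flux_fine[start:stop]))
        broad.append(rate / phi)                          # preserves reaction rate
    return broad

# Hypothetical 6-group absorption cross sections (barns) and spectrum weights:
xs_fine = [0.02, 0.05, 0.11, 0.4, 1.3, 3.8]
flux_fine = [1.0, 0.8, 0.6, 0.3, 0.2, 0.1]
print(collapse(xs_fine, flux_fine, [(0, 3), (3, 6)]))     # two broad groups
```

The collapse preserves reaction rates rather than cross sections themselves, which is why the choice of weighting spectrum matters and why the resulting libraries must be validated by the benchmark testing the abstract describes.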
Criticality calculation of the nuclear material warehouse of the ININ
International Nuclear Information System (INIS)
Garcia, T.; Angeles, A.; Flores C, J.
2013-10-01
In this work the nuclear safety conditions of the nuclear fuel warehouse of the TRIGA Mark III reactor of the Instituto Nacional de Investigaciones Nucleares (ININ) were determined, both under normal conditions and in an accident event. The warehouse contains standard fuel elements LEU - 8.5/20, a control rod with a follower of the standard fuel type LEU - 8.5/20, fuel elements LEU - 30/20, and the reactor fuel Sur-100. To check the subcritical state of the warehouse the effective multiplication factor (keff) was calculated. The keff calculation was carried out with the code MCNPX. (Author)
International Nuclear Information System (INIS)
Marusich, R.M. [Westinghouse Hanford]
1996-01-01
The purpose of this calculation note is to provide the basis for criticality consequences for the Tank Farm Safety Analysis Report (FSAR). A criticality scenario is developed, and details and a description of the analysis methods are provided
CRISTAL V1: Criticality package for burn up credit calculations
International Nuclear Information System (INIS)
Gomit, Jean-Michel; Cousinou, Patrick; Gantenbein, Francoise; Diop, Cheikh; Fernandez de Grado, Guy; Mijuin, Dominique; Grouiller, Jean-Paul; Marc, Andre; Toubon, Herve
2003-01-01
The first version of the CRISTAL package, created and validated as part of a joint project between IRSN, COGEMA and CEA, was delivered to users in November 1999. This fruitful cooperation between IRSN, COGEMA and CEA has been pursued until 2003 with the development and the validation of the package CRISTAL V1, whose main objectives are to improve the criticality safety studies including the Burn up Credit effect. (author)
International Nuclear Information System (INIS)
Panka, I.; Kereszturi, A.
2014-01-01
The assessment of the uncertainties of COBRA-IIIC thermal-hydraulic analyses of rod bundles is performed for a 5-by-5 bundle representing a PWR fuel assembly. In the first part of the paper the modeling uncertainties are evaluated in terms of the uncertainty of the turbulent mixing factor using the OECD NEA/NRC PSBT benchmark data. After that, the uncertainties of the COBRA calculations are discussed by performing Monte Carlo type statistical analyses taking into account the modeling uncertainties and other uncertainties prescribed in the OECD NEA UAM benchmark specification. Both steady-state and transient cases are investigated. The target quantities are the uncertainties of the void distribution, the moderator density, the moderator temperature and the DNBR. The results show that, beyond the uncertainties of the geometry and the boundary conditions, it is very important to take into account the modeling uncertainties in bundle or sub-channel thermal-hydraulic calculations.
Use of deterministic methods in survey calculations for criticality problems
International Nuclear Information System (INIS)
Hutton, J.L.; Phenix, J.; Course, A.F.
1991-01-01
A code package using deterministic methods for solving the Boltzmann Transport equation is the WIMS suite. This has been very successful for a range of situations. In particular it has been used with great success to analyse trends in reactivity with a range of changes in state. The WIMS suite of codes have a range of methods and are very flexible in the way they can be combined. A wide variety of situations can be modelled ranging through all the current Thermal Reactor variants to storage systems and items of chemical plant. These methods have recently been enhanced by the introduction of the CACTUS method. This is based on a characteristics technique for solving the Transport equation and has the advantage that complex geometrical situations can be treated. In this paper the basis of the method is outlined and examples of its use are illustrated. In parallel with these developments the validation for out of pile situations has been extended to include experiments with relevance to criticality situations. The paper will summarise this evidence and show how these results point to a partial re-adoption of deterministic methods for some areas of criticality. The paper also presents results to illustrate the use of WIMS in criticality situations and in particular show how it can complement codes such as MONK when used for surveying the reactivity effect due to changes in geometry or materials. (Author)
Monte Carlo criticality calculations accelerated by a growing neutron population
International Nuclear Information System (INIS)
Dufek, Jan; Tuttelberg, Kaur
2016-01-01
Highlights: • Efficiency is significantly improved when the population size grows over cycles. • The bias in the fission source is balanced against the other errors in the source. • The bias in the fission source decays over the cycles as the population grows. - Abstract: We propose a fission source convergence acceleration method for Monte Carlo criticality simulation. As the efficiency of Monte Carlo criticality simulations is sensitive to the selected neutron population size, the method attempts to achieve the acceleration via on-the-fly control of the neutron population size. The neutron population size is gradually increased over successive criticality cycles so that the fission source bias amounts to a specific fraction of the total error in the cumulative fission source. An optimal setting then gives a reasonably small neutron population size, allowing for efficient source iteration; at the same time the neutron population size is chosen large enough to ensure a sufficiently small source bias, such that it does not limit the accuracy of the simulation.
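The population-growth idea can be caricatured with a two-region fission-matrix model standing in for real transport; the matrix, the growth factor of 1.2 and the 1/sqrt(N) noise model below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
F = np.array([[0.6, 0.2],
              [0.3, 0.9]])            # toy fission matrix (stand-in for transport)
source = np.array([1.0, 0.0])         # deliberately poor initial fission source
n_pop, growth = 100, 1.2              # population size grows every cycle

for cycle in range(30):
    new = F @ source                  # one "transport" sweep
    k_eff = new.sum() / source.sum()  # cycle estimate of the multiplication factor
    new /= new.sum()
    # Statistical noise on the sampled fission source shrinks as ~1/sqrt(N):
    noisy = np.clip(new + rng.normal(0.0, 1.0 / np.sqrt(n_pop), size=2), 1e-9, None)
    source = noisy / noisy.sum()
    n_pop = int(n_pop * growth)

print(round(k_eff, 3))                # approaches the dominant eigenvalue (~1.04)
```

Growing the population keeps early cycles cheap while the source still carries a large iteration bias, and spends neutrons only once that bias has decayed toward the statistical error, which is the balance the paper formalizes.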
International Nuclear Information System (INIS)
Hadek, J.; Kral, P.; Macek, J.
2001-01-01
The paper gives a brief survey of the 6th three-dimensional AER dynamic benchmark calculation results received with the codes DYN3D and RELAP5-3D at NRI Rez. This benchmark was defined at the 10th AER Symposium. Its initiating event is a double-ended break in the steam line of steam generator No. 1 in a WWER-440/213 plant at the end of the first fuel cycle and in hot full power conditions. Stationary and burnup calculations as well as tuning of the initial state before the transient were performed with the code DYN3D. Transient calculations were made with the system code RELAP5-3D. The KASSETA library was used for the generation of the reactor core neutronic parameters. The detailed six-loop model of NPP Dukovany was adopted for the 6th AER dynamic benchmark purposes. The RELAP5-3D full core neutronic model was connected with a seven-coolant-channel thermal-hydraulic model of the core (Authors)
Criticality safety calculations for the nuclear waste disposal canisters
International Nuclear Information System (INIS)
Anttila, M.
1996-12-01
The criticality safety of the copper/iron canisters developed for the final disposal of the Finnish spent fuel has been studied with the MCNP4A code based on the Monte Carlo technique and with the fuel assembly burnup programs CASMO-HEX and CASMO-4. Two rather similar types of spent fuel disposal canisters have been studied. One canister type has been designed for hexagonal VVER-440 fuel assemblies used at the Loviisa nuclear power plant (IVO canister) and the other one for square BWR fuel bundles used at the Olkiluoto nuclear power plant (TVO canister). (10 refs.)
International Nuclear Information System (INIS)
J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori
2007-01-01
Since ICNC 2003, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) has continued to expand its efforts and broaden its scope. Criticality-alarm/shielding type benchmarks and fundamental physics measurements that are relevant to criticality safety applications are not only included in the scope of the project, but benchmark data are also included in the latest version of the handbook. A considerable number of improvements have been made to the searchable database, DICE, and the criticality-alarm/shielding benchmarks and fundamental physics measurements have been included in the database. There were 12 countries participating on the ICSBEP in 2003. That number has increased to 18 with recent contributions of data and/or resources from Brazil, Czech Republic, Poland, India, Canada, and China. South Africa, Germany, Argentina, and Australia have been invited to participate. Since ICNC 2003, the contents of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' have increased from 350 evaluations (28,000 pages) containing benchmark specifications for 3070 critical or subcritical configurations to 442 evaluations (over 38,000 pages) containing benchmark specifications for 3957 critical or subcritical configurations, 23 criticality-alarm-placement/shielding configurations with multiple dose points for each, and 20 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications in the 2006 Edition of the ICSBEP Handbook. Approximately 30 new evaluations and 250 additional configurations are expected to be added to the 2007 Edition of the Handbook. Since ICNC 2003, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. Beginning in 1999, the IRPhEP was conducted as a pilot activity by the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency
Energy Technology Data Exchange (ETDEWEB)
J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori
2007-05-01
Since ICNC 2003, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) has continued to expand its efforts and broaden its scope. Criticality-alarm/shielding type benchmarks and fundamental physics measurements that are relevant to criticality safety applications are not only included in the scope of the project, but benchmark data are also included in the latest version of the handbook. A considerable number of improvements have been made to the searchable database, DICE, and the criticality-alarm/shielding benchmarks and fundamental physics measurements have been included in the database. There were 12 countries participating on the ICSBEP in 2003. That number has increased to 18 with recent contributions of data and/or resources from Brazil, Czech Republic, Poland, India, Canada, and China. South Africa, Germany, Argentina, and Australia have been invited to participate. Since ICNC 2003, the contents of the "International Handbook of Evaluated Criticality Safety Benchmark Experiments" have increased from 350 evaluations (28,000 pages) containing benchmark specifications for 3070 critical or subcritical configurations to 442 evaluations (over 38,000 pages) containing benchmark specifications for 3957 critical or subcritical configurations, 23 criticality-alarm-placement/shielding configurations with multiple dose points for each, and 20 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications in the 2006 Edition of the ICSBEP Handbook. Approximately 30 new evaluations and 250 additional configurations are expected to be added to the 2007 Edition of the Handbook. Since ICNC 2003, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. Beginning in 1999, the IRPhEP was conducted as a pilot activity by the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency
Sylvetsky, Nitai; Kesharwani, Manoj K; Martin, Jan M L
2017-10-07
We have developed a new basis set family, denoted as aug-cc-pVnZ-F12 (or aVnZ-F12 for short), for explicitly correlated calculations. The sets included in this family were constructed by supplementing the corresponding cc-pVnZ-F12 sets with additional diffuse functions on the higher angular momenta (i.e., additional d-h functions on non-hydrogen atoms and p-g on hydrogen atoms), optimized for the MP2-F12 energy of the relevant atomic anions. The new basis sets have been benchmarked against electron affinities of the first- and second-row atoms, the W4-17 dataset of total atomization energies, the S66 dataset of noncovalent interactions, the Benchmark Energy and Geometry Data Base water cluster subset, and the WATER23 subset of the GMTKN24 and GMTKN30 benchmark suites. The aVnZ-F12 basis sets displayed excellent performance, not just for electron affinities but also for noncovalent interaction energies of neutral and anionic species. Appropriate CABSs (complementary auxiliary basis sets) were explored for the S66 noncovalent interaction benchmark: between similar-sized basis sets, CABSs were found to be more transferable than generally assumed.
A thermo-mechanical benchmark calculation of an hexagonal can in the BTI accident with ABAQUS code
International Nuclear Information System (INIS)
Zucchini, A.
1988-07-01
The thermo-mechanical behaviour of a hexagonal can in a benchmark problem (simulating the conditions of a BTI accident in a fuel assembly) is examined by means of the ABAQUS code: the effects of geometric nonlinearity are shown and the results are compared with those of a previous analysis performed with the INCA code. (author)
Quantum mechanical cluster calculations of critical scintillation processes
International Nuclear Information System (INIS)
Derenzo, Stephen E.; Klintenberg, Mattias K.; Weber, Marvin J.
2000-01-01
This paper describes the use of commercial quantum chemistry codes to simulate several critical scintillation processes. The crystal is modeled as a cluster of typically 50 atoms embedded in an array of typically 5,000 point charges designed to reproduce the electrostatic field of the infinite crystal. The Schrödinger equation is solved for the ground, ionized, and excited states of the system to determine the energy and electron wave function. Computational methods for the following critical processes are described: (1) the formation and diffusion of relaxed holes, (2) the formation of excitons, (3) the trapping of electrons and holes by activator atoms, (4) the excitation of activator atoms, and (5) thermal quenching. Examples include hole diffusion in CsI, the exciton in CsI, the excited state of CsI:Tl, the energy barrier for the diffusion of relaxed holes in CaF2 and PbF2, and prompt hole trapping by activator atoms in CaF2:Eu and CdS:Te leading to an ultra-fast (<50 ps) scintillation rise time.
Validation of KENO-based criticality calculations at Rocky Flats
International Nuclear Information System (INIS)
Felsher, P.D.; McKamy, J.N.; Monahan, S.P.
1992-01-01
In the absence of experimental data, it is necessary to rely on computer-based computational methods in evaluating the criticality condition of a nuclear system. The validity of the computer codes is established in a two-part procedure as outlined in ANSI/ANS 8.1. The first step, usually the responsibility of the code developer, involves verification that the algorithmic structure of the code is performing the intended mathematical operations correctly. The second step involves an assessment of the code's ability to realistically portray the governing physical processes in question. This is accomplished by determining the code's bias, or systematic error, through a comparison of computational results to accepted values obtained experimentally. In this paper, the authors discuss the validation process for KENO and the Hansen-Roach cross sections in use at EG and G Rocky Flats. The validation process at Rocky Flats consists of both global and local techniques. The global validation resulted in a maximum keff limit of 0.95 for the limiting-accident scenarios of a criticality evaluation
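The bias determination described here is, in outline, a statistical comparison of calculated k-effective values against benchmarks known to be critical. The sketch below uses invented numbers and a simplified two-sigma margin recipe, not the actual Rocky Flats data or procedure:

```python
import statistics

# Hypothetical calculated k_eff values for benchmark experiments that were
# measured to be exactly critical (k_eff = 1.0); not Rocky Flats results.
calculated = [0.9951, 1.0023, 0.9987, 1.0041, 0.9969, 0.9998, 1.0012]

bias = statistics.mean(calculated) - 1.0      # systematic error of code + data
sigma = statistics.stdev(calculated)          # scatter about the mean

# Simplified upper subcritical limit: start from critical, subtract an
# administrative margin and a two-sigma allowance, credit only a negative bias.
margin = 0.03
usl = 1.0 - margin + min(bias, 0.0) - 2.0 * sigma
print(f"bias = {bias:+.4f}, sigma = {sigma:.4f}, USL = {usl:.4f}")
```

Note that a positive bias is discarded rather than credited, mirroring the conservative convention in criticality safety validation.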
Critical evaluation of German regulatory specifications for calculating radiological exposure
Energy Technology Data Exchange (ETDEWEB)
Koenig, Claudia; Walther, Clemens [Hannover Univ. (Germany). Inst. of Radioecology; Smeddinck, Ulrich [Technische Univ. Braunschweig (Germany). Inst. of Law
2015-07-01
The assessment of the radiological exposure of the public is an issue at the interface between scientific findings, juridical standard setting and political decision making. The present work revisits the German regulatory specifications for calculating radiological exposure, such as the existing General Administrative Provision (AVV) calculation model for planning and monitoring nuclear facilities. We address the calculation models for the recent risk assessment regarding the final disposal of radioactive waste in Germany. To do so, a two-pronged approach is pursued. One part deals with radiological examinations of the groundwater-soil transfer path of radionuclides into the biosphere. Processes at the so-called geosphere-biosphere interface are examined, especially the migration of I-129 in the unsaturated zone. This is necessary, since the German General Administrative Provision does not yet consider radionuclide transport via groundwater from an underground disposal facility. Data on processes in the vadose zone are especially scarce. Therefore, using I-125 as a tracer, immobilization and mobilization of iodine are investigated in two reference soils from the German Federal Environment Agency. The second part of this study examines how scientific findings, as well as the measures and activities of stakeholders and concerned parties, influence the juridical standard setting which is necessary for risk management. Risk assessment, which is a scientific task, includes the identification and investigation of relevant sources of radiation, possible pathways to humans, and the maximum extent and duration of exposure based on dose-response functions. Risk characterization identifies the probability and severity of health effects. These findings have to be communicated to the authorities, who are responsible for risk management. Risk management includes, for instance, taking into account the acceptability of the risk, actions to reduce, mitigate, substitute or monitor the hazard, the setting of
Quality plan for criticality safety calculations at Rocky Flats
International Nuclear Information System (INIS)
Pecora, D.
1978-01-01
The text of the plan is given, and some of the guidelines followed in writing it are discussed to aid others who may be faced with the same task. The plan is divided into four sections. The Introduction describes the general functions and purpose of the calculational program. The second section, Activities and Responsibilities, lists specific tasks and their purposes and assigns responsibility for performance. The third section references relevant documentation (e.g., ANSI standards), and the final section describes quality plans for specific functions
Research on GPU acceleration for Monte Carlo criticality calculation
International Nuclear Information System (INIS)
Xu, Q.; Yu, G.; Wang, K.
2013-01-01
The Monte Carlo (MC) neutron transport method can be naturally parallelized on multi-core architectures because particles are independent of one another during the simulation. The GPU+CPU heterogeneous parallel mode has become an increasingly popular form of parallelism in scientific supercomputing. This work therefore focuses on a GPU acceleration method for Monte Carlo criticality simulation, and on the computational efficiency that GPUs can bring. The 'neutron transport step' is introduced to increase GPU thread occupancy. To test the sensitivity of the acceleration to MC code complexity, a 1D one-group code and a 3D multi-group general-purpose code were each ported to GPUs and the acceleration effects compared. The numerical experiments show a considerable acceleration effect from the 'neutron transport step' strategy. However, the performance comparison between the 1D code and the 3D code indicates poor scalability of MC codes on GPUs. (authors)
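As a minimal sketch of the kind of 1D one-group test problem the abstract mentions, the following analog Monte Carlo power iteration estimates k-effective for a bare slab. The cross sections, slab width and population size are illustrative values, not parameters from the paper.

```python
import math
import random

# Illustrative one-group data (1/cm) and slab width (cm) -- assumed values.
SIG_T, SIG_A, NU_SIG_F = 1.0, 0.4, 0.5   # total, absorption, nu*fission
WIDTH = 10.0
N_PER_GEN = 5000

def run_generation(sites):
    """Track one fission generation analogically; return the new fission sites."""
    new_sites = []
    for x in sites:
        while True:
            mu = random.uniform(-1.0, 1.0)                  # isotropic direction cosine
            x += mu * (-math.log(random.random()) / SIG_T)  # fly to next collision
            if x < 0.0 or x > WIDTH:                        # leaked out of the slab
                break
            if random.random() < SIG_A / SIG_T:             # absorbed at this collision
                # stochastic rounding of the expected yield nu*Sig_f / Sig_a
                for _ in range(int(NU_SIG_F / SIG_A + random.random())):
                    new_sites.append(x)
                break
            # otherwise scattered: keep flying with a fresh direction

    return new_sites

random.seed(1)
sites = [WIDTH / 2.0] * N_PER_GEN
for _ in range(20):
    new_sites = run_generation(sites)
    k = len(new_sites) / len(sites)                  # generation k estimate
    sites = random.choices(new_sites or sites, k=N_PER_GEN)  # renormalize population
print(f"k-effective estimate: {k:.3f}")
```

The per-particle inner loop is exactly the serial dependency-free work that maps onto one GPU thread per history; the 'neutron transport step' strategy reorganizes it so threads advance one flight at a time instead of one full history.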
Multi-level iteration optimization for diffusive critical calculation
International Nuclear Information System (INIS)
Li Yunzhao; Wu Hongchun; Cao Liangzhi; Zheng Youqi
2013-01-01
In nuclear reactor core neutron diffusion calculations there are usually at least three levels of iteration: the fission source iteration, the multi-group scattering source iteration and the within-group iteration. If the inner iterations are converged extremely tightly, unnecessary work is done; if they are converged insufficiently tightly, the convergence of the outer iteration may suffer. A common scheme suited to most problems is therefore proposed in this work to find the optimized settings automatically. The basic idea is to adapt the relative error tolerance of the inner iteration to the observed convergence rate of the outer iteration. Numerical results for a typical thermal neutron reactor core problem and a fast neutron reactor core problem demonstrate the effectiveness of this algorithm in the variational nodal method code NODAL with the Gauss-Seidel left-preconditioned multi-group GMRES algorithm. The multi-level iteration optimization scheme reduces the number of multi-group and within-group iterations by factors of about 1-2 and 5-21, respectively. (authors)
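The basic idea above can be sketched in a toy two-level eigenvalue solve: an outer power iteration whose inner linear solver is given a tolerance proportional to the current outer residual, so the inner solve is never far tighter than the outer iteration can exploit. The 2x2 "loss" and "fission" operators are illustrative stand-ins, not a real diffusion problem.

```python
import numpy as np

A = np.array([[4.0, -1.0], [-1.0, 3.0]])   # toy "loss" operator (assumed)
F = np.array([[1.5, 0.5], [0.5, 1.0]])     # toy "fission" operator (assumed)

def inner_solve(A, b, x0, tol):
    """Jacobi iteration to a relative tolerance chosen by the outer loop."""
    D = np.diag(np.diag(A))
    R = A - D
    x = x0.copy()
    while np.linalg.norm(A @ x - b) > tol * np.linalg.norm(b):
        x = np.linalg.solve(D, b - R @ x)
    return x

phi = np.ones(2)
k = 1.0
outer_res = 1.0
for _ in range(50):
    source = F @ phi / k
    # key step: the inner tolerance tracks the current outer residual
    inner_tol = max(0.1 * outer_res, 1e-10)
    phi_new = inner_solve(A, source, phi, inner_tol)
    k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)
    outer_res = abs(k_new - k) / k_new
    phi, k = phi_new / np.linalg.norm(phi_new), k_new
    if outer_res < 1e-8:
        break
print(f"dominant eigenvalue k = {k:.6f}")
```

Early outer iterations thus get cheap, loose inner solves; the inner tolerance tightens only as the fission source converges.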
International Nuclear Information System (INIS)
Yoshizawa, Nobuaki; Meigo, Shin-ichiro
2001-01-01
The neutron and proton cross sections of Fe-56 were evaluated up to 3 GeV, and a JENDL High Energy File for Fe-56 was developed for use in transport calculations. For neutrons, the high-energy data are merged with the JENDL-3.3 file. Integral benchmark calculations of thick-target neutron yields (TTY) for 113 MeV and 256 MeV proton bombardment of Fe targets were performed using the evaluated libraries, and the calculated TTY neutron spectra were compared with experimental data. At 113 MeV, the calculated TTY at 7.5 degrees underestimates the measurements for emitted neutron energies above 10 MeV. At 256 MeV, the calculated TTY agrees well with the experimental data except below 10 MeV. (author)
Critical Values for Lawshe's Content Validity Ratio: Revisiting the Original Methods of Calculation
Ayre, Colin; Scally, Andrew John
2014-01-01
The content validity ratio originally proposed by Lawshe is widely used to quantify content validity, and yet the methods used to calculate the original critical values were never reported. Methods for the original calculation of critical values are suggested, along with tables of exact binomial probabilities.
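The exact-binomial approach described in the abstract can be sketched directly: under the null hypothesis each of N panelists rates an item "essential" with probability 0.5, so the critical number of "essential" ratings n_e is the smallest count whose one-tailed binomial tail falls below alpha, and the critical CVR follows from Lawshe's formula CVR = (n_e - N/2)/(N/2). The alpha level and panel sizes below are illustrative choices.

```python
from math import comb

def critical_cvr(n_panelists, alpha=0.05):
    """Smallest CVR whose 'essential' count is significant at the given alpha."""
    for n_e in range(n_panelists + 1):
        # one-tailed P(X >= n_e) for X ~ Binomial(n_panelists, 0.5)
        tail = sum(comb(n_panelists, k) for k in range(n_e, n_panelists + 1))
        tail /= 2 ** n_panelists
        if tail < alpha:
            return (n_e - n_panelists / 2) / (n_panelists / 2)
    return None

for n in (5, 8, 10, 15, 20, 40):
    print(n, round(critical_cvr(n), 3))
```

Because the binomial is discrete, the critical CVR steps down irregularly with panel size rather than following the smooth curve sometimes read off Lawshe's original table.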
International Nuclear Information System (INIS)
Busch, R.D.
1995-01-01
Dr. Robert Busch of the Department of Chemical and Nuclear Engineering was the principal investigator on this project, with technical direction provided by the staff of the Nuclear Criticality Safety Group at Los Alamos. During the period of the contract he had a number of graduate and undergraduate students working on subtasks. The objective of this work was to develop information on uranium systems to enhance benchmarks for use in the verification of criticality safety computer models. During the first year of the project, most of the work focused on setting up the SUN SPARC-1 workstation and acquiring the literature describing the critical experiments. By August 1990 the workstation was operational with the then-current version of TWODANT loaded on the system; the MCNP version 4 tape was made available by Los Alamos late in 1990. Various documents were acquired which provide the initial descriptions of the critical experiments under consideration as benchmarks. The next four years were spent on various benchmark projects, and a number of publications and presentations were made on this material; these are briefly discussed in this report
International Nuclear Information System (INIS)
Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow
2013-01-01
Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD), which is used to derive a U.S. EPA reference dose. One critical question not often addressed in dose–response assessments is whether to model continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed and only summarized response data (i.e., mean ± standard deviation) are available, as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and the relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption influences BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within-dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed, because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more
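The "relative deviation" BMD definition discussed above can be illustrated with a toy example: the benchmark dose is the dose at which the fitted mean response departs from the control mean by a fixed fraction (here a 10% benchmark response). The exponential dose-response model and its parameters are invented for illustration, not taken from the paper.

```python
import math

def mean_response(dose, background=100.0, slope=0.02):
    """Toy fitted model: exponentially decreasing body weight with dose (assumed)."""
    return background * math.exp(-slope * dose)

def bmd_relative_deviation(bmr=0.10, lo=0.0, hi=1000.0, tol=1e-8):
    """Bisection for the dose where |f(d) - f(0)| / f(0) equals the BMR."""
    f0 = mean_response(0.0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if abs(mean_response(mid) - f0) / f0 < bmr:
            lo = mid     # deviation still below the BMR: move up
        else:
            hi = mid     # deviation already exceeds the BMR: move down
    return 0.5 * (lo + hi)

bmd = bmd_relative_deviation()
print(f"BMD (10% relative deviation): {bmd:.3f}")
```

The hybrid method differs in that it works with tail probabilities of the assumed response distribution rather than a fixed shift of the mean, which is why it is the more distribution-sensitive of the two approaches.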
Energy Technology Data Exchange (ETDEWEB)
Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)
2013-11-01
International Nuclear Information System (INIS)
Goluoglu, S.; Hopper, C.M.
2004-01-01
Using Oak Ridge National Laboratory's recently developed and applied sensitivity and uncertainty computational analysis techniques, this paper examines the relevance and importance of available and needed integral benchmarks and differential data evaluations for determining potential MOX production throughput in low-moderated MOX fuel blending operations. The relevance and importance of available or needed critical experiment benchmarks and data evaluations are presented in terms of computational biases, as influenced by computational and experimental sensitivities and uncertainties, for selected MOX production powder blending processes. Recent developments for estimating safe margins of subcriticality for assuring nuclear criticality safety for process approval are presented, along with the impact of these safe margins (due to computational biases and uncertainties) on potential MOX production throughput. (author)
Criticality Calculations for a Typical Nuclear Fuel Fabrication Plant with Low Enriched Uranium
International Nuclear Information System (INIS)
Elsayed, Hade; Nagy, Mohamed; Agamy, Said; Shaat, Mohmaed
2013-01-01
Operations with fissile materials such as U-235 introduce the risk of a criticality accident that may be lethal to nearby personnel and can lead to facility shutdown; the prevention of nuclear criticality accidents should therefore play a major role in the design of a nuclear facility. The objectives of criticality safety are to prevent a self-sustained nuclear chain reaction and to minimize the consequences if one occurs. Sixty criticality accidents have occurred worldwide, divided into two categories: 22 in process facilities and 38 during critical experiments or operations with research reactors. About 21 criticality accidents, including the Japan Nuclear Fuel Conversion Co. (JCO) accident, involved fuel solution or slurry, and only one involved metal fuel. In this study, nuclear criticality calculations were performed for a typical nuclear fuel fabrication plant producing fuel elements for nuclear research reactors with low-enriched uranium up to 20%. The calculations were performed for both normal and abnormal operating conditions. The effective multiplication factor (k_eff) during the nuclear fuel fabrication process (uranium hexafluoride to ammonium diuranate conversion) was determined. Several accident scenarios were postulated and their criticality evaluated. The computer code MCNP-4B, based on the Monte Carlo method, was used to calculate the neutron multiplication factor. The criticality calculations were performed for changes in moderator-to-fuel ratio, solution density and solute concentration, in order to prevent or mitigate criticality accidents during the nuclear fuel fabrication process. The calculation results are analyzed and discussed
International Nuclear Information System (INIS)
El Ouahdani, S.; Boukhal, H.; Erradi, L.; Chakir, E.; El Bardouni, T.; Hajjaji, O.; Boulaich, Y.; Benaalilou, K.; Kaddour, M.
2016-01-01
Highlights: • A set of KRITZ-2 experiments with UO2 and MOX LWR lattices, at room and elevated temperatures, has been analysed using the MCNP6.1 code with the JENDL-4 and ENDF/B-VII.1 libraries. • Detailed comparisons demonstrate good agreement between calculations and measurements. • To better investigate the influence of cross-section differences on the reactivity temperature coefficient, the multiplication factor is broken down into its components using a pin-cell model. - Abstract: A set of KRITZ-2 experiments on light-water-moderated lattices with uranium oxide and mixed-oxide fuel rods, at room and elevated temperatures, performed in the early 1970s, has been assessed. Using the MCNP6.1 code with the most recent cross-section libraries, JENDL-4 and ENDF/B-VII.1, the critical experiments KRITZ:2-1, KRITZ:2-13 and KRITZ:2-19, carried out in the Swedish KRITZ reactor, were analyzed. We used the ENDF/B-VII.1 data provided with the MCNP6.1.1 version in ACE format, with the Makxsf utility to handle the data at specific temperatures not available in the original MCNP6.1.1 data; the JENDL-4 evaluations were processed with NJOY99 (update 364) to the temperatures of interest. The detailed comparisons of the calculated and measured (Benchmark, 2005) effective multiplication factors and pin power distributions for the UO2- and MOX-fuelled cores presented in this work demonstrate good agreement between calculation and measurement. The maximum deviation of the calculation from the experimental k_eff is 0.58% (absolute value), obtained for KRITZ 2:1 at 248.5 °C using ENDF/B-VII.1 data. To better investigate the influence of cross-section differences on the reactivity temperature coefficient, we break the infinite multiplication factor down into its components using a pin-cell model. Using this simple model we evaluated the temperature effect on the infinite multiplication factor and on its components. We have
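The component breakdown described above can be sketched with the classical four-factor form of the infinite multiplication factor: because k_inf is a product of factors, a temperature effect splits additively in log space, factor by factor. The cold and hot factor values below are generic illustrative numbers, not results from the KRITZ benchmarks.

```python
import math

# Illustrative four-factor values at two temperatures (assumed, not KRITZ data).
factors_cold = {"eta": 1.84, "epsilon": 1.06, "p": 0.72, "f": 0.82}
factors_hot  = {"eta": 1.83, "epsilon": 1.06, "p": 0.69, "f": 0.83}

def k_inf(factors):
    """k_inf = eta * epsilon * p * f."""
    result = 1.0
    for value in factors.values():
        result *= value
    return result

k_cold, k_hot = k_inf(factors_cold), k_inf(factors_hot)
total = math.log(k_hot / k_cold)          # total temperature effect in log space
print(f"k_inf cold={k_cold:.4f} hot={k_hot:.4f}")
# the log-space split attributes the total change additively to each factor
for name in factors_cold:
    part = math.log(factors_hot[name] / factors_cold[name])
    print(f"  {name}: {100 * part / total:+.1f}% of the total change")
```

In a real pin-cell comparison of two libraries, the same decomposition pinpoints which factor (and hence which cross-section range) drives a discrepancy in the computed temperature coefficient.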
International Nuclear Information System (INIS)
Marck, Steven C. van der
2006-01-01
The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations, based on MCNP-4C3 continuous-energy Monte Carlo neutronics simulations together with nuclear data processed using the code NJOY. Three types of benchmarks were used: criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used, from all categories, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM) to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for Li-6, Li-7, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used, among them measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the thermal-spectrum, low-enriched uranium, compound fuel category (LEU-COMP-THERM), because this is typical of most current-day reactors and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8 and JEFF-3.0). The calculated results presented here show that this underprediction is no longer present for ENDF/B-VII.0. The average over 257
International Nuclear Information System (INIS)
Bencik, M.; Hadek, J.
2011-01-01
The paper gives a brief survey of the results of the seventh three-dimensional AER dynamic benchmark calculation obtained with the codes DYN3D and RELAP5-3D at the Nuclear Research Institute Rez. This benchmark, defined at the twentieth AER Symposium in Hanasaari (Finland), investigates transient behaviour in a WWER-440 nuclear power plant. The initiating event is the opening of the main isolation valve and re-connection of a loop with its main circulation pump in operation. The WWER-440 plant is at the end of its first fuel cycle, in hot full-power conditions. Stationary and burnup calculations were performed with the code DYN3D; the transient calculation was made with the system code RELAP5-3D. The two-group homogenized cross-section library HELGD05, created with the HELIOS code, was used for the generation of reactor core neutronic parameters. A detailed six-loop model of NPP Dukovany was adapted for the purposes of the seventh AER dynamic benchmark. The RELAP5-3D full-core neutronic model was coupled with 49 core thermal-hydraulic channels and 8 reflector channels connected to the three-dimensional model of the reactor vessel. Detailed nodalization of the reactor downcomer and the lower and upper plenum was used, and mixing in the lower and upper plenum was simulated. The first part of the paper contains a brief characterization of the RELAP5-3D system code and a short description of the NPP input deck and reactor core model; the second part shows the time dependence of important global and local parameters. (Authors)
International Nuclear Information System (INIS)
Neuber, Jens Christian; Tippl, Wolfgang; Hemptinne, Gwendoline de; Maes, Philippe; Ranta-aho, Anssu; Peneliau, Yannick; Jutier, Ludyvine; Tardy, Marcel; Reiche, Ingo; Kroeger, Helge; Nakata, Tetsuo; Armishaw, Malcom; Miller, Thomas M.
2015-01-01
a discussion of the spread of the k_eff results. Following this, the evaluation of the end effect is accomplished, starting with a discussion of the spread of the end effect results following from the k_eff results. Then the functional dependence of the end effect on the control rod insertion depth is described by introducing and deriving model functions. After that, the fission density results are evaluated by introducing and deriving fission density model functions describing the axial fission probability density for the different control rod insertion depths. Using these fission density model functions, the fission probability content of the top end region of the active zone of the fuel assemblies is estimated. Predictions of the qualitative behaviour of the neutron multiplication factor and the end effect as a function of the control rod insertion depth were already made in the Phase II-C report and are verified in this report, which demonstrates the practical relevance of the relations established in the Phase II-C report. In addition, it turns out that parameters describing the average burn-up transformation characteristics of these relations play important roles in comparisons of the end effect model functions derived for the two Phase II-E axial burn-up profiles. Thus, the Phase II-E benchmark exercise complements the Phase II-C and Phase II-D benchmark exercises. The applicability of the knowledge gained from the results of all three exercises to burn-up credit criticality safety design calculations is demonstrated
International Nuclear Information System (INIS)
Canali, U.; Gonano, G.; Nicks, R.
1978-01-01
Within the framework of the coordinated programme of sensitivity analysis studies, the reactor shielding benchmark calculation concerning the shield of a typical Pressurized Water Reactor, as proposed by I.K.E. (Stuttgart) and K.W.U. (Erlangen) has been performed. The direct and adjoint fluxes were calculated using ANISN, the cross-section sensitivity using SWANLAKE. The cross-section library used was EL4, 100 neutron + 19 gamma groups. The following quantities were of interest: neutron damage in the pressure vessel; dose rate outside the concrete shield. SWANLAKE was used to calculate the sensitivity of the above mentioned results to variations in the density of each nuclide present. The contributions of the different cross-section Legendre components are also given. Sensitivity profiles indicate the energy ranges in which a cross-section variation has a greater influence on the results. (author)
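To make concrete what a cross-section sensitivity means, the simplest possible shielding model serves as an illustration: uncollided transmission through a slab, R = exp(-Σt), for which the relative sensitivity S = (Σ/R) dR/dΣ is analytically -Σt. SWANLAKE computes the same kind of quantity for full transport solutions; the cross section and thickness below are assumed illustrative values.

```python
import math

def transmission(sigma, thickness):
    """Uncollided transmission through a slab: R = exp(-Sigma * t)."""
    return math.exp(-sigma * thickness)

def sensitivity(sigma, thickness, rel_step=1e-6):
    """Central-difference estimate of S = (Sigma/R) dR/dSigma."""
    h = sigma * rel_step
    dr = (transmission(sigma + h, thickness) -
          transmission(sigma - h, thickness)) / (2.0 * h)
    return sigma * dr / transmission(sigma, thickness)

sigma, t = 0.25, 20.0   # macroscopic cross section (1/cm), slab thickness (cm)
s = sensitivity(sigma, t)
print(f"sensitivity = {s:.3f} (analytic: {-sigma * t:.3f})")
```

A sensitivity of -5 means a 1% increase in the cross section depresses the transmitted response by about 5%; sensitivity profiles resolve this quantity by energy group, which is what identifies the energy ranges the abstract refers to.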
Bartlett, Philip L.; Stelbovics, Andris T.
2010-02-01
The propagating exterior complex scaling (PECS) method is extended to all four-body processes in electron impact on helium in an S-wave model. Total and energy-differential cross sections are presented with benchmark accuracy for double ionization, single ionization with excitation, and double excitation (to autoionizing states) for incident-electron energies from threshold to 500 eV. While the PECS three-body cross sections for this model given in the preceding article [Phys. Rev. A 81, 022715 (2010)] are in good agreement with other methods, there are considerable discrepancies for these four-body processes. With this model we demonstrate the suitability of the PECS method for the complete solution of the electron-helium system.
International Nuclear Information System (INIS)
Uddin, M.N.; Sarker, M.M.; Khan, M.J.H.; Islam, S.M.A.
2009-01-01
The aim of this paper is to present the validation of the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through analysis of the integral parameters of TRX and BAPL benchmark lattices of thermal reactors, in support of the neutronics analysis of the TRIGA Mark-II Research Reactor at AERE, Bangladesh. A 69-group cross-section library for the lattice code WIMS was generated from the basic evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 with the help of the nuclear data processing code NJOY99.0. Integral measurements on the thermal reactor lattices TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3 serve as standard benchmarks for testing nuclear data files and were selected for this analysis. The integral parameters of these lattices were calculated with the lattice transport code WIMSD-5B using the generated 69-group cross-section library, and compared to the measured values as well as to results from the Monte Carlo code MCNP. In most cases the integral parameters show good agreement with the experiment and the MCNP results. In addition, the group constants in WIMS format for the isotopes U-235 and U-238 from the two data files were compared using the WIMS library utility code WILLIE and found to be nearly identical, with very insignificant differences. This analysis therefore validates the evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 by benchmarking the integral parameters of the TRX and BAPL lattices, and can also support further neutronics analysis of the TRIGA Mark-II research reactor at AERE, Dhaka, Bangladesh.
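The comparison step in such validation studies is usually reported as calculation-to-experiment (C/E) ratios for each integral parameter. A minimal sketch of that bookkeeping follows; the parameter names and all numbers are hypothetical placeholders, not the TRX/BAPL values from the study.

```python
def c_over_e(calculated, measured, tolerance=0.02):
    """Form C/E ratios and flag deviations larger than the given tolerance."""
    report = {}
    for name, c in calculated.items():
        ratio = c / measured[name]
        report[name] = (ratio, abs(ratio - 1.0) <= tolerance)
    return report

# Hypothetical illustrative values only -- not data from the paper.
calculated = {"rho28": 1.330, "delta25": 0.0990, "k_eff": 1.0005}
measured   = {"rho28": 1.320, "delta25": 0.0987, "k_eff": 1.0000}
for name, (ratio, ok) in c_over_e(calculated, measured).items():
    print(f"{name}: C/E = {ratio:.4f} {'OK' if ok else 'CHECK'}")
```

A C/E close to unity for every integral parameter, across both libraries, is what the abstract summarizes as "good agreement with the experiment and MCNP results".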
International Nuclear Information System (INIS)
Fischer, G.A.
2010-01-01
The PCA Benchmark is analyzed using RAPTOR-M3G, a parallel SN radiation transport code. A variety of mesh structures, angular quadrature sets, cross section treatments, and reactor dosimetry cross sections are presented. The results show that RAPTOR-M3G is generally suitable for PWR neutron dosimetry applications. (authors)
SPENT NUCLEAR FUEL NUMBER DENSITIES FOR MULTI-PURPOSE CANISTER CRITICALITY CALCULATIONS
International Nuclear Information System (INIS)
D. A. Thomas
1996-01-01
The purpose of this analysis is to calculate the number densities for spent nuclear fuel (SNF) to be used in criticality evaluations of the Multi-Purpose Canister (MPC) waste packages. The objective is to provide material number density information that will be referenced by future MPC criticality design analyses, such as those supporting the Conceptual Design Report
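The standard calculation such an analysis rests on is N = ρ N_A / M, usually quoted in atoms/(barn·cm). As a sketch, the following computes atom densities for fresh UO2 from generic illustrative values of density and weight-fraction enrichment; the actual analysis develops spent-fuel compositions, which this does not attempt.

```python
# Avogadro's number scaled by 1e-24 so results come out in atoms/(barn*cm).
AVOGADRO = 0.6022141

def uo2_number_densities(density=10.4, enrichment=0.04):
    """Atom densities of U-235, U-238 and O in UO2 (atoms/barn-cm).

    density in g/cm^3, enrichment as U-235 weight fraction of uranium;
    both defaults are generic illustrative values.
    """
    m_u235, m_u238, m_o = 235.044, 238.051, 15.999
    # weight-fraction enrichment -> average uranium molar mass
    m_u = 1.0 / (enrichment / m_u235 + (1.0 - enrichment) / m_u238)
    m_uo2 = m_u + 2.0 * m_o
    n_uo2 = density * AVOGADRO / m_uo2          # UO2 molecules per barn-cm
    return {
        "U235": n_uo2 * enrichment * m_u / m_u235,
        "U238": n_uo2 * (1.0 - enrichment) * m_u / m_u238,
        "O": 2.0 * n_uo2,
    }

for nuclide, n in uo2_number_densities().items():
    print(f"{nuclide}: {n:.5e} atoms/(barn*cm)")
```

For SNF the same arithmetic is applied to each actinide and fission product in the depletion-code inventory rather than to a two-isotope uranium vector.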
ZZ ECN-BUBEBO, ECN-Petten Burnup Benchmark Book, Inventories, Afterheat
International Nuclear Information System (INIS)
Kloosterman, Jan Leen
1999-01-01
Description of program or function: Contains experimental benchmarks which can be used for the validation of burnup code systems and the accompanying data libraries. Although the benchmarks presented here are thoroughly described in the literature, it is in many cases not straightforward to retrieve unambiguously the correct input data and corresponding results from the benchmark descriptions. Furthermore, results which can easily be measured are sometimes difficult to calculate because of conversions that must be made. Emphasis has therefore been put on clarifying the input of the benchmarks and presenting the benchmark results in such a way that they can easily be calculated and compared. For more thorough descriptions of the benchmarks themselves, the literature referred to here should be consulted. This benchmark book is divided into 11 chapters/files containing the following in text and tabular form: chapter 1: Introduction; chapter 2: Burnup Credit Criticality Benchmark Phase 1-B; chapter 3: Yankee-Rowe Core V Fuel Inventory Study; chapter 4: H.B. Robinson Unit 2 Fuel Inventory Study; chapter 5: Turkey Point Unit 3 Fuel Inventory Study; chapter 6: Turkey Point Unit 3 Afterheat Power Study; chapter 7: Dickens Benchmark on Fission Product Energy Release of U-235; chapter 8: Dickens Benchmark on Fission Product Energy Release of Pu-239; chapter 9: Yarnell Benchmark on Decay Heat Measurements of U-233; chapter 10: Yarnell Benchmark on Decay Heat Measurements of U-235; chapter 11: Yarnell Benchmark on Decay Heat Measurements of Pu-239
International Nuclear Information System (INIS)
Khan, M.J.H.; Sarker, M.M.; Islam, S.M.A.
2013-01-01
Highlights: ► To validate the SRAC2006 code system for TRIGA neutronics calculations. ► TRX and BAPL lattices are treated as standard benchmarks for this purpose. ► Calculated results are compared with experiment as well as MCNP values. ► The study demonstrates good agreement with the experiment and the MCNP results. ► This analysis thus constitutes a validation study of the SRAC2006 code system. - Abstract: The goal of this study is to present a validation of the SRAC2006 code system, based on the evaluated nuclear data libraries ENDF/B-VII.0 and JENDL-3.3, for neutronics analysis of the TRIGA Mark-II Research Reactor at AERE, Bangladesh. The study is carried out through the analysis of integral parameters of TRX and BAPL benchmark lattices of thermal reactors. In integral measurements, the thermal reactor lattices TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3 are treated as standard benchmarks for validating the SRAC2006 code system as well as the nuclear data libraries. The integral parameters of these lattices are calculated using the collision probability transport code PIJ of the SRAC2006 code system at room temperature (20 °C) with the above libraries. The calculated integral parameters are compared to the measured values as well as to MCNP values based on the Chinese evaluated nuclear data library CENDL-3.0. In most cases the integral parameters demonstrate good agreement with the experiment and the MCNP results. In addition, the group constants in SRAC format for the TRX and BAPL lattices, in the fast and thermal energy ranges respectively, were compared between the above libraries and found to be nearly identical, with very insignificant differences. This analysis therefore constitutes a validation study of the SRAC2006 code system based on the evaluated nuclear data libraries JENDL-3.3 and ENDF/B-VII.0, and can also support further neutronics calculations
A proposal for the calculation of the critical buckling of a PWR or undermoderated lattice
International Nuclear Information System (INIS)
Benoist, P.
1989-01-01
A method improving the calculation of the critical buckling of a PWR or undermoderated lattice is proposed. This method takes the lattice heterogeneity into account in more detail than the existing ones, and relies on some approximations. The method requires a relatively small implementation effort and could be used in the calculation of fast reactors [fr]
Calculation and mapping of critical loads in Europe: Status report 1993
International Nuclear Information System (INIS)
Downing, R.J.; Hettelingh, J.P.; De Smet, P.A.M.
1993-01-01
The work of the RIVM Coordination Center for Effects (CCE) and the National Focal Centers (NFCs) for Mapping over the past two years is summarized. The primary task of the critical loads mapping program during this period was to compute and map critical loads of sulphur in Europe. Efforts were undertaken to enhance the scientific foundations and policy relevance of the critical loads program, and to foster consensus among producers and users of this information, by means of three workshops. The calculation methods applied are described, as well as the resulting critical loads maps based on the outcomes of the workshops. Chapter 2 contains the most recent maps (May 1993) of the critical load of acidity as well as the critical load of sulphur and critical sulphur deposition, which are derived from the critical load of acidity; it also contains maps of the sulphur deposition in Europe in 1980 and 1990, and the resulting exceedances. Chapter 3 describes the methods and equations used to derive the maps of critical loads and exceedances of acidity and sulphur, with emphasis on the advances in the calculation methods since the first European critical loads maps were produced in 1991. Chapter 4 presents the methods to be used to compute and map critical loads in the future. Chapter 5 gives an overview of the data inputs and of the data handling performed by the CCE to produce the current European maps of critical loads. Chapter 6 describes the results of an uncertainty analysis performed on the critical loads computation methodology to assess the reliability of the computed results and the importance of the various input variables. Chapter 7 provides conclusions and recommendations resulting from the critical loads mapping activities. Appendix 1 contains the workshop reports, with additional maps of critical loads and background variables in Appendix 2. 15 figs., 11 tabs., 156 refs
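The exceedance maps mentioned above rest on a per-grid-cell computation: exceedance is the deposition in excess of the critical load, floored at zero where the critical load is not exceeded. A minimal sketch follows; the grid values are invented for illustration, in units of an acidity load (e.g. eq ha⁻¹ yr⁻¹).

```python
def exceedance(deposition, critical_load):
    """Per-cell exceedance: max(deposition - critical load, 0)."""
    return [[max(d - c, 0.0) for d, c in zip(dep_row, cl_row)]
            for dep_row, cl_row in zip(deposition, critical_load)]

# Hypothetical 2x2 grids for illustration only.
deposition    = [[1200.0, 800.0], [450.0, 900.0]]
critical_load = [[1000.0, 850.0], [500.0, 400.0]]
print(exceedance(deposition, critical_load))
```

Mapping the 1980 and 1990 deposition fields against the same critical-load grid, as in chapter 2, is exactly this computation repeated for each year.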
Supplementary neutron flux calculations for the ORNL pool critical assembly pressure vessel facility
Energy Technology Data Exchange (ETDEWEB)
Maerker, R.E.; Maudlin, P.J.
1981-02-01
A three-dimensional Monte Carlo calculation was performed to estimate the neutron flux in the 8/7 configuration of the ORNL Pool Critical Assembly Pressure Vessel Facility. The calculational tool was the multigroup transport code MORSE operated in the adjoint mode. The MORSE flux results compared well with those from a previously adopted procedure for constructing a three-dimensional flux from one- and two-dimensional discrete ordinates calculations with the DOT-IV code. This study concluded that the use of these discrete ordinates constructions in previous calculations is sufficiently accurate and is not the source of the existing discrepancies between calculation and experiment.
Energy Technology Data Exchange (ETDEWEB)
Kaneko, Masashi [Japan Atomic Energy Agency, Nuclear Science and Engineering Center (Japan); Yasuhara, Hiroki; Miyashita, Sunao; Nakashima, Satoru, E-mail: snaka@hiroshima-u.ac.jp [Hiroshima University, Graduate School of Science (Japan)
2017-11-15
The present study applies all-electron relativistic DFT calculations with the Douglas-Kroll-Hess (DKH) Hamiltonian to ten sets each of Ru and Os compounds. We perform a benchmark investigation of three density functionals (BP86, B3LYP and B2PLYP) using the segmented all-electron relativistically contracted (SARC) basis set against experimental Mössbauer isomer shifts for the 99Ru and 189Os nuclides. Geometry optimizations at the BP86 level of theory locate the structures in local minima, and the contact density is calculated from the wavefunction obtained in a single-point calculation. All functionals show a good linear correlation with the experimental isomer shifts for both 99Ru and 189Os; the B3LYP functional in particular gives a stronger correlation than BP86 and B2PLYP. A comparison of contact densities between SARC and the well-tempered basis set (WTBS) indicated that numerical convergence of the contact density cannot be obtained, but that the reproducibility is less sensitive to the choice of basis set. We also estimate ΔR/R, an important nuclear constant, for the 99Ru and 189Os nuclides from the benchmark results. The sign of the calculated ΔR/R values is consistent with the predicted data for 99Ru and 189Os. At the B3LYP level with the SARC basis set we obtain ΔR/R values of 2.35×10⁻⁴ for 99Ru and −0.20×10⁻⁴ for 189Os (36.2 keV).
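The calibration underlying such a benchmark is an ordinary least-squares fit of calculated contact densities against measured isomer shifts; the fitted slope is proportional to ΔR/R via nuclear calibration constants. A minimal sketch in Python, with hypothetical data points that are not values from the paper:

```python
# Least-squares calibration of calculated contact densities against measured
# Moessbauer isomer shifts. The data points below are hypothetical placeholders.

def linear_fit(x, y):
    """Return (slope, intercept) of the ordinary least-squares line y = a*x + b."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

# hypothetical (contact density, isomer shift) pairs for a set of compounds
rho = [15000.0, 15002.5, 15005.0, 15010.0]   # contact densities, a.u.^-3
delta = [-0.30, -0.05, 0.20, 0.70]           # isomer shifts, mm/s

slope, intercept = linear_fit(rho, delta)
# slope is proportional to dR/R; its sign fixes the sign of dR/R
print(slope, intercept)
```

The quality of the linear correlation (what the abstract compares across functionals) is then just the residual scatter of the points about this fitted line.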
International Nuclear Information System (INIS)
Daavittila, Antti; Haemaelaeinen, Anitta; Kyrki-Rajamaeki, Riitta
2003-01-01
All three exercises of the Organization for Economic Cooperation and Development/Nuclear Regulatory Commission pressurized water reactor main steam line break (PWR MSLB) benchmark were calculated at VTT, the Technical Research Centre of Finland. For the first exercise, the plant simulation with point-kinetic neutronics, the thermal-hydraulics code SMABRE was used. The second exercise was calculated with the three-dimensional reactor dynamics code TRAB-3D, and the third exercise with the combination TRAB-3D/SMABRE. VTT has over ten years' experience of coupling neutronic and thermal-hydraulic codes, but this benchmark was the first time these two codes, both developed at VTT, were coupled together. The coupled code system is fast and efficient; the total computation time of the 100-s transient in the third exercise was 16 min on a modern UNIX workstation. The results of all the exercises are similar to those of the other participants. In order to demonstrate the effect of secondary circuit modeling on the results, three different cases were calculated. In case 1 there is no phase separation in the steam lines and no flow reversal in the aspirator. In case 2 flow reversal in the aspirator is allowed, but there is no phase separation in the steam lines. Finally, in case 3 the drift-flux model is used for phase separation in the steam lines, but aspirator flow reversal is not allowed. With these two modeling variations it is possible to cover a remarkably broad range of results: the maximum power level reached after the reactor trip varies from 534 to 904 MW, and the time at which the power maximum occurs varies over a range of close to 30 s. Compared to the total calculated transient time of 100 s, the effect of the secondary-side modeling is extremely important
International Nuclear Information System (INIS)
Primm, R.T. III; Mincey, J.F.
1982-01-01
The Department of Energy's Consolidated Fuel Reprocessing Program has as a goal the design of nuclear fuel reprocessing equipment. In order to validate computer codes used for criticality analyses in the design of such equipment, k-effectives have been calculated for several U + Pu nitrate solution critical experiments. As of January 1981, descriptions of 45 unpoisoned, U + Pu solution experiments were available in the open literature. Twelve of these experiments were performed with solutions which have physical characteristics typical of dissolved, light water reactor fuel. This paper contains a discussion of these twelve experiments, a review of the calculational procedure used to determine k-effectives, and the results of the calculations
International Nuclear Information System (INIS)
El-Osery, I.A.
1981-01-01
The purpose of this paper is to discuss the theories, techniques and computer codes that are frequently used in numerical reactor criticality and burnup calculations. It is part of an integrated nuclear reactor calculation scheme conducted by the Reactors Department, Inshas Nuclear Research Centre. The central task in numerical reactor criticality and burnup calculations is the determination of the neutron flux distribution, which can in principle be obtained as a solution of the Boltzmann transport equation. Numerical methods used for solving the transport equation are discussed, with emphasis on techniques based on multigroup diffusion theory: nodal, modal, and finite difference methods. The best-known computer codes utilizing these techniques are reviewed. Some of the main computer codes related to numerical reactor criticality and burnup calculations that have already been developed at the Reactors Department are also presented
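In the simplest one-group, one-dimensional case, the finite-difference diffusion techniques surveyed above reduce to a power iteration on a tridiagonal linear system. A toy sketch of that technique, with illustrative cross sections that are not taken from any code or benchmark named in these records:

```python
# Toy one-group, 1-D finite-difference diffusion solver: power iteration for
# the fundamental k-eigenvalue of a bare slab with zero-flux boundaries.
# Cross sections below are illustrative placeholders.

def solve_keff(n=50, width=100.0, D=1.0, sig_a=0.07, nu_sig_f=0.08, tol=1e-8):
    h = width / n
    lo = -D / h ** 2                # off-diagonal coefficient
    di = 2.0 * D / h ** 2 + sig_a   # diagonal coefficient
    flux = [1.0] * n
    k = 1.0
    for _ in range(2000):
        src = [nu_sig_f * f / k for f in flux]
        # Thomas algorithm for the constant-coefficient tridiagonal system
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = lo / di, src[0] / di
        for i in range(1, n):
            m = di - lo * cp[i - 1]
            cp[i] = lo / m
            dp[i] = (src[i] - lo * dp[i - 1]) / m
        new = [0.0] * n
        new[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            new[i] = dp[i] - cp[i] * new[i + 1]
        # update k by the ratio of successive fission-source integrals
        k_new = k * sum(new) / sum(flux)
        flux, k_prev, k = new, k, k_new
        if abs(k - k_prev) < tol:
            break
    return k

# result should approach nu_sig_f / (sig_a + D * B^2) with B = pi / width
print(round(solve_keff(), 4))
```

Production codes replace this scalar sweep with multigroup coupling and 2-D/3-D mesh geometry, but the eigenvalue iteration is the same idea.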
International Nuclear Information System (INIS)
Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Joergen; Nyholm, Tufve; Ahnesjoe, Anders; Karlsson, Mikael
2007-01-01
Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm 3 ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 ± 1.2% and 0.5 ± 1.1% (1 S.D.). The dose deviations between MUV and TPS slightly depended on the distance from the isocentre position. For individual intensity-modulated beams (total 367), an average deviation of 1.1 ± 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable. The time needed for an independent calculation compares very favourably with the net time for an experimental approach
Energy Technology Data Exchange (ETDEWEB)
Lin, L; Huang, S; Kang, M; Ainsley, C; Simone, C; McDonough, J; Solberg, T [University of Pennsylvania, Philadelphia, PA (United States)
2016-06-15
Purpose: Eclipse AcurosPT 13.7, the first commercial Monte Carlo pencil beam scanning (PBS) proton therapy treatment planning system (TPS), was experimentally validated for an IBA dedicated PBS nozzle in the CIRS 002LFC thoracic phantom. Methods: A two-stage procedure involving the use of TOPAS 1.3 simulations was performed. First, Geant4-based TOPAS simulations in this phantom were experimentally validated for single and multi-spot profiles at several depths for 100, 115, 150, 180, 210 and 225 MeV proton beams, using the combination of a Lynx scintillation detector and a MatriXXPT ionization chamber array. Second, benchmark calculations were performed with both AcurosPT and TOPAS in a phantom identical to the CIRS 002LFC, with the exception that the CIRS bone/mediastinum/lung tissues were replaced with similar tissues that are predefined in AcurosPT (a limitation of this system which necessitates the two-stage procedure). Results: Spot sigmas measured in tissue agreed within 0.2 mm with TOPAS simulations for all six energies, while AcurosPT was consistently found to have larger spot sigmas (by <0.7 mm) than TOPAS. Using absolute dose calibration by MatriXXPT, the agreement between profile measurements, TOPAS simulations, and the benchmark calculations exceeds 97%, except near the end of range, using 2 mm/2% gamma criteria. Overdosing and underdosing were observed at the low- and high-density sides of tissue interfaces, respectively, and these increased with increasing depth and decreasing energy; near the mediastinum/lung interface, the magnitude can exceed 5 mm/10%. Furthermore, we observed a >5% quenching effect in the conversion of Lynx measurements to dose. Conclusion: We recommend the use of an ionization chamber array in combination with the scintillation detector to measure absolute dose and relative PBS spot characteristics. We also recommend the use of an independent Monte Carlo calculation benchmark for the commissioning of a commercial TPS. Partially
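The 2 mm/2% gamma criterion used above combines a distance-to-agreement test and a dose-difference test into one metric. A brute-force 1-D sketch of the gamma index, assuming global normalisation to the reference maximum (clinical implementations differ in normalisation, interpolation, and search strategy):

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dta_mm=2.0, dose_pct=2.0):
    """Brute-force 1D gamma index of an evaluated dose profile against a
    reference profile (2 mm / 2% by default, globally normalised to the
    reference maximum). A sketch of the metric, not a clinical algorithm."""
    norm = max(ref_dose) * dose_pct / 100.0  # global dose criterion
    gammas = []
    for xe, de in zip(eval_pos, eval_dose):
        # gamma = minimum combined distance over all reference points
        g = min(math.hypot((xe - xr) / dta_mm, (de - dr) / norm)
                for xr, dr in zip(ref_pos, ref_dose))
        gammas.append(g)
    return gammas

# identical profiles pass trivially: every gamma value is 0.0
positions = [0.0, 1.0, 2.0, 3.0]
doses = [10.0, 20.0, 30.0, 20.0]
print(gamma_1d(positions, doses, positions, doses))
```

A point passes the criterion when its gamma value is at most 1; the pass rate over all points gives percentages like the 97% figure quoted above.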
Literature research concerning alternative methods for validation of criticality calculation systems
International Nuclear Information System (INIS)
Behler, Matthias
2016-05-01
Besides radiochemical analysis of irradiated fuel and critical experiments, which have become a well-established basis for the validation of depletion codes and criticality codes respectively, the results of oscillation experiments or the operating conditions of power reactors and research reactors can also provide useful information for the validation of the above-mentioned codes. Based on a literature review, the potential of utilizing oscillation experiment measurements for the validation of criticality codes is estimated. It is found that the reactivity measurements for actinides and fission products within the CERES program on the reactors DIMPLE (Winfrith, UK) and MINERVE (Cadarache, France) can be a valuable addition to the commonly used critical experiments for criticality code validation. However, while there are approaches, there is as yet no generally satisfactory solution for integrating the reactivity measurements into a quantitative bias determination for the neutron multiplication factor of typical application cases, including irradiated spent fuel outside reactor cores, calculated using common criticality codes.
Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program
International Nuclear Information System (INIS)
Bess, John D.; Montierth, Leland; Köberl, Oliver
2014-01-01
Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments, for Cores 4, 9, and 10. Dominant uncertainties in the experimental k eff for all core configurations come from uncertainties in the 235 U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of k eff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of k eff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ), except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues but within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments
Calculation code used in criticality analyses for the accident of JCO precipitation tank
International Nuclear Information System (INIS)
Miyoshi, Yoshinori
2000-01-01
In order to evaluate the nuclear characteristics of the criticality accident that occurred at the nuclear fuel processing facility in the Tokai Works of JCO Co., Ltd. (JCO) in Tokai-mura, Ibaraki prefecture, dynamic analyses to calculate the power transient after the onset of the accident, as well as criticality analyses to calculate the reactivity added to the precipitation tank, were carried out according to the accident scenario. For the criticality analyses, the continuous-energy Monte Carlo code MCNP was used to calculate the reactivity fed into the precipitation tank as accurately as possible, and the SRAC code system was used to calculate the temperature and void reactivity coefficients, the effective delayed neutron fraction β eff , and the prompt neutron generation time, the parameters governing the transient behavior of a criticality accident. For the dynamic analyses, it was necessary to consider the volume expansion of the solution fuel acting as the heat source and the radiolytic gas formed in the solution; the power behavior, number of fissions, and so forth in the initial burst were calculated using TRACE and the quasi-regular code, centered on AGNES-2, whose development is being promoted at JAERI. This paper outlines the calculation codes used for this evaluation of the nuclear characteristics and presents an analysis example. (G.K.)
Directory of Open Access Journals (Sweden)
Wonkyeong Kim
2015-01-01
A high-leakage core has been known to be a challenging problem, not only for the two-step homogenization approach but also for the direct heterogeneous approach. In this paper the DIMPLE S06 core, a small high-leakage core, is analyzed by a direct heterogeneous modeling approach and by a two-step homogenization modeling approach, using contemporary code systems developed for reactor core analysis. The focus of this work is a comprehensive comparative analysis of the conventional approaches and codes with a small core design, the DIMPLE S06 critical experiment. The calculation procedure for the two approaches is presented explicitly in this paper. The comparative analysis covers the key neutronics parameters: the multiplication factor and the assembly power distribution. Comparison of two-group homogenized cross sections from each lattice physics code shows that the generated transport cross sections differ significantly depending on the transport approximation used to treat the anisotropic scattering effect. The necessity of assembly discontinuity factors (ADFs) to correct the discontinuity at assembly interfaces is clearly demonstrated by the flux distributions and the results of the two-step approach. Finally, the two approaches give consistent results for all codes, while comparison with the reference generated by MCNP shows significant error except for the other Monte Carlo code, SERPENT2.
Supplementary neutron-flux calculations for the ORNL Pool Critical Assembly Pressure Vessel Facility
International Nuclear Information System (INIS)
Maudlin, P.J.; Maerker, R.E.
1982-01-01
A three-dimensional Monte Carlo calculation using the MORSE code was performed to validate a procedure previously adopted in the ORNL discrete ordinates analysis of measurements made in the ORNL Pool Critical Assembly Pressure Vessel Facility. The results of these flux calculations agree, within statistical uncertainties of about 5%, with those obtained from a discrete ordinates analysis employing the same procedure. This study therefore concludes that the procedure for combining several one- and two-dimensional discrete ordinates calculations into a three-dimensional flux is sufficiently accurate and is not the source of the existing discrepancies observed between calculations and measurements in this facility
International Nuclear Information System (INIS)
Woon, D.E.; Dunning, T.H. Jr.
1994-01-01
Benchmark calculations employing the correlation consistent basis sets of Dunning and co-workers are reported for the following diatomic species: Al 2 , Si 2 , P 2 , S 2 , Cl 2 , SiS, PS, PN, PO, and SO. Internally contracted multireference configuration interaction (CMRCI) calculations (correlating valence electrons only) have been performed for each species. For Cl 2 , P 2 , and PN, calculations have also been carried out using Møller-Plesset perturbation theory (MP2, MP3, MP4) and the singles and doubles coupled-cluster method with and without perturbative triples [CCSD, CCSD(T)]. Spectroscopic constants and dissociation energies are reported for the ground state of each species. In addition, the low-lying excited states of Al 2 and Si 2 have been investigated. Estimated complete basis set (CBS) limits for the dissociation energies, D e , and other spectroscopic constants are obtained from simple exponential extrapolations of the computed quantities. At the CBS limit the root-mean-square (rms) error in D e for the CMRCI calculations, the intrinsic error, on the ten species considered here is 3.9 kcal/mol; for r e the rms intrinsic error is 0.009 Å, and for ω e it is 5.1 cm -1
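The exponential CBS extrapolation mentioned above has a closed-form solution when results at three consecutive cardinal numbers are available. A sketch, assuming the common form E(n) = E_CBS + B·exp(-C·n) fitted at n = 2, 3, 4 (DZ, TZ, QZ); the energies used below are hypothetical, not values from the paper:

```python
def cbs_limit(e2, e3, e4):
    """Three-point exponential extrapolation E(n) = E_CBS + B*exp(-C*n),
    solved in closed form from values at cardinal numbers n = 2, 3, 4.
    With d1 = e2 - e3 and d2 = e3 - e4 (a geometric sequence for an
    exponential), the limit is e3 - d1*d2/(d1 - d2)."""
    d1 = e2 - e3
    d2 = e3 - e4
    return e3 - d1 * d2 / (d1 - d2)

# hypothetical dissociation energies (kcal/mol) at cc-pVDZ/TZ/QZ
print(cbs_limit(110.0, 116.0, 118.0))  # → 119.0
```

The formula recovers the exact limit whenever the three values really do follow a single exponential; deviations from that form are one source of the "intrinsic error" quoted in the abstract.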
Jansky, Bohumil; Rejchrt, Jiri; Novak, Evzen; Losa, Evzen; Blokhin, Anatoly I.; Mitenkova, Elena
2017-09-01
The leakage neutron spectra measurements have been done on benchmark spherical assemblies: iron spheres with diameters of 20, 30, 50 and 100 cm. The Cf-252 neutron source was placed at the centre of each iron sphere. The proton recoil method was used for the neutron spectra measurements, using spherical hydrogen proportional counters with a diameter of 4 cm and pressures of 400 and 1000 kPa. The neutron energy range of the spectrometer is from 0.1 to 1.3 MeV. This energy interval represents about 85% of all leakage neutrons from the Fe sphere of diameter 50 cm and about 74% for the Fe sphere of diameter 100 cm. The corresponding MCNP neutron spectra calculations, based on the data libraries CIELO, JEFF-3.2 and ENDF/B-VII.1, were done. Two calculations were done with the CIELO library: the first used data for all Fe isotopes from CIELO, and the second (CIELO-56) used only Fe-56 data from CIELO, with data for the other Fe isotopes taken from ENDF/B-VII.1. The energy structures used for calculations and measurements were 40 gpd (groups per decade) and 200 gpd; the 200 gpd structure represents a lethargy step of about 1%. This relatively fine energy structure makes it possible to analyze the Fe resonance neutron energy structure. The evaluated cross section data for Fe were validated by comparisons between the calculated and experimental spectra.
Application of the annular dispersed flow model to two-phase critical flow calculation
International Nuclear Information System (INIS)
Ivandaev, A.I.; Nigmatulin, B.I.
1977-01-01
The application of the annular dispersed flow model with an effective monodisperse core to the calculation of maximum (critical) flow rates of vapour-liquid mixtures through long pipes is discussed. The effect of the dominant parameters, such as evaporation intensity, the diameter of drops entrained from the film surface and the initial drop diameter at the pipe inlet, on the formation of the critical condition at the outlet has been investigated, and the corresponding model constants have been determined. The calculated and experimental values of critical rates and pressure profiles along the channel are in satisfactory agreement over the studied range of parameters. The observed disagreement between the calculated and experimental values of critical pressures and vapour contents can be attributed to the limited accuracy of the experimental techniques
Experimental study and technique for calculation of critical heat fluxes in helium boiling in tubes
International Nuclear Information System (INIS)
Arkhipov, V.V.; Kvasnyuk, S.V.; Deev, V.I.; Andreev, V.K.
1979-01-01
The effect of regime parameters on critical heat loads for helium boiling in a vertical tube is studied for mass rates from 80 kg/(m 2 ·s) and pressures of 100 ≤ p ≤ 200 kPa, in the vapor content range corresponding to the heat exchange crisis of the first kind. A method for calculating critical heat fluxes which describes the experimental data with an error of less than ±15% is proposed. The critical heat loads in helium boiling in tubes decrease with increasing pressure and vapor content over the investigated ranges of the regime parameters. Both positive and negative effects of the mass rate on the critical heat flux are observed. The proposed calculation method satisfactorily describes the experimental data
International Nuclear Information System (INIS)
Hoffman, E.L.; Ammerman, D.J.
1995-01-01
A series of tests investigating dynamic pulse buckling of a cylindrical shell under axial impact is compared to several 2D and 3D finite element simulations of the event. The purpose of the work is to investigate the performance of various analysis codes and element types on a problem which is applicable to radioactive material transport packages, and ultimately to develop a benchmark problem to qualify finite element analysis codes for the transport package design industry. During the pulse buckling tests, a buckle formed at each end of the cylinder, and one of the two buckles became unstable and collapsed. Numerical simulations of the test were performed using PRONTO, a Sandia-developed transient dynamics analysis code, and ABAQUS/Explicit with both shell and continuum elements. The calculations are compared to the tests with respect to deformed shape and impact load history
Regional Competitive Intelligence: Benchmarking and Policymaking
Huggins , Robert
2010-01-01
Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...
Calculation of the fissile mass of a graphite moderated critical assembly using 93% enriched uranium
International Nuclear Information System (INIS)
Correa, F.; Marzo, M.A.S.; Collussi, I.; Ferreira, A.C.A.
1976-01-01
The critical mass of uranium has been calculated for a graphite-moderated assembly fueled with 93% enriched uranium, to be mounted on the Instituto de Energia Atomica split-table Zero Power Reactor. The core composition was optimized to permit the maximum number of configurations to be studied. Analysis of three core compositions shows that 8 kg of uranium enriched to 93% U-235 (by weight) and 100 kg of thorium would be sufficient for the criticality experiments
Nuclear criticality safety calculations for a K-25 site vacuum cleaner
International Nuclear Information System (INIS)
Shor, J.T.; Haire, M.J.
1997-02-01
A modified Nilfisk model GSJ dry vacuum cleaner is used throughout the K-25 Site to collect dry forms of highly enriched uranium (HEU). When vacuuming, solids are collected in a cyclone-type separator vacuum cleaner body. Calculations were done with the SCALE (KENO V.a) computer code to establish conditions at which a nuclear criticality event might occur if the vacuum cleaner were filled with fissile solution. Conditions evaluated included full (12-in. water) reflection and nominal (1-in. water) reflection, and full (100%) and 20% 235 U enrichment. Validation analyses of SCALE/KENO and the SCALE 27-group cross sections for nuclear criticality safety applications indicate that a calculated k eff + 2σ ≥ 0.9605 is considered unsafe and may be critical. Critical conditions were calculated to be 70 g U/L for 100% 235 U and full 12-in. water reflection. This corresponds to a minimum critical mass of approximately 1,400 g 235 U for the approximately 20.0-L volume of the vacuum cleaner. The actual volume of the vacuum cleaner is smaller than the modeled volume because some internal materials of construction were assumed to be fissile solution; the model thus overestimates, for conservatism, the fissile solution occupancy. At nominal reflection conditions, the critical concentration in a vacuum cleaner full of UO 2 F 2 solution was calculated to be 100 g 235 U/L, or a 2,000 g mass of 100% 235 U. At 20% 235 U for the 20.0-L volume of the vacuum cleaner. At 15% 235 U enrichment and full reflection, critical conditions were not reached at any possible concentration of uranium as a uranyl fluoride solution. At 17.5% 235 U enrichment, criticality was reached at approximately 1,300 g U/L, which is beyond saturation at 25 °C
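The quoted minimum critical masses follow directly from the calculated critical concentrations and the modeled 20.0-L solution volume. A check of that arithmetic, using only the values stated in the abstract:

```python
def critical_mass_g(conc_g_per_L, volume_L):
    """Fissile mass at the calculated critical concentration, assuming the
    modeled volume is uniformly filled with solution."""
    return conc_g_per_L * volume_L

# values quoted in the abstract above
assert critical_mass_g(70.0, 20.0) == 1400.0   # full reflection, 100% 235U
assert critical_mass_g(100.0, 20.0) == 2000.0  # nominal reflection
```

The concentration limits themselves, of course, come from the KENO V.a transport calculations, not from this mass balance.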
MCNP simulation of the TRIGA Mark II benchmark experiment
International Nuclear Information System (INIS)
Jeraj, R.; Glumac, B.; Maucec, M.
1996-01-01
The complete 3D MCNP model of the TRIGA Mark II reactor is presented. It enables precise calculations of some quantities of interest in a steady-state mode of operation. Calculational results are compared to the experimental results gathered during the reactor reconstruction in 1992. Since the operating conditions were well defined at that time, the experimental results can be used as a benchmark. It may be noted that this is one of very few high-enrichment benchmarks available. In our simulations the experimental conditions were modeled in detail: the fuel elements and control rods were precisely modeled, as were the entire core configuration and the vicinity of the core. ENDF/B-VI and ENDF/B-V libraries were used. Partial results of the benchmark calculations are presented; excellent agreement of core criticality, excess reactivity and control rod worths can be observed. (author)
Calculated K-effectives using ENDF/B-V data for U + Pu solution critical experiments
International Nuclear Information System (INIS)
Primm, R.T. III; Mincey, J.F.
1981-01-01
Effective multiplication factors for 12 critical experiments have been calculated using multigroup cross sections derived from the ENDF/B-V library. All 12 experiments contained mixed plutonium and uranium nitrate solutions. The range of hydrogen-to-fissile plutonium atom ratios spanned by these experiments was 200 to 2200. A comparison with K-effectives calculated with ENDF/B-IV data is presented
Calculation of the ingestion critical dose rate for the Goiania radioactive waste repository
International Nuclear Information System (INIS)
Passos, E.M. dos; Martin Alves, A.S. De
1994-01-01
The calculated critical distance for the ingestion dose rate due to a hypothetical Cs-137 release from the Abadia de Goias repository is presented. The work is based on the repository-aquifer-well-food chain pathway. The calculations were based upon analytical models for the migration of radioisotopes through the aquifer and for their transfer from well water to food. (author)
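At its simplest, a pathway model of this kind reduces to plug-flow transport with radioactive decay during transit through the aquifer. A sketch with hypothetical parameters (the study's actual analytical migration model, with dispersion and food-chain transfer factors, is more detailed):

```python
import math

CS137_HALF_LIFE_Y = 30.07
DECAY_CONST = math.log(2) / CS137_HALF_LIFE_Y  # per year

def fraction_surviving_transit(distance_m, pore_velocity_m_per_y, retardation=1.0):
    """Fraction of released Cs-137 activity surviving plug-flow transit from
    repository to well, with sorption folded into a retardation factor.
    All parameter values used below are hypothetical, not from the study."""
    travel_time_y = retardation * distance_m / pore_velocity_m_per_y
    return math.exp(-DECAY_CONST * travel_time_y)

# a well one half-life of travel time away receives half the activity
print(fraction_surviving_transit(30.07, 1.0))  # ≈ 0.5

# strong sorption (retardation 50) over 100 m at 10 m/y: ~16.6 half-lives
print(fraction_surviving_transit(100.0, 10.0, retardation=50.0))
```

The "critical distance" is then the distance at which the surviving concentration, converted through the well-water-to-food transfer factors, still produces the limiting ingestion dose rate.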
Calculation of criticality of the AP600 reactor with KENO V.a code
Energy Technology Data Exchange (ETDEWEB)
Krumbein, A; Caner, M; Shapira, M [Israel Atomic Energy Commission, Yavne (Israel). Soreq Nuclear Research Center
1996-12-01
The Westinghouse AP600 PWR has been modeled using the KENO V.a three dimensional Monte Carlo criticality program of the SCALE-PC code system. These calculations and the use of a Monte Carlo neutron transport code such as KENO will provide us with an independent check on our WIMS/CITATION calculations for the AP600 as well as for other reactors. It will also enable us to model more complicated geometries. (authors).
Development of common user data model for APOLLO3 and MARBLE and application to benchmark problems
International Nuclear Information System (INIS)
Yokoyama, Kenji
2009-07-01
A Common User Data Model, CUDM, has been developed for the purpose of benchmark calculations between the APOLLO3 and MARBLE code systems. The current version of CUDM was designed for core calculation benchmark problems with three-dimensional Cartesian (3-D XYZ) geometry. CUDM is able to manage all input/output data such as 3-D XYZ geometry, effective macroscopic cross sections, the effective multiplication factor and the neutron flux. In addition, visualization tools for geometry and neutron flux are included. CUDM was designed with object-oriented techniques and implemented in the Python programming language. Based on CUDM, a prototype system for benchmark calculations, CUDM-benchmark, was also developed. CUDM-benchmark supports input/output data conversion for the IDT solver in APOLLO3, and the TRITAC and SNT solvers in MARBLE. In order to evaluate the pertinence of CUDM, CUDM-benchmark was applied to benchmark problems proposed by T. Takeda, G. Chiba and I. Zmijarevic. It was verified that CUDM-benchmark successfully reproduced the results calculated with reference input data files, and provided consistent results among all the solvers using one common input data set defined by CUDM. In addition, a detailed benchmark calculation for the Chiba benchmark was performed using CUDM-benchmark. The Chiba benchmark is a neutron transport benchmark problem for a fast critical assembly without homogenization. This benchmark problem consists of 4 core configurations which have different sodium void regions, and each core configuration is defined by more than 5,000 fuel/material cells. In this application, it was found that the results of the IDT and SNT solvers agreed well with the reference results from a Monte Carlo code. In addition, model effects such as the quadrature set effect, the Sn order effect and the mesh size effect were systematically evaluated and summarized in this report. (author)
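A common user data model of this kind is essentially one container object that holds everything the benchmark exchange needs: geometry, cross sections, k-eff, and flux. A minimal Python sketch in the spirit of CUDM; the field names and flattened indexing convention below are illustrative assumptions, not the actual CUDM API:

```python
from dataclasses import dataclass, field

@dataclass
class CoreData:
    """Minimal common-data-model sketch for a 3-D XYZ core benchmark:
    every solver reads and writes this one structure."""
    nx: int
    ny: int
    nz: int
    mesh_cm: tuple        # (dx, dy, dz) mesh widths in cm
    material_map: list    # nx*ny*nz material indices, flattened x-fastest
    sigma_t: dict         # material index -> macroscopic total XS per group
    keff: float = 0.0
    flux: list = field(default_factory=list)  # per-cell, per-group values

    def cell_material(self, i, j, k):
        # flattened (x fastest, then y, then z) indexing convention
        return self.material_map[i + self.nx * (j + self.ny * k)]

# a 2x2x1 toy core with four distinct materials
core = CoreData(nx=2, ny=2, nz=1, mesh_cm=(1.0, 1.0, 1.0),
                material_map=[0, 1, 2, 3], sigma_t={0: [0.5]})
print(core.cell_material(1, 0, 0))  # → 1
```

Converters to and from each solver's native input format (IDT, TRITAC, SNT in the abstract) then only need to target this single structure, which is what makes cross-code comparison on one common input possible.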
Experimental and computational benchmark tests
International Nuclear Information System (INIS)
Gilliam, D.M.; Briesmeister, J.F.
1994-01-01
A program involving principally NIST, LANL, and ORNL has been in progress for about four years to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252 Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the others. This talk compares experimental and calculated values for the fission rates of four nuclides - 235 U, 239 Pu, 238 U, and 237 Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a "light water" S(α,β) scattering kernel
International Nuclear Information System (INIS)
Bitter, M.; Gu, M.F.; Vainshtein, L.A.; Beiersdorfer, P.; Bertschinger, G.; Marchuk, O.; Bell, R.; LeBlanc, B.; Hill, K.W.; Johnson, D.; Roquemore, L.
2003-01-01
Dielectronic satellite spectra of helium-like argon, recorded with a high-resolution X-ray crystal spectrometer at the National Spherical Torus Experiment, were found to be inconsistent with existing predictions, resulting in unacceptable values for the power balance and suggesting the unlikely existence of non-Maxwellian electron energy distributions. These problems were resolved by calculations from a new atomic code. It is now possible to perform reliable electron temperature measurements and to eliminate the uncertainties associated with determinations of non-Maxwellian distributions
International Nuclear Information System (INIS)
Miller, Thomas Martin; Celik, Cihangir; Dunn, Michael E; Wagner, John C; McMahan, Kimberly L; Authier, Nicolas; Jacquet, Xavier; Rousseau, Guillaume; Wolff, Herve; Savanier, Laurence; Baclet, Nathalie; Lee, Yi-kang; Trama, Jean-Christophe; Masse, Veronique; Gagnier, Emmanuel; Naury, Sylvie; Blanc-Tranchant, Patrick; Hunter, Richard; Kim, Soon; Dulik, George Michael; Reynolds, Kevin H.
2015-01-01
In October 2010, a series of benchmark experiments were conducted at the French Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA) Valduc SILENE facility. These experiments were a joint effort between the United States Department of Energy Nuclear Criticality Safety Program and the CEA. Their purpose was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems. The series consisted of three single-pulse experiments with the SILENE reactor. In the first experiment the reactor was bare (unshielded), whereas in the second and third experiments it was shielded by lead and polyethylene, respectively. The polyethylene shield of the third experiment had a cadmium liner on its internal and external surfaces, located vertically near the fuel region of SILENE. During each experiment, several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor. Nearly half of the foils and TLDs had additional high-density magnetite concrete, high-density barite concrete, standard concrete, and/or BoroBond shields. CEA Saclay provided all the concrete, and the US Y-12 National Security Complex provided the BoroBond. Measurement data from the experiments were published at the 2011 International Conference on Nuclear Criticality (ICNC 2011) and the 2013 Nuclear Criticality Safety Division (NCSD 2013) topical meeting. Preliminary computational results for the first experiment were presented in the ICNC 2011 paper; they showed poor agreement between the computational results and the measured values for the foils shielded by concrete. Recently, the hydrogen content, boron content, and density of these concrete shields were further investigated within the constraints of the previously available data. New computational results for the first experiment are now available
International Nuclear Information System (INIS)
Oliveira, C.R.E. de; Goddard, A.
1991-01-01
In this paper we review the current status of the finite element method applied to the solution of the neutron transport equation, and discuss its potential role in the field of criticality safety. We show that the method's ability to handle complex, irregular geometry in two and three dimensions, coupled with the accuracy of its solutions, potentially renders it an attractive alternative to the longer-established Monte Carlo method. Details of the most favoured form of the method - that which combines finite elements in space with spherical harmonics in angle - are presented. This form of the method, which has been extensively investigated over the last decade by research groups at the University of London, is numerically implemented in the finite element code EVENT. The code has among its main features the capability of solving fixed-source, eigenvalue and time-dependent problems in complex two- and three-dimensional geometry. Other features include anisotropic up- and down-scatter, direct and/or adjoint solutions, and access to standard data libraries. Numerical examples, ranging from simple criticality benchmark studies to the analysis of idealised three-dimensional reactor cores, are presented to demonstrate the potential of the method. (author)
Criticality coefficient calculation for a small PWR using Monte Carlo Transport Code
Energy Technology Data Exchange (ETDEWEB)
Trombetta, Debora M.; Su, Jian, E-mail: dtrombetta@nuclear.ufrj.br, E-mail: sujian@nuclear.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil); Chirayath, Sunil S., E-mail: sunilsc@tamu.edu [Department of Nuclear Engineering and Nuclear Security Science and Policy Institute, Texas A and M University, TX (United States)
2015-07-01
Computational models of reactors are increasingly used to predict the reactor physics parameters responsible for reactivity changes, which could lead to accidents and losses. In this work, preliminary results of criticality coefficient calculations with the Monte Carlo transport code MCNPX are presented for a small PWR. The computational model developed consists of the core, with fuel elements, radial reflectors and control rods, inside a pressure vessel. Three different geometries were simulated - a single fuel pin, a fuel assembly and the full core - with the aim of comparing the criticality coefficients among them. The coefficients calculated were: the Doppler temperature coefficient, coolant temperature coefficient, coolant void coefficient, power coefficient, and control rod worth. The values calculated by the MCNP code were compared with literature results and showed good agreement with reference data, which validates the computational model and allows it to be used for more complex studies. The coefficient values from the three simulations showed little discrepancy for almost all coefficients investigated; the only exception was the power coefficient. These preliminary results show that a simple model, such as a single fuel assembly, can describe changes in almost all the criticality coefficients, avoiding the need for a complex full-core simulation. (author)
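Reactivity coefficients of the kind listed above are commonly obtained from pairs of k eff values computed at a reference and a perturbed condition. A short sketch of that bookkeeping, with purely illustrative numbers (not values from the paper):

```python
def reactivity(keff):
    """Reactivity in pcm (1 pcm = 1e-5 dk/k) from a multiplication factor."""
    return (keff - 1.0) / keff * 1e5

def temperature_coefficient(keff_ref, keff_pert, delta_T):
    """Reactivity coefficient in pcm/K from two k_eff values bracketing
    a temperature change of delta_T kelvin."""
    return (reactivity(keff_pert) - reactivity(keff_ref)) / delta_T

# Illustrative k_eff values only: raising the fuel temperature by 300 K
# lowers k_eff, giving a negative Doppler temperature coefficient
# (a self-stabilizing feedback).
alpha_doppler = temperature_coefficient(1.00500, 0.99800, 300.0)
print(f"{alpha_doppler:.2f} pcm/K")
```

With Monte Carlo codes the statistical uncertainty of each k eff must be small compared with the reactivity difference, which is why such coefficient calculations need long runs.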
Criticality calculation of the deposits for the fuel elements in RP-10 nuclear research reactor
International Nuclear Information System (INIS)
Aguirre, Alvaro; Bruna, Ruben
2013-01-01
This paper shows the results of criticality calculations, performed with the MCNP5 code, for the deposits for irradiated and non-irradiated fuel elements in the RP-10 research reactor. In all cases, and for both normal and incidental conditions, the effective multiplication factor (K eff ) is less than 0.90, in accordance with the acceptance criterion. (authors)
Calculation of critical concentrations of actinides in an infinite medium of silicon dioxide
International Nuclear Information System (INIS)
Okuno, Hiroshi; Sato, Shohei; Kawasaki, Hiromitsu
2009-01-01
The critical concentrations of actinides in metal-silicon-dioxide (SiO 2 ) and in metal-water (H 2 O) mixtures were calculated for 26 actinides, including 233,235 U, 239,241 Pu, 242m Am, 243,245,247 Cm, and 249,251 Cf. The calculations were performed using the Monte Carlo neutron transport code MCNP5 combined with the evaluated nuclear data library JENDL-3.3. The results showed that the critical concentration of an actinide in metal-SiO 2 mixtures was about 1/5 of that in metal-H 2 O mixtures for all the fissile nuclides investigated. The k ∞ values of metal-SiO 2 and metal-H 2 O mixtures at one-half of the respective critical concentration, which was taken as the subcritical concentration limit, were found to be less than 0.8 for all the actinides considered. By applying the sum-of-fractions rule to the concentrations of six nuclides in metal-SiO 2 mixtures, the subcriticality of high-level radioactive wastes was confirmed for a reported sample. The effects of different nuclear data libraries on the calculated critical concentrations were found to be large for 242 Cm, 247 Cm, and 250 Cf, by comparison with results calculated with another evaluated nuclear data library, ENDF/B-VI. (author)
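The sum-of-fractions rule applied above combines several nuclides against their individual subcritical concentration limits. A minimal sketch of the rule follows; the concentrations and limits are hypothetical placeholders, not values from the paper:

```python
def sum_of_fractions(concentrations, limits):
    """Sum-of-fractions rule: each nuclide's concentration divided by its
    subcritical concentration limit, summed over the mixture. The mixture
    is judged acceptable when the total is below 1."""
    return sum(concentrations[n] / limits[n] for n in concentrations)

# Hypothetical concentrations and limits in g/cm^3 (not the paper's values)
limits = {"U-235": 11.6, "Pu-239": 7.0}
sample = {"U-235": 1.2, "Pu-239": 0.7}
total = sum_of_fractions(sample, limits)
print(total < 1.0)  # True here: this sample satisfies the rule
```

The rule is conservative because each term is evaluated against a limit derived for that nuclide alone, so a mixture passing the test is subcritical under the same assumptions as the single-nuclide limits.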
Aigyl Ilshatovna, Sabirova; Svetlana Fanilevna, Khasanova; Vildanovna, Nagumanova Regina
2018-05-01
On the basis of decision making theory (minimax and maximin approaches), the authors propose a technique, illustrated with calculated critical values of effectiveness indicators for agricultural producers in the Republic of Tatarstan for 2013-2015. The necessity of monitoring the effectiveness of state support, and directions for its improvement, are justified.
International Nuclear Information System (INIS)
Guillemot, M.; Colomb, G.
1985-01-01
A series of criticality benchmark experiments with a small LWR-type core, reflected by 30 cm of lead, was defined jointly by SEC (Service d'Etude de Criticite), Fontenay-aux-Roses, and SRD (Safety and Reliability Directorate). These experiments are highly representative of the reflecting effect of lead, since the contribution of the lead to the reactivity was assessed at about 30% in ΔK. The experiments were carried out by SRSC (Service de Recherche en Surete et Criticite), Valduc, in December 1983 in the subcritical facility called APPARATUS B. In addition, they confirmed and measured the effect on reactivity of a water gap between the core and the lead reflector: with a water gap of less than 1 cm, the reactivity can be greater than that of the core reflected directly by the lead or by over 20 cm of water. The experimental results were analysed extensively by SRD with the aid of the MONK Monte Carlo code, and to some extent by SEC with the MORET Monte Carlo code. All the results obtained are presented in the summary tables. These experiments allowed the different available cross-section libraries to be compared
International Nuclear Information System (INIS)
Chadwick, M.B.; Young, P.G.
1994-08-01
The authors have developed the GNASH code to include photonuclear reactions for incident energies up to 140 MeV. Photoabsorption is modeled through the giant resonance at lower energies and the quasideuteron mechanism at higher energies, and the angular momentum coupling of the incident photon to the target is properly accounted for. After the initial interaction, primary and multiple preequilibrium emission of fast particles can occur before decay of the equilibrated compound nucleus. The angular distributions from compound nucleus decay are taken as isotropic, while those from preequilibrium emission (obtained from a phase-space model which conserves momentum) are forward-peaked. To test the new modeling, they apply the code to calculate photonuclear reactions on 208 Pb for incident energies up to 140 MeV
Sakamoto, Y
2002-01-01
Prevention of nuclear disasters requires information on the dose equivalent rate distribution inside and outside the site, and on energy spectra. A three-dimensional radiation transport calculation code is a useful tool for site-specific detailed analysis that takes facility structures into consideration. For predicting individual doses in future countermeasures, it is important to confirm the reliability of methods that evaluate dose equivalent rate distributions and energy spectra using a Monte Carlo radiation transport calculation code, as well as the factors which influence the dose equivalent rate distribution outside the site. The reliability of the radiation transport calculation code and the factors influencing the dose equivalent rate distribution were examined through analyses of the criticality accident at JCO's uranium processing plant on September 30, 1999. The radiation transport calculations, including burn-up calculations, were done using the structural info...
Critical mass calculations for 241Am, 242mAm and 243Am
International Nuclear Information System (INIS)
Dias, Hemanth; Tancock, Nigel; Clayton, Angela
2003-01-01
Critical mass calculations are reported for 241 Am, 242m Am and 243 Am using the MONK and MCNP computer codes with the UKNDL, JEF-2.2, ENDF/B-VI and JENDL-3.2 nuclear data libraries. Results are reported for spheres of americium metal and dioxide in bare, water-reflected and steel-reflected systems. Comparison of the results led to the identification of a serious inconsistency in the 241 Am ENDF/B-VI DICE library used by MONK; this demonstrates the importance of using different codes to verify critical mass calculations. The 241 Am critical mass estimates obtained using UKNDL and ENDF/B-VI show good agreement with experimentally inferred data, whilst both JEF-2.2 and JENDL-3.2 produce higher estimates of the critical mass. The computed critical mass estimates for 242m Am obtained using ENDF/B-VI are lower than the results produced using the other nuclear data libraries; the ENDF/B-VI fission cross-section for 242m Am is significantly higher than the other evaluations in the fast region and is not supported by recent experimental data. There is wide variation in the computed 243 Am critical mass estimates, suggesting that there is still considerable uncertainty in the 243 Am nuclear data. (author)
Directory of Open Access Journals (Sweden)
Wiji Suwarno
2017-02-01
Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), or in Indonesian terms holistic quality management, because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes to obtain information that can help the organization improve its performance.
Kosar, Naveen; Mahmood, Tariq; Ayub, Khurshid
2017-12-01
Benchmark study has been carried out to find a cost effective and accurate method for bond dissociation energy (BDE) of carbon halogen (Csbnd X) bond. BDE of C-X bond plays a vital role in chemical reactions, particularly for kinetic barrier and thermochemistry etc. The compounds (1-16, Fig. 1) with Csbnd X bond used for current benchmark study are important reactants in organic, inorganic and bioorganic chemistry. Experimental data of Csbnd X bond dissociation energy is compared with theoretical results. The statistical analysis tools such as root mean square deviation (RMSD), standard deviation (SD), Pearson's correlation (R) and mean absolute error (MAE) are used for comparison. Overall, thirty-one density functionals from eight different classes of density functional theory (DFT) along with Pople and Dunning basis sets are evaluated. Among different classes of DFT, the dispersion corrected range separated hybrid GGA class along with 6-31G(d), 6-311G(d), aug-cc-pVDZ and aug-cc-pVTZ basis sets performed best for bond dissociation energy calculation of C-X bond. ωB97XD show the best performance with less deviations (RMSD, SD), mean absolute error (MAE) and a significant Pearson's correlation (R) when compared to experimental data. ωB97XD along with Pople basis set 6-311g(d) has RMSD, SD, R and MAE of 3.14 kcal mol-1, 3.05 kcal mol-1, 0.97 and -1.07 kcal mol-1, respectively.
Shielding benchmark problems, (2)
International Nuclear Information System (INIS)
Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.
1980-02-01
Shielding benchmark problems prepared by the Working Group for the Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Fourteen new shielding benchmark problems are presented, in addition to the twenty-one problems already proposed, for evaluating the calculational algorithms and accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. The present benchmark problems principally address the backscattering and streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)
Verification and validation benchmarks.
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, William Louis; Trucano, Timothy Guy
2007-02-01
Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
Verification and validation benchmarks
International Nuclear Information System (INIS)
Oberkampf, William Louis; Trucano, Timothy Guy
2007-01-01
Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the
Verification and validation benchmarks
International Nuclear Information System (INIS)
Oberkampf, William L.; Trucano, Timothy G.
2008-01-01
Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the
International Nuclear Information System (INIS)
Koponen, B.L.; Hampel, V.E.
1982-01-01
This compilation contains 688 complete summaries of papers on nuclear criticality safety as presented at meetings of the American Nuclear Society (ANS). The selected papers contain criticality parameters for fissile materials derived from experiments and calculations, as well as criticality safety analyses for fissile material processing, transport, and storage. The compilation was developed as a component of the Nuclear Criticality Information System (NCIS) now under development at the Lawrence Livermore National Laboratory. The compilation is presented in two volumes: Volume 1 contains a directory to the ANS Transaction volume and page number where each summary was originally published, the author concordance, and the subject concordance derived from the keyphrases in titles. Volume 2 contains - in chronological order - the full-text summaries, reproduced here by permission of the American Nuclear Society from their Transactions, volumes 1-41
International Nuclear Information System (INIS)
Woon, D.E.; Peterson, K.A.; Dunning, T.H. Jr.
1998-01-01
The interaction of Ar with H 2 and HCl has been studied using Moeller-Plesset perturbation theory (MP2, MP3, MP4) and coupled-cluster [CCSD, CCSD(T)] methods with augmented correlation consistent basis sets. Basis sets as large as triply augmented quadruple zeta quality were used to investigate the convergence trends. Interaction energies were determined using the supermolecule approach with the counterpoise correction to account for basis set superposition error. Comparison with the available empirical potentials finds excellent agreement for both the binding energies and the transition states. For Ar - H 2 , the estimated complete basis set (CBS) limits for the binding energies of the two equivalent minima and the connecting transition state (TS) are 55 and 47 cm -1 at the MP4 level and 54 and 46 cm -1 at the CCSD(T) level, respectively [the XC(fit) empirical potential of Bissonnette et al. [J. Chem. Phys. 105, 2639 (1996)] yields 56.6 and 47.8 cm -1 for H 2 (v=0)]. The estimated CBS limits for the binding energies of the two minima and the transition state of Ar - HCl are 185, 155, and 109 cm -1 at the MP4 level and 176, 147, and 105 cm -1 at the CCSD(T) level, respectively [the H6(4,3,0) empirical potential of Hutson [J. Phys. Chem. 96, 4237 (1992)] yields 176.0, 148.3, and 103.3 cm -1 for HCl (v=0)]. Basis sets containing diffuse functions of (dfg) symmetries were found to be essential for accurately modeling these two complexes, which are largely bound by dispersion and induction forces. Highly correlated wave functions were also required for accurate results; this was found to be particularly true for Ar - HCl, where significant differences in calculated binding energies were observed between MP2, MP4, and CCSD(T). copyright 1998 American Institute of Physics
Energy Technology Data Exchange (ETDEWEB)
Kubas, Adam; Blumberger, Jochen, E-mail: j.blumberger@ucl.ac.uk [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom); Hoffmann, Felix [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom); Lehrstuhl für Theoretische Chemie, Ruhr-Universität Bochum, Universitätsstr. 150, 44801 Bochum (Germany); Heck, Alexander; Elstner, Marcus [Institute of Physical Chemistry, Karlsruhe Institute of Technology, Fritz-Haber-Weg 6, 76131 Karlsruhe (Germany); Oberhofer, Harald [Department of Chemistry, Technical University of Munich, Lichtenbergstr. 4, 85747 Garching (Germany)
2014-03-14
We introduce a database (HAB11) of electronic coupling matrix elements (H{sub ab}) for electron transfer in 11 π-conjugated organic homo-dimer cations. High-level ab initio calculations at the multireference configuration interaction MRCI+Q level of theory, n-electron valence state perturbation theory NEVPT2, and (spin-component scaled) approximate coupled cluster model (SCS)-CC2 are reported for this database to assess the performance of three DFT methods of decreasing computational cost: constrained density functional theory (CDFT), fragment-orbital DFT (FODFT), and self-consistent charge density functional tight-binding (FODFTB). We find that the CDFT approach in combination with a modified PBE functional containing 50% Hartree-Fock exchange gives the best results for absolute H{sub ab} values (mean relative unsigned error = 5.3%) and exponential distance decay constants β (4.3%). CDFT in combination with pure PBE overestimates couplings by 38.7% due to a too diffuse excess charge distribution, whereas the economic FODFT and highly cost-effective FODFTB methods underestimate couplings by 37.6% and 42.4%, respectively, due to neglect of the interaction between donor and acceptor. The errors are systematic, however, and can be significantly reduced by applying a uniform scaling factor for each method. Applications to dimers outside the database, specifically rotated thiophene dimers and larger acenes up to pentacene, suggest that the same scaling procedure significantly improves the FODFT and FODFTB results for larger π-conjugated systems relevant to organic semiconductors and DNA.
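The uniform scaling factor mentioned above, correcting a method's systematic under- or overestimation of couplings, can be obtained by least squares against reference values. A sketch with placeholder numbers (not the HAB11 data):

```python
def uniform_scale(approx, reference):
    """Least-squares scale factor s minimizing sum((s*a - r)^2),
    i.e. s = <a, r> / <a, a>."""
    num = sum(a * r for a, r in zip(approx, reference))
    den = sum(a * a for a in approx)
    return num / den

# Placeholder couplings in meV (not the HAB11 values). A method that
# systematically underestimates |H_ab| by roughly 40% is corrected by
# a single factor near 1/0.6 ~ 1.67 applied to all dimers.
approx = [30.0, 55.0, 12.0]
reference = [50.0, 90.0, 20.0]
s = uniform_scale(approx, reference)
scaled = [s * a for a in approx]
```

A single multiplicative factor only helps when the error is systematic, as the abstract notes; scatter around the trend is left unchanged by the rescaling.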
Accelerator shielding benchmark problems
International Nuclear Information System (INIS)
Hirayama, H.; Ban, S.; Nakamura, T.
1993-01-01
Accelerator shielding benchmark problems prepared by the Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by the Radiation Safety Control Center of the National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithms, the accuracy of computer codes, and the nuclear data used in the codes. (author)
International Nuclear Information System (INIS)
Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.
1978-09-01
Shielding benchmark problems were prepared by the Working Group for the Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithms and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. (author)
Gamow's calculation of the neutron star's critical mass revised
International Nuclear Information System (INIS)
Ludwig, Hendrik; Ruffini, Remo
2014-01-01
It has at times been indicated that Landau introduced neutron stars in his classic paper of 1932. This is clearly impossible because the discovery of the neutron by Chadwick was submitted more than one month after Landau's work. Therefore, and according to his calculations, what Landau really studied were white dwarfs, and the critical mass he obtained clearly matched the value derived by Stoner and later by Chandrasekhar. The birth of the concept of a neutron star is still unclear today. Clearly, in 1934, the work of Baade and Zwicky pointed to neutron stars as originating from supernovae. Oppenheimer in 1939 is also well known to have introduced general relativity (GR) in the study of neutron stars. The aim of this note is to point out that the crucial idea for treating the neutron star was advanced in Newtonian theory by Gamow. However, this pioneering work was plagued by mistakes. The critical mass he should have obtained was 6.9 M ☉ , not the one he declared, namely 1.5 M ☉ . He was probably led to this result by the work of Landau on white dwarfs. We revise Gamow's calculation of the critical mass with regard to calculational and conceptual aspects, and discuss whether it is justified to consider it the first neutron-star critical mass. We compare Gamow's approach to other early and modern approaches to the problem.
Critical and subcritical mass calculations of fissionable nuclides based on JENDL-3.2+
International Nuclear Information System (INIS)
Okuno, H.
2002-01-01
We calculated critical and subcritical masses of 10 fissionable actinides ( 233 U, 235 U, 238 Pu, 239 Pu, 241 Pu, 242m Am, 243 Cm, 244 Cm, 249 Cf and 251 Cf) in metal and in metal-water mixtures (except 238 Pu and 244 Cm). The calculation was made with a combination of a continuous energy Monte Carlo neutron transport code, MCNP-4B2, and the latest released version of the Japanese Evaluated Nuclear Data Library, JENDL-3.2. Other evaluated nuclear data files, ENDF/B-VI, JEF-2.2, and JENDL-3.3 in its preliminary version, were also applied to find differences in results originating from different nuclear data files. For the so-called big three fissiles ( 233 U, 235 U and 239 Pu), analyzing the criticality experiments cited in the ICSBEP Handbook validated the code-library combination, and calculation errors were consequently evaluated. Estimated critical and lower-limit critical masses of the big three in a sphere with/without a water or SS-304 reflector were supplied, and they were compared with the subcritical mass limits of ANS-8.1. (author)
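For orientation, the scale of such critical masses can be reproduced with a one-group bare-sphere diffusion estimate. The constants below are assumed round numbers for U-235 metal, not values taken from JENDL or ENDF, and the method is far cruder than the Monte Carlo calculations of the paper:

```python
import math

# Illustrative one-group constants for U-235 metal (assumed round
# values for a sketch, not evaluated nuclear data):
NU = 2.6           # neutrons per fission
SIGMA_F = 0.0593   # macroscopic fission cross section, 1/cm
SIGMA_A = 0.0634   # macroscopic absorption cross section, 1/cm
D = 1.02           # diffusion coefficient, cm
RHO = 18.75        # density, g/cm^3

def bare_sphere_critical(nu=NU, sf=SIGMA_F, sa=SIGMA_A, d=D, rho=RHO):
    """One-group bare-sphere criticality: material buckling
    B^2 = (nu*Sigma_f - Sigma_a)/D, extrapolated radius pi/B,
    minus an extrapolation distance of about 2.13*D."""
    b2 = (nu * sf - sa) / d
    r = math.pi / math.sqrt(b2) - 2.13 * d          # critical radius, cm
    mass = rho * 4.0 / 3.0 * math.pi * r ** 3 / 1e3  # critical mass, kg
    return r, mass
```

With these assumed constants the estimate lands near 8 cm and a few tens of kilograms, the right order of magnitude for a bare U-235 metal sphere.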
International Nuclear Information System (INIS)
Apostolov, T.; Manolova, M.; Prodanova, R.
2001-01-01
A methodology for criticality safety analysis of spent fuel casks with possibilities for burnup credit implementation is presented. This methodology includes the well-known and widely applied program systems NESSEL-NUKO for depletion and SCALE-4.4 for criticality calculations. The abilities of this methodology to analyze storage and transportation casks with different types of spent fuel are demonstrated on the basis of various tests. The depletion calculations have been carried out for the power reactor (WWER-440 and WWER-1000) and research reactor IRT-2000 (C-36) fuel assemblies. The criticality calculation models have been developed on the basis of real fuel casks designed by the leading international companies (for WWER-440 and WWER-1000 spent fuel assemblies), as well as for a real WWER-440 storage cask used at the 'Kozloduy' NPP. The results obtained show that the criticality safety criterion K eff less than 0.95 is satisfied for both fresh and spent fuel. Moreover, the implementation of burnup credit allows the reduced reactivity of spent fuel to be accounted for and the conservatism of the fresh-fuel assumption to be evaluated. (author)
International Nuclear Information System (INIS)
Yamamoto, Toshihiro; Miyoshi, Yoshinori
2004-01-01
A new algorithm of Monte Carlo criticality calculations for implementing Wielandt's method, which is one of the acceleration techniques for deterministic source iteration methods, is developed, and the algorithm can be successfully implemented into the MCNP code. In this algorithm, some of the fission neutrons emitted during the random walk processes are tracked within the current cycle, and thus the fission source distribution used in the next cycle spreads more widely. Applying this method intensifies the neutron interaction effect even in a loosely coupled array where conventional Monte Carlo criticality methods have difficulties, and a converged fission source distribution can be obtained with fewer cycles. Computing time spent on one cycle, however, increases because of tracking fission neutrons within the current cycle, which eventually results in an increase of total computing time up to convergence. In addition, statistical fluctuations of the fission source distribution in a cycle are worsened by applying Wielandt's method to Monte Carlo criticality calculations. However, since fission source convergence is attained with fewer source iterations, a reliable determination of convergence can easily be made even in a system with slow convergence. This acceleration method is expected to contribute to preventing incorrect Monte Carlo criticality calculations. (author)
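The deterministic idea behind Wielandt's method can be sketched on a small matrix eigenproblem: iterating with a shifted-inverse operator leaves the eigenvectors unchanged but shrinks the dominance ratio, so far fewer iterations are needed. This is only a toy analogue of the Monte Carlo scheme described above (illustrative matrix, not MCNP):

```python
import numpy as np

def power_iteration(A, iters):
    """Plain power iteration; convergence rate is set by the
    dominance ratio lambda2/lambda1."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x @ A @ x          # Rayleigh-quotient estimate of lambda1

def wielandt_iteration(A, shift, iters):
    """Wielandt-shifted iteration: apply (A - shift*I)^-1 instead of A.
    Eigenvectors are unchanged, but with the shift placed slightly above
    the fundamental eigenvalue the effective dominance ratio shrinks."""
    M = np.linalg.inv(A - shift * np.eye(A.shape[0]))
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = M @ x
        x /= np.linalg.norm(x)
    return x @ A @ x
```

On a matrix with eigenvalues 1.0 and 0.99 (dominance ratio 0.99), 20 shifted iterations do what hundreds of plain power iterations would, mirroring the cycle-count reduction claimed in the abstract.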
Weterings, Peter J J M; Loftus, Christine; Lewandowski, Thomas A
2016-08-22
Potential adverse effects of chemical substances on thyroid function are usually examined by measuring serum levels of thyroid-related hormones. Instead, recent risk assessments for thyroid-active chemicals have focussed on iodine uptake inhibition, an upstream event that by itself is not necessarily adverse. Establishing the extent of uptake inhibition that can be considered de minimis, the chosen benchmark response (BMR), is therefore critical. The BMR values selected by two international advisory bodies were 5% and 50%, a difference that had correspondingly large impacts on the estimated risks and the health-based guidance values that were established. Potential treatment-related inhibition of thyroidal iodine uptake is usually determined by comparing thyroidal uptake of radioactive iodine (RAIU) during treatment with a single pre-treatment RAIU value. In the present study it is demonstrated that the physiological intra-individual variation in iodine uptake is much larger than 5%. Consequently, in-treatment RAIU values, expressed as a percentage of the pre-treatment value, have an inherent variation that needs to be considered when conducting dose-response analyses. Based on statistical and biological considerations, a BMR of 20% is proposed for benchmark dose analysis of human thyroidal iodine uptake data, to take the inherent variation in relative RAIU data into account. Implications for the tolerated daily intakes for perchlorate and chlorate, recently established by the European Food Safety Authority (EFSA), are discussed.
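To make the role of the BMR concrete, the sketch below computes a benchmark dose from a purely hypothetical exponential inhibition curve (the model and its scale parameter d0 are illustrative assumptions, not EFSA's dose-response data); it shows how strongly the chosen BMR drives the resulting BMD:

```python
import math

def bmd_exponential(d0, bmr):
    """Benchmark dose for a hypothetical exponential inhibition model
    I(d) = 1 - exp(-d / d0): solve I(BMD) = bmr in closed form,
    BMD = -d0 * ln(1 - bmr)."""
    if not 0.0 < bmr < 1.0:
        raise ValueError("BMR must be a fraction in (0, 1)")
    return -d0 * math.log(1.0 - bmr)

# Sensitivity of the benchmark dose to the chosen BMR
# (d0 = 1 is an arbitrary illustrative dose scale):
bmds = {bmr: bmd_exponential(1.0, bmr) for bmr in (0.05, 0.20, 0.50)}
```

Under this toy curve the BMD for a 50% BMR is more than an order of magnitude larger than for a 5% BMR, which is the kind of spread in guidance values the abstract describes.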
Exploring the use of a deterministic adjoint flux calculation in criticality Monte Carlo simulations
International Nuclear Information System (INIS)
Jinaphanh, A.; Miss, J.; Richet, Y.; Martin, N.; Hebert, A.
2011-01-01
The paper presents a preliminary study on the use of a deterministic adjoint flux calculation to improve source convergence issues by reducing the number of iterations needed to reach the converged distribution in criticality Monte Carlo calculations. Slow source convergence in Monte Carlo eigenvalue calculations may lead to underestimating the effective multiplication factor or reaction rates. The convergence speed depends on the initial distribution and the dominance ratio. We propose using an adjoint flux estimation to modify the transition kernel according to the Importance Sampling technique. This adjoint flux is also used as the initial guess of the first-generation distribution for the Monte Carlo simulation. The calculated variance of a local current estimator is also being checked. (author)
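The Importance Sampling idea invoked above (biasing the sampling by an importance function) can be illustrated on a one-dimensional integral, where drawing samples from a density shaped like the integrand reduces the estimator variance. This is a generic toy, not the adjoint-flux transition-kernel scheme of the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Target: I = integral_0^1 x^4 dx = 0.2

# Crude Monte Carlo: sample x uniformly on (0, 1), score x^4
x = rng.random(n)
crude = x ** 4

# Importance sampling: sample from p(x) = 3 x^2 (inverse CDF gives
# x = u^(1/3)) and score the weighted estimator x^4 / p(x) = x^2 / 3
y = rng.random(n) ** (1.0 / 3.0)
weighted = y ** 2 / 3.0

est_crude, var_crude = crude.mean(), crude.var()
est_is, var_is = weighted.mean(), weighted.var()
```

Both estimators are unbiased for 0.2, but the importance-sampled score has roughly an order of magnitude lower variance here, the same mechanism by which an adjoint-informed kernel concentrates sampling where it matters.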
Alize 3 - first critical experiment for the franco-german high flux reactor - calculations
International Nuclear Information System (INIS)
Scharmer, K.
1969-01-01
The results of experiments in the light water cooled, D 2 O reflected critical assembly ALIZE III have been compared to calculations. A diffusion model was used with 3 fast and epithermal groups and two overlapping thermal groups, which leads to good agreement of calculated and measured power maps, even in the case of strong variations of the neutron spectrum in the core. The difference between calculated and measured k eff was smaller than 0.5 per cent δk/k. Calculations were made of void and structural material coefficients, of the reactivity of 'black' rods in the reflector, of spectrum variations (Cd-ratio, Pu-U-ratio) and of the delayed photoneutron fraction in the D 2 O reflector. Measurements of the influence of beam tubes on reactivity and flux distribution in the reflector were interpreted with regard to an optimum beam tube arrangement for the Franco-German High Flux Reactor. (author) [fr
International Nuclear Information System (INIS)
Riera, R.; Oliveira, P.M.C. de; Chaves, C.M.G.F.; Queiroz, S.L.A. de.
1980-04-01
A real-space renormalization group approach for the bond percolation problem in a square lattice with first- and second-neighbour bonds is proposed. The respective probabilities are treated as independent variables. Two types of cells are constructed. In one of them the lattice is considered as two interpenetrating sublattices, first-neighbour bonds playing the role of intersublattice links. This allows the calculation of both critical exponents ν and γ without resorting to any external field. Values found for the critical indices are in good agreement with data available in the literature. The phase diagram in parameter space is also obtained in each case. (Author) [pt
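For comparison, the renormalization recursion for the simplest b = 2 cell of square-lattice bond percolation with first-neighbour bonds only (a simpler construction than the paper's two-sublattice cells, used here just to illustrate the method) can be worked out numerically; it reproduces the exact p_c = 1/2 and a fair estimate of ν:

```python
import numpy as np

def R(p):
    """Renormalized bond probability for the standard 5-bond
    (Wheatstone-bridge) b = 2 cell of square-lattice bond percolation."""
    q = 1.0 - p
    return p**5 + 5*p**4*q + 8*p**3*q**2 + 2*p**2*q**3

# Non-trivial fixed point p* of R(p) = p, found by bisection:
# R(p) < p below p* (flow to the empty lattice), R(p) > p above it.
lo, hi = 0.01, 0.99
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if R(mid) - mid > 0.0:
        hi = mid
    else:
        lo = mid
p_star = 0.5 * (lo + hi)

# Thermal eigenvalue and correlation-length exponent nu = ln b / ln R'(p*)
eps = 1e-6
lam = (R(p_star + eps) - R(p_star - eps)) / (2.0 * eps)
nu = np.log(2.0) / np.log(lam)
```

This cell gives p* = 1/2 exactly (matching the known square-lattice bond threshold) and ν ≈ 1.43, reasonably close to the exact 4/3.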
DEFF Research Database (Denmark)
Lawson, Lartey; Nielsen, Kurt
2005-01-01
We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics on dairy farms influence the technical efficiency....
DEFF Research Database (Denmark)
Peña, Alfredo
This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...
DEFF Research Database (Denmark)
Hougaard, Jens Leth; Tvede, Mich
2002-01-01
Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms; efficiency and comprehensive monotonicity characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...... in order to obtain a unique selection...
International Nuclear Information System (INIS)
Ivanova, T.; Polyakov, A.; Saraeva, T.; Tsiboulia, A.
2001-01-01
Criticality calculations using SCALA were validated against data presented in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. This paper contains the results of a statistical analysis of discrepancies between calculated and benchmark-model k eff , and conclusions about the uncertainties of criticality prediction for different types of multiplying systems following from this analysis. (authors)
International Nuclear Information System (INIS)
Pesic, M.
1998-01-01
A selected set of the RB reactor benchmark cores is presented in this paper. The first results of validation of the well-known Monte Carlo code MCNP™ and the adjoining neutron cross section libraries are given. They confirm the idea behind the proposal of the new U-D 2 O criticality benchmark system and support the intention to include this system in the next edition of the recent OECD/NEA project, the International Handbook of Evaluated Criticality Safety Experiments, in the near future. (author)
International Nuclear Information System (INIS)
Berger, E.; Till, E.; Brenne, T.; Heath, A.; Hochholdinger, B.; Kassem-Manthey, K.; Kessler, L.; Koch, N.; Kortmann, G.; Kroeff, A.; Otto, T.; Verhoeven, H.; Steinbeck, G.; Vu, T.-C.; Wiegand, K.
2005-01-01
To increase the accuracy of finite element simulations in daily practice, the local German and Austrian Deep Drawing Research Groups of IDDRG founded a special Working Group in 2000. The main objectives of this group were the ongoing study and discussion of numerical and material effects in simulation jobs and the development of possible solutions. As a first theme of this group, the intensive study of small die radii and the possibility of detecting material failure at these critical forming positions was selected. The part itself is a fictitious outer body panel into which the original door handle of the VW Golf A4 has been incorporated, a typical position of possible material necking or rupture in the press shop. All conditions for a successful simulation were taken care of in advance: material data, boundary conditions, friction, FLC and others were determined for the two materials under investigation, a mild steel and a dual phase steel HXT500X. The results of the experiments were used to design the descriptions of two different benchmark runs for the simulation. The simulations with different programs as well as with different parameters revealed, on the one hand, parameters with negligible impact and, on the other hand, parameters with strong impact on the result, and thereby a different impact on a possible material failure prediction
An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points
Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun
2014-05-01
Knowledge of critical points is important to determine the phase behavior of a mixture. This work proposes a reliable and accurate method in order to locate the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or alternatively, on the PR EoS. In order to solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which can calculate all the variables simultaneously in each iteration step. The improvements mainly focus on the derivatives of the Jacobian matrix, on the convergence criteria, and on the damping coefficient. As a result, all equations and related conditions required for the computation of the scheme are illustrated in this paper. Finally, experimental data for the critical points of 44 mixtures are adopted in order to validate the method. For the SRK EoS, average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the commercial software package Calsep PVTSIM's prediction errors are 131.02 kPa and 3.24 K. For the PR EoS, the two above mentioned average absolute errors are 129.32 kPa and 2.45 K, while the PVTSIM's errors are 137.24 kPa and 2.55 K, respectively.
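The damped Newton-Raphson structure described above can be demonstrated on the simplest possible case: the van der Waals fluid in reduced units (v = V/b, t = RTb/a), whose critical point is known exactly (v = 3, t = 8/27). This is a sketch of the scheme's skeleton (damped simultaneous Newton steps on the two criticality conditions), not the SRK/PR implementation of the paper:

```python
import numpy as np

def vdw_critical_reduced(damping=0.8, tol=1e-12, max_iter=200):
    """Damped Newton-Raphson for the van der Waals critical point in
    reduced units: solve dP/dV = 0 and d2P/dV2 = 0 simultaneously."""
    def F(x):
        v, t = x
        f1 = -t / (v - 1.0) ** 2 + 2.0 / v ** 3        # dP/dV = 0
        f2 = 2.0 * t / (v - 1.0) ** 3 - 6.0 / v ** 4   # d2P/dV2 = 0
        return np.array([f1, f2])
    def J(x):
        v, t = x
        return np.array([
            [2.0 * t / (v - 1.0) ** 3 - 6.0 / v ** 4, -1.0 / (v - 1.0) ** 2],
            [-6.0 * t / (v - 1.0) ** 4 + 24.0 / v ** 5, 2.0 / (v - 1.0) ** 3],
        ])
    x = np.array([2.5, 0.29])              # crude initial guess
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), F(x))   # full Newton step
        x = x - damping * dx               # damped update
        if np.linalg.norm(dx) < tol:
            break
    return x                               # (v_c, t_c)
```

The damping coefficient trades speed for robustness exactly as in the paper's scheme: the iteration still converges quadratically near the root but takes shorter, safer steps far from it.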
Benchmark comparisons of evaluated nuclear data files
International Nuclear Information System (INIS)
Resler, D.A.; Howerton, R.J.; White, R.M.
1994-05-01
With the availability and maturity of several evaluated nuclear data files, it is timely to compare the results of integral tests with calculations using these different files. We discuss here our progress in making integral benchmark tests of the following nuclear data files: ENDL-94, ENDF/B-V and -VI, JENDL-3, JEF-2, and BROND-2. The methods used to process these evaluated libraries in a consistent way into applications files for use in Monte Carlo calculations are presented. Using these libraries, we are calculating and comparing to experiment k eff for 68 fast critical assemblies of 233,235 U and 239 Pu with reflectors of various materials and thicknesses
The Davidson Method as an alternative to power iterations for criticality calculations
International Nuclear Information System (INIS)
Subramanian, C.; Van Criekingen, S.; Heuveline, V.; Nataf, F.; Have, P.
2011-01-01
The Davidson method is implemented within the neutron transport core solver parafish to solve k-eigenvalue criticality transport problems. The parafish solver is based on domain decomposition, uses spherical harmonics (P_N method) for angular discretization, and nonconforming finite elements for spatial discretization. The Davidson method is compared to the traditional power iteration method in that context. Encouraging numerical results are obtained with both sequential and parallel calculations. (author)
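As a toy illustration of the alternative to power iteration named in the title, a dense Davidson solver for the dominant eigenvalue of a symmetric, diagonally dominant matrix might look as follows. This is the generic algorithm with a diagonal preconditioner, not the parafish P_N/finite-element implementation:

```python
import numpy as np

def davidson_largest(A, tol=1e-8, max_iter=100):
    """Davidson iteration for the largest eigenvalue of a symmetric,
    diagonally dominant matrix (dense, illustrative version)."""
    n = A.shape[0]
    diag = np.diag(A)
    V = np.zeros((n, 0))
    t = np.ones(n)                      # initial search direction
    theta = 0.0
    for _ in range(max_iter):
        # orthogonalize the new direction against the subspace (twice,
        # to limit round-off loss of orthogonality)
        for _ in range(2):
            if V.shape[1] > 0:
                t = t - V @ (V.T @ t)
        norm = np.linalg.norm(t)
        if norm < 1e-12:
            break                       # subspace complete; Ritz value exact
        V = np.column_stack([V, t / norm])
        # Rayleigh-Ritz projection onto the subspace
        H = V.T @ A @ V
        w, S = np.linalg.eigh(H)
        theta, s = w[-1], S[:, -1]      # target the largest Ritz pair
        u = V @ s
        r = A @ u - theta * u           # residual
        if np.linalg.norm(r) < tol:
            break
        # diagonal (Davidson) preconditioner for the correction vector
        t = r / (theta - diag + 1e-12)
    return theta
```

Unlike power iteration, each step enlarges a search subspace and extracts the best Ritz approximation from it, which is what gives the method its faster convergence on well-preconditioned problems.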
Improvement of the skeleton tables for calculation of the critical heat load
International Nuclear Information System (INIS)
Gotovskij, M.A.; Kvetnyj, M.A.
2002-01-01
The paper presents an analysis of the drawbacks of the skeleton tables of critical heat flux applied in thermal-hydraulic calculation codes. It demonstrates the necessity of taking into account the specific mechanisms of the dryout crisis and of the boiling crisis at low mass flow rates and small subcoolings below the saturation temperature. Attention is drawn to the necessity of a detailed account of the natural limitations of the skeleton tables' field of application [ru
Prediction calculation of HTR-10 fuel loading for the first criticality
International Nuclear Information System (INIS)
Jing Xingqing; Yang Yongwei; Gu Yuxiang; Shan Wenzhi
2001-01-01
The 10 MW high temperature gas cooled reactor (HTR-10) was built at the Institute of Nuclear Energy Technology, Tsinghua University, and the first criticality was attained in Dec. 2000. The high temperature gas cooled reactor physics simulation code VSOP was used for the prediction of the fuel loading for the HTR-10 first criticality. The numbers of fuel elements and graphite elements were predicted to provide a reference for the first criticality experiment. The prediction calculations took into account factors including the double heterogeneity of the fuel element, buckling feedback for the spectrum calculation, the effect of the mixture of the graphite and the fuel elements, and the correction of the diffusion coefficients near the upper cavity based on transport theory. The effects of impurities in the fuel and graphite elements in the core, and of those in the reflector graphite, on the reactivity of the reactor were considered in detail. The first criticality experiment showed that the predicted values and the experimental results were in good agreement, with a relative error of less than 1%, which means the prediction was successful
International Nuclear Information System (INIS)
Golovko, Yury; Rozhikhin, Yevgeniy; Tsibulya, Anatoly; Koscheev, Vladimir
2008-01-01
Experiments with plutonium, low enriched uranium and uranium-233 from the ICSBEP Handbook are considered in this paper. Among these experiments, only those were selected which seem to be the most relevant to the evaluation of the uncertainty of the critical mass of mixtures of plutonium or low enriched uranium or uranium-233 with light water. All selected experiments were examined, covariance matrices of criticality uncertainties were developed, and some uncertainties were revised. A statistical analysis of these experiments was performed, and some contradictions were discovered and eliminated. An evaluation of the accuracy of prediction of criticality calculations was performed using the internally consistent set of experiments with plutonium, low enriched uranium and uranium-233 remaining after the statistical analyses. The application objects for the evaluation of calculational prediction of criticality were water-reflected spherical systems of homogeneous aqueous mixtures of plutonium or low enriched uranium or uranium-233 of different concentrations, which are simplified models of apparatus of the external fuel cycle. It is shown that the procedure allows a considerable reduction of the uncertainty in k eff caused by the uncertainties in neutron cross-sections. It is also shown that the results are practically independent of the initial covariance matrices of nuclear data uncertainties. (authors)
International Nuclear Information System (INIS)
Dumonteil, E.; Le Peillet, A.; Lee, Y. K.; Petit, O.; Jouanne, C.; Mazzolo, A.
2006-01-01
The measurement of the stationarity of Monte Carlo fission source distributions in k eff calculations plays a central role in the ability to discriminate between fake and 'true' convergence (in the case of a high dominance ratio or of loosely coupled systems). Recent theoretical developments have been made in the study of source convergence diagnostics, using Shannon entropy. We will first recall those results, and we will then generalize them using the expression of Boltzmann entropy, highlighting the gain in terms of the various physical problems that we can treat. Finally we will present the results of several OECD/NEA benchmarks using the Tripoli-4 Monte Carlo code, enhanced with this new criterion. (authors)
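A minimal version of the entropy diagnostic reads: bin the fission source, form the Shannon entropy of the bin probabilities, and watch the cycle-by-cycle trace for stationarity. This is a bare sketch of the idea only; Tripoli-4's actual diagnostic, and the Boltzmann-entropy generalization of the paper, are more elaborate:

```python
import numpy as np

def shannon_entropy(source_counts):
    """H = -sum_j p_j ln p_j over a binned fission-source distribution.
    A flat source over n bins gives ln(n); a point source gives 0."""
    p = np.asarray(source_counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0.0]                    # 0 ln 0 is taken as 0
    return float(-np.sum(p * np.log(p)))

def looks_stationary(entropies, window=10, tol=0.01):
    """Crude stationarity check: the entropy trace over the last
    `window` cycles stays within `tol` of its mean."""
    tail = np.asarray(entropies[-window:], dtype=float)
    return bool(np.all(np.abs(tail - tail.mean()) < tol))
```

In practice the entropy trace is recorded every cycle; active (scoring) cycles begin only once the trace has flattened, which is precisely the failure mode the abstract warns about when convergence is judged too early.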
Evaluation of the accuracy of group calculations for reactor criticality perturbations
International Nuclear Information System (INIS)
Dulin, V.A.
1985-09-01
For calculations of criticality perturbations it is necessary to use group constants which take into account not only the peculiarities of the intra-group flux but also those of the behaviour of the adjoint flux. A new method is proposed for obtaining bilinear-averaged constants of this type on the basis of the resonance characteristics of the importance function and the difference between the value of neutron importance at the group boundary and the group-averaged value (the b^(+j) factor). A number of calculations are made for the ratios of reactivity coefficients in the BFS assemblies. Values have been obtained for the difference between the results of calculation with bilinear-averaged constants and those averaged conventionally (over flux). In many cases, this difference exceeds the experimental error. (author)
Intact and Degraded Component Criticality Calculations of N Reactor Spent Nuclear Fuel
International Nuclear Information System (INIS)
L. Angers
2001-01-01
The objective of this calculation is to perform intact and degraded mode criticality evaluations of the Department of Energy's (DOE) N Reactor Spent Nuclear Fuel codisposed in a 2-Defense High-Level Waste (2-DHLW)/2-Multi-Canister Overpack (MCO) Waste Package (WP) and emplaced in a monitored geologic repository (MGR) (see Attachment I). The scope of this calculation is limited to the determination of the effective neutron multiplication factor (k eff ) for both intact and degraded mode internal configurations of the codisposal waste package. This calculation will support the analysis that will be performed to demonstrate the technical viability for disposing of U-metal (N Reactor) spent nuclear fuel in the potential MGR
International Nuclear Information System (INIS)
Leszczynski, Francisco
2002-01-01
The IAEA-WIMS Library Update Project (WLUP) is in its final stage. The final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for analysis and plotting of results is described. Some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)
International Nuclear Information System (INIS)
Manturov, G.; Semenov, M.; Seregin, A.; Lykova, L.
2004-01-01
The BFS-62 critical experiments are currently used as a 'benchmark' for verification of IPPE codes and nuclear data, which have been used in the study of loading a significant amount of Pu in fast reactors. The BFS-62 experiments have been performed at the BFS-2 critical facility of IPPE (Obninsk). The experimental program has been arranged in such a way that the effect of replacement of the uranium dioxide blanket by the steel reflector, as well as the effect of replacing UOX by MOX, on the main characteristics of the reactor model was studied. A wide experimental program, including measurements of the criticality (keff), spectral indices, radial and axial fission rate distributions, control rod mock-up worth, sodium void reactivity effect (SVRE) and some other important nuclear physics parameters, was fulfilled in the core. A series of 4 BFS-62 critical assemblies has been designed for studying the changes in BN-600 reactor physics from the existing state to a hybrid core. All the assemblies model the reactor state prior to refueling, i.e. with all control rod mock-ups withdrawn from the core. The following items are chosen for the analysis in this report: description of the critical assembly BFS-62-3A as the 3rd assembly in the series of 4 BFS critical assemblies studying the BN-600 reactor with MOX-UOX hybrid zone and steel reflector; development of a 3D homogeneous calculation model for the BFS-62-3A critical experiment as the mock-up of the BN-600 reactor with hybrid zone and steel reflector; evaluation of the measured nuclear physics parameters keff and SVRE (sodium void reactivity effect); preparation of adjusted equivalent measured values for keff and SVRE. The main series of calculations is performed using the 3D HEX-Z diffusion code TRIGEX in 26 groups, with the ABBN-93 cross-section set. In addition, precise calculations are made, in 299 groups and Ps-approximation in scattering, by the Monte-Carlo code MMKKENO and the discrete ordinate code TWODANT. All calculations are based on the common system
International Nuclear Information System (INIS)
Medrano Asensio, Gregorio.
1976-06-01
A detailed power distribution calculation in a large power reactor requires the solution of the multigroup 3D diffusion equations. Using the finite difference method, this computation is too expensive to be performed for design purposes. This work is devoted to the single-channel continuous synthesis method: the choice of the trial functions and the determination of the mixing functions are discussed in detail; 2D and 3D results are presented. The method is applied to the calculation of the IAEA 'Benchmark' reactor and the results obtained are compared with a finite element solution and with published results [fr
International Nuclear Information System (INIS)
Sakamoto, Yukio
2001-01-01
The information about neutrons around the JCO site during the criticality accident is limited to survey results obtained with neutron rem counters during the accident and to activation data measured very near the test facility after the accident was terminated. This causes a large uncertainty in dose estimation by detailed shielding calculation codes. On the other hand, environmental activity data measured by radiochemical researchers included information about fast neutrons inside the JCO site and thermal neutrons up to 1 km from the test facility. It is important to grasp the actual circumstances and to examine the executed evaluation of the criticality accident as scientifically as possible. Therefore, it is meaningful for researchers from different fields to cooperate and exchange information. In the Technical Divisions of Radiation Science and Technology in the Atomic Energy Society of Japan, the information about neutron spectra was released from their home page, and three groups, JAERI/CRC, Sumitomo Atomic Energy Industry and Nuclear Power Engineering Corp. (NUPEC)/Mitsubishi Research Institute Inc. (MRI), tried the shielding calculation with the Monte Carlo code MCNP-4B. The procedures and main results of the shielding calculations are reviewed in this report. The main difference among the shielding calculations by the three groups was the density and water content of the autoclaved light-weight concrete (ALC) of the wall and ceiling. From the result by NUPEC/MRI, it was estimated that the water content in the ALC was from 0.05 g/cm 3 to 0.10 g/cm 3 . The behavior of the dose equivalent attenuation obtained by the shielding calculations was very similar to the measured data from 250 m to 1,700 m obtained by survey meters, TLDs and monitoring posts. For more exact dose estimation, a more detailed examination of the density and water content of the ALC will be needed. (author)
Power reactor pressure vessel benchmarks
International Nuclear Information System (INIS)
Rahn, F.J.
1978-01-01
A review is given of the current status of experimental and calculational benchmarks for use in understanding the radiation embrittlement effects in the pressure vessels of operating light water power reactors. The requirements of such benchmarks for application to pressure vessel dosimetry are stated. Recent developments in active and passive neutron detectors sensitive in the ranges of importance to embrittlement studies are summarized, and recommendations for improvements in the benchmarks are made. (author)
Review for session K - benchmarks
International Nuclear Information System (INIS)
McCracken, A.K.
1980-01-01
Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of Benchmarks and will be considered individually
Benchmark for Strategic Performance Improvement.
Gohlke, Annette
1997-01-01
Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)
Reference calculations on critical assemblies with Apollo2 code working with a fine multigroup mesh
International Nuclear Information System (INIS)
Aggery, A.
1999-12-01
The objective of this thesis is to add to the multigroup transport code APOLLO2 the capability to perform deterministic reference calculations, for any type of reactor, using a very fine energy mesh of several thousand groups. This new reference tool allows us to validate the self-shielding model used in industrial applications, to perform depletion calculations, differential effect calculations and critical buckling calculations, and to evaluate precisely the data required by the self-shielding model. Originally, APOLLO2 was designed to perform routine calculations with energy meshes of around one hundred groups. That is why, in the current format of the cross-section libraries, almost every value of the multigroup energy transfer matrix is stored. As this format is not convenient for a large number of groups (in terms of memory size), we had to devise a new format for the removal matrices and consequently modify the code. In the new format, only some values of the removal matrices are kept (these values depend on a chosen reconstruction precision), the other ones being reconstructed by linear interpolation, which reduces the size of these matrices. We then had to show that APOLLO2 working with a fine multigroup mesh has the capability to perform reference calculations on any assembly geometry. For that, we successfully carried out the validation with several calculations in which we compared APOLLO2 results (obtained with the universal mesh of 11276 groups) to results obtained with Monte Carlo codes (MCNP, TRIPOLI4). Physical analyses carried out with this new tool have been very fruitful and show a great potential for such an R and D tool. (author)
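The removal-matrix storage idea can be sketched as follows: keep a subset of values and rebuild the rest by linear interpolation, trading a small reconstruction error for a large size reduction. The stride-based selection below is an illustrative assumption; the actual APOLLO2 format chooses which values to keep from a precision criterion:

```python
import numpy as np

def compress(row, stride):
    """Keep every `stride`-th entry of a (smooth) removal-matrix row,
    plus the last entry so interpolation covers the full range."""
    idx = np.unique(np.r_[np.arange(0, row.size, stride), row.size - 1])
    return idx, row[idx]

def reconstruct(idx, vals, n):
    """Rebuild the full row by linear interpolation between kept values."""
    return np.interp(np.arange(n), idx, vals)
```

For a smoothly varying row, a stride of 10 cuts storage by an order of magnitude while keeping the pointwise reconstruction error far below typical cross-section uncertainties.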
Energy Technology Data Exchange (ETDEWEB)
Constantin, M; Sawkey, D; Johnsen, S; Hsu, H [Varian Medical Systems, Palo Alto, CA (United States)
2014-06-01
Purpose: To validate the physics parameters of a Monte Carlo model for patient plane leakage calculations on the 6MV Unique linac by comparing the simulations against IEC patient plane leakage measurements. The benchmarked model can further be used for shielding design optimization, to predict leakage in the proximity of intended treatment fields, reduce the system weight and cost, and improve components reliability. Methods: The treatment head geometry of the Unique linac was simulated in Geant4 (v9.4.p02 with “Opt3” standard electromagnetic physics list) based on CAD drawings of all collimation and shielding components projected from the target to the area within 2m from isocenter. A 4×4m2 scorer was inserted 1m from the target in the patient plane and multiple phase space files were recorded by performing a 40-node computing cluster simulation on the EC2 cloud. The photon energy fluence was calculated relative to the value at isocenter for a 10×10cm2 field using 10×10mm2 bins. Tungsten blocks were parked accordingly to represent MLC120. The secondary particle contamination to patient plane was eliminated by “killing” those particles prior to the primary collimator entrance using a “kill-plane”, which represented the upper head shielding components not being modeled. Both IEC patient-plane leakage and X/Y-jaws transmission were simulated. Results: The contribution of photons to energy fluence was 0.064% on average, in excellent agreement with the experimental data available at 0.5, 1.0, and 1.5m from isocenter, characterized by an average leakage of 0.045% and a maximum leakage of 0.085%. X- and Y-jaws transmissions of 0.43% and 0.44% were found in good agreement with measurements of 0.48% and 0.43%, respectively. Conclusion: A Geant4 model based on energy fluence calculations for the 6MV Unique linac was created and validated using IEC patient plane leakage measurements. The “kill-plane” has effectively eliminated electron contamination to
Static analysis of material testing reactor cores: critical core calculations
International Nuclear Information System (INIS)
Nawaz, A. A.; Khan, R. F. H.; Ahmad, N.
1999-01-01
A methodology has been described to study the effect of the number of fuel plates per fuel element on critical cores of Material Testing Reactors (MTR). When the number of fuel plates in a fuel element is varied while keeping the fuel loading per fuel element constant, the fuel density in the fuel plates varies. Due to this variation, the water channel width needs to be recalculated. For a given number of fuel plates, the water channel width was determined by optimizing k-infinity using the transport theory lattice code WIMS-D/4. The dimensions of the fuel element and the control fuel element were determined using this optimized water channel width. For the calculated dimensions, the critical cores were determined for the given number of fuel plates per fuel element by using the three-dimensional diffusion theory code CITATION. The optimization of the water channel width gives rise to a channel width of 2.1 mm when the number of fuel plates is 23 with 290 g 235U fuel loading, which is the same as in the case of Pakistan Reactor-1 (PARR-1). Although a decrease in the number of fuel plates results in an increase in the optimal water channel width, the thickness of the standard fuel element (SFE) and control fuel element (CFE) decreases, giving rise to compact critical and equilibrium cores. The criticality studies of PARR-1 are in good agreement with the predictions
Calculation and analysis of burnup and optimum core design in accelerator driven sub-critical system
International Nuclear Information System (INIS)
Wang Yuwei; Yang Yongwei; Cui Pengfei
2011-01-01
The design premises of the accelerator driven sub-critical system (ADS) are that it remains subcritical under accident conditions, that the largest k eff change with burnup time is less than 1.5%, and that the cladding material, HT9 steel, can withstand the maximum radiation damage. The core fuel area is divided into a fuel transmutation area and a fuel multiplication area, and the fuel transmutation area maintains the same fuel composition throughout the whole process. Through analysis of the fuel composition, the shape of the core layout, the power distribution, etc., and assuming an outer-to-inner Pu enrichment ratio in the range of 1.0-1.5, the fuel components of the fuel multiplication area were adjusted. The time evolution of k eff was calculated by COUPLED2, which couples MCNP and ORIGEN. At the same time the power peaking factors, the minor actinide transmutation rate (to be maximized), and the burnup were considered. A sub-critical system suitable for engineering practice was established. (authors)
International Nuclear Information System (INIS)
Griesheimer, D. P.; Toth, B. E.
2007-01-01
A novel technique for accelerating the convergence rate of the iterative power method for solving eigenvalue problems is presented. Smoothed Residual Acceleration (SRA) is based on a modification to the well known fixed-parameter extrapolation method for power iterations. In SRA the residual vector is passed through a low-pass filter before the extrapolation step. Filtering limits the extrapolation to the lower order Eigenmodes, improving the stability of the method and allowing the use of larger extrapolation parameters. In simple tests SRA demonstrates superior convergence acceleration when compared with an optimal fixed-parameter extrapolation scheme. The primary advantage of SRA is that it can be easily applied to Monte Carlo criticality calculations in order to reduce the number of discard cycles required before a stationary fission source distribution is reached. A simple algorithm for applying SRA to Monte Carlo criticality problems is described. (authors)
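The extrapolated power iteration described in this abstract can be sketched in a few lines. In the sketch below the low-pass filter is a simple three-point moving average, and the test matrix, extrapolation parameter and iteration count are illustrative assumptions, not values from the paper:

```python
import numpy as np

def power_iteration_sra(A, alpha=0.5, n_iter=500):
    """Power iteration with a smoothed-residual extrapolation step
    (a sketch of the SRA idea; the 3-point moving average stands in
    for the low-pass filter, and alpha is an assumed parameter)."""
    x = np.ones(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(n_iter):
        y = A @ x
        lam = x @ y                      # Rayleigh-quotient eigenvalue estimate
        y /= np.linalg.norm(y)
        r = y - x                        # residual of the power step
        r = np.convolve(r, [0.25, 0.5, 0.25], mode="same")  # low-pass filter
        x = y + alpha * r                # extrapolate along the smoothed residual
        x /= np.linalg.norm(x)
    return lam

# 1-D diffusion-like test matrix whose dominant eigenvector is smooth
n = 20
A = np.diag(2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
print(power_iteration_sra(A))  # approaches the largest eigenvalue of A
```

Because the filter damps the high-order modes of the residual, the extrapolation acts mainly on the slowly converging low-order modes, which is what permits a larger extrapolation parameter without instability.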
Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1
International Nuclear Information System (INIS)
Van Der Marck, S. C.
2012-01-01
Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category, and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, W. (authors)
Monte Carlo sampling on technical parameters in criticality and burn-up-calculations
International Nuclear Information System (INIS)
Kirsch, M.; Hannstein, V.; Kilger, R.
2011-01-01
The increase in computing power over recent years allows the introduction of Monte Carlo sampling techniques for sensitivity and uncertainty analyses in criticality safety and burn-up calculations. With these techniques it is possible to assess the influence of a variation of the input parameters, within their measured or estimated uncertainties, on the final value of a calculation. The probabilistic result of a statistical analysis can thus complement the traditional method of determining both the nominal (best estimate) and the bounding case of the neutron multiplication factor (k eff) in criticality safety analyses, e.g. by calculating the uncertainty of k eff or tolerance limits. Furthermore, the sampling method provides a way to derive sensitivity information, i.e. it allows determining which of the uncertain input parameters contribute the most to the uncertainty of the system. The application of Monte Carlo sampling methods has become common practice in both industry and research institutes. Within this approach, two main paths are currently under investigation: the variation of nuclear data used in a calculation and the variation of technical parameters such as manufacturing tolerances. This contribution concentrates on the latter case. The newly developed SUnCISTT (Sensitivities and Uncertainties in Criticality Inventory and Source Term Tool) is introduced. It defines an interface to the well-established GRS tool for sensitivity and uncertainty analyses, SUSA, which provides the necessary statistical methods for sampling-based analyses. The interfaced codes are programs used to simulate aspects of the nuclear fuel cycle, such as the criticality safety analysis sequence CSAS5 of the SCALE code system, developed by Oak Ridge National Laboratory, or the GRS burn-up system OREST. In the following, first the implementation of the SUnCISTT is presented; then, results of its application in an exemplary evaluation of the neutron
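The sampling workflow described in this abstract can be illustrated with a toy model. Everything below is an assumed illustration, not GRS data: the parameter distributions, the linear response surface standing in for a full CSAS5/SCALE run, and the normal-theory 95%/95% tolerance factor (approximately 1.73 for 1000 samples):

```python
import numpy as np

rng = np.random.default_rng(2011)
n = 1000

# Sample technical parameters within assumed manufacturing tolerances
radius = rng.normal(4.60, 0.02, n)      # pellet radius [mm], normal tolerance
enrich = rng.uniform(4.90, 5.10, n)     # enrichment [wt% U-235], uniform tolerance

def keff_model(radius, enrich):
    """Toy linear response surface standing in for a criticality code
    run (e.g. CSAS5); coefficients are invented for illustration."""
    return 0.90 + 0.015 * (radius - 4.60) + 0.018 * (enrich - 5.00)

k = keff_model(radius, enrich)

# One-sided normal-theory 95%/95% tolerance limit (factor ~1.73 for n = 1000)
k_upper = k.mean() + 1.73 * k.std(ddof=1)

# Sensitivity: which uncertain input drives the k-eff uncertainty?
sens = {name: np.corrcoef(x, k)[0, 1]
        for name, x in [("radius", radius), ("enrich", enrich)]}
print(k.mean(), k_upper, sens)
```

The correlation coefficients play the role of the sensitivity information mentioned in the abstract: here the enrichment tolerance dominates the k-eff spread because its effective contribution to the output standard deviation is larger.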
Critical experiment tests of bowing and expansion reactivity calculations for LMRs
International Nuclear Information System (INIS)
Schaefer, R.W.
1988-01-01
Experiments done in several LMR-type critical assemblies simulated core axial expansion, core radial expansion and bowing, coolant expansion, and control driveline expansion. For the most part, new experimental techniques were developed to perform these experiments. Calculations of the experiments basically used design-level methods, except when it was necessary to investigate complexities peculiar to the experiments. It was found that these feedback reactivities generally are overpredicted, but the predictions are within 30% of the experimental values. 14 refs., 2 figs., 4 tabs
Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui
2004-01-01
A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, the Intel Math Kernel Library (MKL), the GOTO numerical library, and the AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as the Intel Fortran compiler (ifc/efc) 7.1 and the PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about a 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when utilizing the original unmodified optimization options of the compiler enclosed in the software. Nevertheless, if extensive compiler tuning options are used, the speed can be increased by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction set (SSE2) is also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler performs better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and efficiency in the CL2 mode is a further 2.6% higher than that of the CL2.5 mode. FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resultant performance impact suggests that the IA64 and AMD64 architectures are able to deliver significantly higher throughput than IA32, which is consistent with the SPECfp_rate2000 benchmarks.
Investigating the minimum achievable variance in a Monte Carlo criticality calculation
Energy Technology Data Exchange (ETDEWEB)
Christoforou, Stavros; Eduard Hoogenboom, J. [Delft University of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)
2008-07-01
The sources of variance in a Monte Carlo criticality calculation are identified and their contributions analyzed. A zero-variance configuration is initially simulated using analytically calculated adjoint functions for biasing. From there, the various sources are analyzed. It is shown that the minimum threshold comes from the fact that the fission source is approximated. In addition, the merits of a simple variance reduction method, such as implicit capture, are shown when compared to an analog simulation. Finally, it is shown that when non-exact adjoint functions are used for biasing, the variance reduction is rather insensitive to the quality of the adjoints, suggesting that the generation of the adjoints should have as low a CPU cost as possible, in order to offset the CPU cost of implementing the biasing of a simulation. (authors)
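The analog vs implicit-capture comparison in this abstract can be made concrete with a toy 1-D slab transmission problem. The geometry, cross sections and roulette threshold below are invented for illustration; implicit capture replaces the absorption game by a deterministic weight reduction at each collision:

```python
import numpy as np

def slab_transmission(n_hist, L=2.0, sig_a=0.5, sig_s=0.5, implicit=False, seed=1):
    """Per-history transmission scores through a 1-D slab (toy model).
    Analog mode kills particles at absorption; implicit-capture mode
    multiplies the weight by sig_s/sig_t instead, with Russian roulette
    below an assumed weight cutoff."""
    rng = np.random.default_rng(seed)
    sig_t = sig_a + sig_s
    scores = np.zeros(n_hist)
    for i in range(n_hist):
        x, mu, w = 0.0, 1.0, 1.0          # start on left face, travelling right
        while True:
            d = -np.log(1.0 - rng.random()) / sig_t   # flight length [mfp]
            x += d * mu
            if x >= L:                     # leaked through the far face
                scores[i] = w
                break
            if x < 0.0:                    # leaked back out the near face
                break
            if implicit:
                w *= sig_s / sig_t         # implicit capture: reduce weight
                if w < 1e-3:               # Russian roulette on low weights
                    if rng.random() < 0.5:
                        break
                    w *= 2.0
            elif rng.random() < sig_a / sig_t:
                break                      # analog: particle absorbed
            mu = 2.0 * rng.random() - 1.0  # isotropic scatter
    return scores

analog = slab_transmission(20000)
implicit = slab_transmission(20000, implicit=True, seed=2)
print(analog.mean(), analog.var())        # 0/1 scores: Bernoulli variance
print(implicit.mean(), implicit.var())    # weighted scores: lower variance
```

Both estimators are unbiased for the transmission probability, but the analog scores are 0/1 indicators while the implicit-capture scores are spread over fractional weights, which lowers the per-history variance.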
Calculational criticality analyses of 10- and 20-MW UF6 freezer/sublimer vessels
International Nuclear Information System (INIS)
Jordan, W.C.
1993-02-01
Calculational criticality analyses have been performed for 10- and 20-MW UF6 freezer/sublimer vessels. The freezer/sublimers have been analyzed over a range of conditions that encompass normal operation and abnormal conditions. The effects of HF moderation of the UF6 in each vessel have been considered for uranium enriched between 2 and 5 wt % 235U. The results indicate that the nuclearly safe enrichments originally established for the operation of a 10-MW freezer/sublimer, based on a hydrogen-to-uranium moderation ratio of 0.33, are acceptable. If strict moderation control can be demonstrated for hydrogen-to-uranium moderation ratios that are less than 0.33, then the enrichment limits for the 10-MW freezer/sublimer may be increased slightly. The calculations performed also allow safe enrichment limits to be established for a 20-MW freezer/sublimer under moderation control
Calculation and analysis for a series of enriched uranium bare sphere critical assemblies
International Nuclear Information System (INIS)
Yang Shunhai
1994-12-01
The imported MARIA reactor fuel assembly program system was adapted to the CYBER 825 computer at the China Institute of Atomic Energy and extensively used for a series of enriched uranium bare sphere critical assemblies. The MARIA auxiliary resonance-modification program MA was designed to take account of the effects of resonance fission and absorption on the calculated results: the multigroup constants in the library attached to the MARIA program are revised based on the U.S. Evaluated Nuclear Data File ENDF/B-IV, and the related nuclear data files are replaced. The reactor geometric buckling and multiplication factor are then given in the output tapes. The accuracy of the calculated results is comparable with those of the Monte Carlo and Sn methods, and agreement with experimental results is within 1%. (5 refs., 4 figs., 3 tabs.)
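The two output quantities mentioned in this abstract, geometric buckling and multiplication factor, have a compact one-group diffusion-theory form for a bare sphere. The sketch below uses that textbook relation with illustrative, assumed one-group constants, not the assemblies actually analysed with MARIA:

```python
import math

def bare_sphere(k_inf, M2, R, d=0.0):
    """One-group diffusion estimate for a bare sphere:
    Bg^2 = (pi/(R+d))^2 and k_eff = k_inf / (1 + M2*Bg^2),
    with migration area M2 [cm^2] and extrapolation distance d [cm]."""
    Bg2 = (math.pi / (R + d)) ** 2
    return k_inf / (1.0 + M2 * Bg2), Bg2

def critical_radius(k_inf, M2, d=0.0):
    """Radius at which k_eff = 1: the geometric buckling equals the
    material buckling Bm^2 = (k_inf - 1)/M2."""
    Bm2 = (k_inf - 1.0) / M2
    return math.pi / math.sqrt(Bm2) - d

# Illustrative (assumed) one-group constants for a fast HEU sphere
Rc = critical_radius(k_inf=2.2, M2=30.0, d=2.0)
keff, Bg2 = bare_sphere(k_inf=2.2, M2=30.0, R=Rc, d=2.0)
print(Rc, keff)   # keff = 1 at the critical radius, by construction
```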
Three-dimensional RAMA fluence methodology benchmarking
International Nuclear Information System (INIS)
Baker, S. P.; Carter, R. G.; Watkins, K. E.; Jones, D. B.
2004-01-01
This paper describes the benchmarking of the RAMA Fluence Methodology software, which was performed in accordance with U.S. Nuclear Regulatory Commission Regulatory Guide 1.190. The RAMA Fluence Methodology has been developed by TransWare Enterprises Inc. through funding provided by the Electric Power Research Institute, Inc. (EPRI) and the Boiling Water Reactor Vessel and Internals Project (BWRVIP). The purpose of the software is to provide an accurate method for calculating neutron fluence in BWR pressure vessels and internal components. The Methodology incorporates a three-dimensional deterministic transport solution with flexible arbitrary geometry representation of reactor system components, previously available only with Monte Carlo solution techniques. Benchmarking was performed against measurements obtained from three standard benchmark problems, the Pool Criticality Assembly (PCA), VENUS-3, and H. B. Robinson Unit 2 benchmarks, and against flux wire measurements obtained from two BWR nuclear plants. The calculated-to-measured (C/M) ratios range from 0.93 to 1.04, demonstrating the accuracy of the RAMA Fluence Methodology in predicting neutron flux, fluence, and dosimetry activation. (authors)
On the thermal scattering law data for reactor lattice calculations
International Nuclear Information System (INIS)
Trkov, A.; Mattes, M.
2004-01-01
Thermal scattering law data for hydrogen bound in water, hydrogen bound in zirconium hydride and deuterium bound in heavy water have been re-evaluated. The influence of the thermal scattering law data on critical lattices has been studied with detailed Monte Carlo calculations and a summary of results is presented for a numerical benchmark and for the TRIGA reactor benchmark. Systematics for a large sequence of benchmarks analysed with the WIMS-D lattice code are also presented. (author)
Benchmarking and Performance Management
Directory of Open Access Journals (Sweden)
Adrian TANTAU
2010-12-01
Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment given the basic characteristics of the firms and their markets that are expected to drive their profitability: firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management individuals and groups to continuously improve their firm's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, consequently determine which performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent due to operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark performance.
Directory of Open Access Journals (Sweden)
Daniela Niculescu
2016-02-01
Full Text Available Organisational culture and employee engagement have been the focus of recent broad-based research efforts. Adding this concern to the revealed importance of performance indicators on human capital, and the fact that their use is gaining momentum as a way to attach financial values to knowledge management assets, it becomes more and more critical to measure human capital value. It becomes key for Romanian FSOs' managers to consider that both human and financial values focus on adding value in every process and function in the organisation, and to perpetuate organisational profitability through the corporate culture, which is a powerful factor that helps a company engage talented people. There is substantial concern about using ROI on Learning and Development programmes, but whilst this is still declared, Romanian FSOs do not yet have a consistent method to measure it. This study shows the criticality of connecting people to financial results, and data analysis suggests that ROI calculation has a positive impact on creating and fostering a powerful organisational culture and that employees' awareness of ROI values within their organisation has a powerful effect on their sense of engagement. Our findings have a practical implication for the analysed industry by shaping a formal ROI measurement mechanism blueprint, an ROI calculation model for Romanian FSOs, in the form of a mechanism that could be employed when designing an ROI Methodology for Romanian FSOs.
Three calculations of free cortisol versus measured values in the critically ill.
Molenaar, Nienke; Groeneveld, A B Johan; de Jong, Margriet F C
2015-11-01
To investigate the agreement between the calculated free cortisol levels according to the widely applied Coolens and adjusted Södergård equations and measured levels in the critically ill. A prospective study in a mixed intensive care unit. We consecutively included 103 patients with treatment-insensitive hypotension in whom an adrenocorticotropic hormone (ACTH) test (250 μg) was performed. Serum total and free cortisol (equilibrium dialysis), corticosteroid-binding globulin and albumin were assessed. Free cortisol was estimated by the Coolens method (C) and two adjusted Södergård (S1 and S2) equations. Bland-Altman plots were made. The bias for absolute (t = 0, 30 and 60 min after ACTH injection) cortisol levels was 38, -24, and 41 nmol/L when the C, S1 and S2 equations were used, with 95% limits of agreement between -65 to 142, -182 to 135, and -57 to 139 nmol/L and percentage errors of 66, 85, and 64%, respectively. Bias for delta (peak minus baseline) cortisol was 14, -31 and 16 nmol/L, with 95% limits of agreement between -80 to 108, -157 to 95, and -74 to 105 nmol/L, and percentage errors of 107, 114, and 100% for the C, S1 and S2 equations, respectively. Calculated free cortisol levels have too high a bias and imprecision to allow acceptable use in the critically ill. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
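A Coolens-style free cortisol calculation solves a binding mass balance for the free fraction. The sketch below derives a quadratic from the mass balance T = U(1+N) + K·U·G/(1+K·U), using the commonly quoted constants K = 3×10⁷ L/mol and N = 1.74; it illustrates the general approach, not necessarily the exact adjusted equations evaluated in the paper:

```python
import math

K = 3.0e7   # CBG affinity constant [L/mol] (commonly quoted; assumed here)
N = 1.74    # ratio of albumin-bound to free cortisol (commonly quoted; assumed)

def free_cortisol_nmol(total_nmol, cbg_nmol):
    """Free cortisol U from total cortisol T and CBG G (all in nmol/L).
    The mass balance T = U*(1+N) + K*U*G/(1+K*U) rearranges to
    a*U^2 + b*U - T = 0 with a = K*(1+N) and b = (1+N) + K*(G-T);
    the positive root is the free concentration."""
    T, G = total_nmol * 1e-9, cbg_nmol * 1e-9    # convert to mol/L
    a = K * (1.0 + N)
    b = (1.0 + N) + K * (G - T)
    U = (-b + math.sqrt(b * b + 4.0 * a * T)) / (2.0 * a)
    return U * 1e9                                # back to nmol/L

u = free_cortisol_nmol(total_nmol=400.0, cbg_nmol=700.0)
print(u)  # free cortisol, a small fraction of the 400 nmol/L total
```

The discriminant is always positive (the constant term is -T < 0), so the quadratic has exactly one positive root.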
DEFF Research Database (Denmark)
Agrell, Per J.; Bogetoft, Peter
2017-01-01
Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...
Critical groups vs. representative person: dose calculations due to predicted releases from USEXA
Energy Technology Data Exchange (ETDEWEB)
Ferreira, N.L.D., E-mail: nelson.luiz@ctmsp.mar.mil.br [Centro Tecnologico da Marinha (CTM/SP), Sao Paulo, SP (Brazil); Rochedo, E.R.R., E-mail: elainerochedo@gmail.com [Instituto de Radiprotecao e Dosimetria (lRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Mazzilli, B.P., E-mail: mazzilli@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2013-07-01
The critical group of the Centro Experimental Aramar (CEA) site was previously defined based on the effluent releases to the environment resulting from the facilities already operational at CEA. In this work, effective doses are calculated for members of the critical group considering the predicted potential uranium releases from the Uranium Hexafluoride Production Plant (USEXA). Basically, this work studies the behavior of the resulting doses in relation to the type of habit data used in the analysis, and two distinct situations are considered: (a) the utilization of average values obtained from official institutions (IBGE, IEA-SP, CNEN, IAEA) and from the literature; and (b) the utilization of the 95th percentile of the values derived from distributions fit to the obtained habit data. The first option corresponds to the way data was used in the definition of the critical group of CEA in former assessments, while the second corresponds to the use of data in deterministic assessments, as recommended by ICRP to estimate doses to the so-called 'representative person'. (author)
Monte Carlo benchmarking: Validation and progress
International Nuclear Information System (INIS)
Sala, P.
2010-01-01
Document available in abstract form only. Full text of publication follows: Calculational tools for radiation shielding at accelerators are faced with new challenges from the present and next generations of particle accelerators. All the details of particle production and transport play a role when dealing with huge power facilities, therapeutic ion beams, radioactive beams and so on. Besides the traditional calculations required for shielding, activation predictions have become an increasingly critical component. Comparison and benchmarking with experimental data is obviously mandatory in order to build up confidence in the computing tools, and to assess their reliability and limitations. Thin target particle production data are often the best tools for understanding the predictive power of individual interaction models and improving their performances. Complex benchmarks (e.g. thick target data, deep penetration, etc.) are invaluable in assessing the overall performances of calculational tools when all ingredients are put at work together. A review of the validation procedures of Monte Carlo tools will be presented with practical and real life examples. The interconnections among benchmarks, model development and impact on shielding calculations will be highlighted. (authors)
Conservatism in SRS Criticality Alarm System 12 Rad Zone Calculations - How Much is Enough?
International Nuclear Information System (INIS)
Yates, K.R.
2002-01-01
Savannah River Site (SRS) uses two methods (i.e., Approximate Method and MCNP) of calculating the 12-rad zone. The reasons for the two-tier approach are described in Ref. 1 and 2. Lately, there have been occasions in which the use of either the Approximate Method (AM) or MCNP3 calculations indicated potential facility impacts. For example, one or both methods may indicate that a 12-rad zone extends outside of relatively thick shielding, or extends to the roof of a facility, or extends through shielding to part of a stairwell. In such cases, a criticality alarm system may have to be installed to protect workers in a small, localized area from a potential dose that is not substantially greater than 12 rad in air. But, is the potential dose really greater than 12 rad in air? A subcommittee was appointed to look into the two 12-rad zone calculation methods for the purpose of identifying items contributing to over-conservatism and under-conservatism, and to recommend a path forward
Coupled core criticality calculations with control rods located in the central reflector region
Energy Technology Data Exchange (ETDEWEB)
Sobhy, M [Reactor Department, Nuclear Research Center, Inshas (Egypt)
1995-10-01
The reactivity of a coupled core is controlled by a set of control rods distributed in the central reflector region. The reactor contains two compact cores cooled and moderated by light water. The control rods are designed to have reactivity worths sufficient to start up, control and shut down the coupled system. Each core in the coupled system is subcritical on its own, without any absorber; each core needs the other to sustain the nuclear chain reaction and approach criticality. In this case, each core is considered clean, which is suitable for a research reactor with low flux disturbance and better neutron economy, in addition to the advantage of eliminating the cut-corner fuel baskets. This facilitates in-core fuel management with identical fuel baskets, hot spots disappear, and a good heat transfer process results. The excess reactivity and the shutdown margin are calculated; the reflector coupling region gives sufficient area for the control rods. The fluctuations of reactivity for the coupled core are calculated by the noise analysis technique and compared with those for a rodded core. The results show low reactivity perturbation associated with the coupled core.
International Nuclear Information System (INIS)
Dos Santos, Adimir; Siqueira, Paulo de Tarso D.; Andrade e Silva, Graciete Simões; Grant, Carlos; Tarazaga, Ariel E.; Barberis, Claudia
2013-01-01
In 2008 the Atomic Energy National Commission (CNEA) of Argentina and the Brazilian Institute of Energetic and Nuclear Research (IPEN), under the frame of the Nuclear Energy Argentine Brazilian Agreement (COBEN), included, among many others, the project “Validation and Verification of Calculation Methods used for Research and Experimental Reactors”. At that time, it was established that the validation was to be performed with models implemented in the deterministic codes HUEMUL and PUMA (cell and reactor codes) developed by CNEA, and those implemented in MCNP by CNEA and IPEN. The necessary data for these validations would correspond to theoretical-experimental reference cases in the research reactor IPEN/MB-01 located in São Paulo, Brazil. From the Argentine side, the staff of the Reactor and Nuclear Power Studies group (SERC) of CNEA performed calculations with deterministic models (HUEMUL-PUMA) and probabilistic methods (MCNP), modeling a great number of physical situations of the reactor which had previously been studied and modeled by members of the Center of Nuclear Engineering of IPEN, whose results were extensively provided to CNEA. In this paper, comparisons of calculated and experimental results for critical configurations, temperature coefficients, kinetic parameters, and fission-rate spatial distributions evaluated with probabilistic models are shown. (author)
International Nuclear Information System (INIS)
Etemad, M.A.
1981-04-01
The one-dimensional discrete ordinates code ANISN-F was used to calculate the thermal neutron flux distribution in water from a Ra-Be neutron source. The calculations were performed in order to investigate the different possibilities of the code as well as to verify the results of the calculations by comparison with corresponding experimental data. Two different group cross-section libraries were used in the calculations and conclusions were drawn on the adequacy of these libraries for a fixed-source type calculation. Furthermore, criticality calculations were performed for an infinite homogeneous slab of multiplying material using different angular and spatial approximations. The results of these calculations were then compared to the corresponding results previously obtained at this department by a different method and a different code. (author)
DEFF Research Database (Denmark)
Cismondi, Martin; Michelsen, Michael Locht
2007-01-01
A general strategy for global phase equilibrium calculations (GPEC) in binary mixtures is presented in this work, along with specific methods for calculation of the different parts involved. A Newton procedure using composition, temperature and volume as independent variables is used for calculation...
International Nuclear Information System (INIS)
Lima Barros, M. de.
1982-04-01
The multiplication factors of several low-enrichment systems (3.5% and 3.2% in the isotope 235U), aimed at the storage of fuel for ANGRA-I and ANGRA-II, were calculated by the Monte Carlo method using the computational code KENO-IV and the Hansen-Roach cross-section library with 16 energy groups. The Monte Carlo method is especially suitable for calculating the multiplication factor, because it is one of the most accurate solution models and allows the description of complex three-dimensional systems. Various sensitivity tests of this method have been done in order to present the most convenient way of working with the KENO-IV code. The criticality safety of the fissile material stores of the 'Fabrica de Elementos Combustiveis' has been analyzed through the Monte Carlo method. (Author) [pt
International Nuclear Information System (INIS)
Garcia, A.E.; Parkansky, D.G.
1993-01-01
In the Embalse nuclear power plant (CNE), the Regional Overpower Protection System, acting on Shutdown Systems number 1 and number 2, protects the reactor against overpower, whether caused by localized peaking or by a power increase in the reactor as a whole. This report summarizes the results of the critical channel power calculation for the time-average power configuration of the 380 reactor fuel channels. The final purpose of this work is to analyze and eventually modify the detector set points. Other reactor configurations are being analyzed. The report also presents a sensitivity analysis in order to evaluate potential sources of error and uncertainty which could affect the ROP performance. (author)
Energy Technology Data Exchange (ETDEWEB)
Hathout, A M [National Center for Nuclear Safety and Radiation Control, NC-NSRC, Atomic Energy Authority, Cairo (Egypt)
1996-03-01
The narrow resonance approximation is applicable for all but the low-energy resonances of the heaviest nuclides. It is of great importance in neutron calculations, since fertile isotopes do not undergo fission at resonance energies. The effect of overestimating the self-shielded group-averaged cross-section data for a given resonance nuclide can be fairly serious. In the present work, a detailed study and derivation of the self-shielding problem are carried out using the Hansen-Roach library, which is used for criticality safety analysis. The intermediate neutron flux spectrum is analyzed using the narrow resonance approximation. The resonance self-shielded values of various cross-sections are determined. 4 figs., 3 tabs.
Validating criticality calculations for spent fuel with 252Cf-source-driven noise measurements
International Nuclear Information System (INIS)
Mihalczo, J.T.; Krass, A.W.; Valentine, T.E.
1992-01-01
The 252 Cf-source-driven noise analysis method can be used for measuring the subcritical neutron multiplication factor k of arrays of spent light water reactor (LWR) fuel. This type of measurement provides a parameter that is directly related to the criticality state of arrays of LWR fuel. Measurements of this parameter can verify the criticality safety margins of spent LWR fuel configurations and thus could be a means of obtaining the information needed to justify burnup credit for spent LWR transportation/storage casks. The practicality of a measurement depends on the ability to install the hardware required to perform it. Source chambers containing the 252 Cf at the source intensity required for this application have been constructed, have operated successfully for ∼10 years, and can be fabricated to fit into control rod guide tubes of PWR fuel elements. Fission counters developed especially for spent-fuel measurements are available that would allow measurements of a special 3 x 3 spent fuel array and of a typical burnup credit rail cask with spent fuel in unborated water. Adding a moderator around these fission counters would allow measurements with the typical burnup credit rail cask and with the special 3 x 3 array in borated water. The recent work of Ficaro, who modified the KENO Va code to calculate by the Monte Carlo method the time sequences of pulses at two detectors near a fissile assembly arising from the fission chain multiplication process initiated by a 252 Cf source in the assembly, allows a direct computer calculation of the noise analysis data from this measurement method
International Nuclear Information System (INIS)
Hennebach, M.; Schnorrenberg, N.
2008-01-01
Criticality safety assessments are usually performed for fuel assembly models that are as generic as possible, to encompass small modifications in geometry that have no impact on criticality. Dealing with different radial enrichment distributions for a fuel assembly type, which is especially important for BWR fuel, poses more of a challenge, since this characteristic rather obviously influences the neutronic behaviour of the system. Nevertheless, the large variability of enrichment distributions makes it very desirable and even necessary to treat them in a generalized way, both to keep the criticality safety assessment from becoming too unwieldy and to avoid having to extend it every time a new variation comes up. To be viable, such a generic treatment has to be demonstrably covering, i.e. it must lead to a higher effective neutron multiplication factor k eff than any of the radial enrichment distributions it represents. Averaging the enrichment evenly over the fuel rods of the assembly is a general and simple approach, and under reactor conditions it is also a covering assumption: the graded distribution is introduced to even out the linear power distribution, and therefore reduces the enrichment of the better-moderated rods at the edge of the assembly. With an even distribution of the average enrichment over all rods, these well-moderated rods will cause increased fission rates at the assembly edges and a rise in k eff . Since the moderator conditions in a spent nuclear fuel cask differ strongly from those in a reactor, even when considering optimal moderation, the proof that a uniform enrichment distribution is a covering assumption compared with detailed enrichment distributions has to be cask-specific. In this report, a method for providing that proof is presented, along with results for fuel assemblies from BWR reactors. All results are from three-dimensional Monte Carlo calculations with the SCALE 5.1 code package [1], using a 44-group neutron cross-section library based on ENDF
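The averaging step described above is simple arithmetic: the uniform model carries the rod-count-weighted mean enrichment in every rod. A minimal sketch, where the lattice layout and enrichment values are invented for illustration and not taken from any licensed BWR design:

```python
def uniform_enrichment(rods):
    """Average a radial enrichment distribution evenly over all fuel rods.

    `rods` maps an enrichment in wt% U-235 to the number of rods
    carrying that enrichment.
    """
    total_rods = sum(rods.values())
    total_u235 = sum(e * n for e, n in rods.items())
    return total_u235 / total_rods

# hypothetical graded radial distribution for a 100-rod BWR-style lattice:
# high enrichment in the interior, reduced enrichment toward the edge
graded = {4.9: 40, 4.4: 30, 3.6: 20, 2.8: 10}  # invented values
print(round(uniform_enrichment(graded), 3))    # 4.28
```

Whether this 4.28% uniform model actually bounds the k eff of the graded distribution under cask moderator conditions is exactly the cask-specific question the report addresses.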
A case study and critical assessment in calculating power usage effectiveness for a data centre
International Nuclear Information System (INIS)
Brady, Gemma A.; Kapur, Nikil; Summers, Jonathan L.; Thompson, Harvey M.
2013-01-01
Highlights: • A case study PUE calculation is carried out on a data centre using open source specifications. • The PUE metric does not drive improvements in the efficiency of IT processes. • The PUE does not fairly represent energy use; an increase in IT load can lead to a decrease in the PUE. • Once a low PUE is achieved, power supply efficiency and IT load have the greatest impact on its value. - Abstract: Metrics commonly used to assess the energy efficiency of data centres are analysed by performing and critiquing a case study calculation of energy efficiency. Specifically, the metric Power Usage Effectiveness (PUE), which has become a de facto standard within the data centre industry, is assessed. This is achieved by using open source specifications for a data centre in Prineville, Oregon, USA, provided by the Open Compute Project launched by the social networking company Facebook. The usefulness of the PUE metric to the IT industry is critically assessed, and it is found that whilst it is important for encouraging lower energy consumption in data centres, it does not represent an unambiguous measure of energy efficiency.
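The paradox noted in the highlights, that a higher IT load can lower the metric without any infrastructure improvement, follows directly from the definition PUE = total facility power / IT equipment power. A minimal sketch with invented power figures:

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power over IT equipment power."""
    return total_facility_kw / it_kw

# assume a fixed overhead for cooling, power distribution and lighting
overhead_kw = 200.0

print(pue(800.0 + overhead_kw, 800.0))    # 1.25
print(pue(1000.0 + overhead_kw, 1000.0))  # 1.2 -- lower PUE, same infrastructure
```

Raising the IT load from 800 kW to 1000 kW against the same 200 kW overhead "improves" the PUE from 1.25 to 1.2 even though the facility's supporting systems are unchanged, which is why the abstract argues the metric is not an unambiguous measure of efficiency.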
Status on benchmark testing of CENDL-3
Liu Ping
2002-01-01
CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has been completed and recently distributed for benchmark analysis. The data were processed with the NJOY nuclear data processing code system. The benchmark calculations and analyses were performed with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A, and the calculated results were compared with experimental results and with those based on ENDF/B-VI. In most thermal and fast uranium criticality benchmarks, the k eff values calculated with CENDL-3 were in good agreement with the experimental results. In the plutonium fast cores, the k eff values were improved significantly with CENDL-3; this is due to the re-evaluation of the fission spectrum and the elastic angular distributions of 239 Pu and 240 Pu. CENDL-3 underestimated the k eff values, compared with other evaluated data libraries, for most spherical or cylindrical assemblies of plutonium or uranium with beryllium.
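Benchmark comparisons like these are commonly summarized as C/E ratios, the calculated k eff divided by the experimental (evaluated benchmark) k eff, so that over- and underestimation are visible at a glance. A minimal sketch, with invented values rather than actual CENDL-3 results:

```python
def c_over_e(calculated, experimental):
    """Calculated-to-experimental ratio for a benchmark k_eff."""
    return calculated / experimental

# hypothetical benchmark results: (calculated k_eff, experimental k_eff)
benchmarks = {
    "LEU thermal lattice": (0.9987, 1.0000),
    "Pu fast assembly":    (1.0042, 1.0000),
}

for name, (c, e) in benchmarks.items():
    ratio = c_over_e(c, e)
    # C/E < 1 indicates underestimation, C/E > 1 overestimation
    print(f"{name}: C/E = {ratio:.4f}")
```

A library is judged by how tightly its C/E ratios cluster around unity across many benchmark categories, which is the sense in which the abstract reports "good agreement" for the uranium cores and underestimation for the beryllium-reflected assemblies.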
Thermal reactor benchmark tests on JENDL-2
International Nuclear Information System (INIS)
Takano, Hideki; Tsuchihashi, Keichiro; Yamane, Tsuyoshi; Akino, Fujiyoshi; Ishiguro, Yukio; Ido, Masaru.
1983-11-01
A group constant library for the thermal reactor standard nuclear design code system SRAC was produced using the evaluated nuclear data library JENDL-2. Furthermore, the group constants for 235 U were also calculated from ENDF/B-V. Thermal reactor benchmark calculations were performed using the produced group constant library. The selected benchmark cores are two water-moderated lattices (TRX-1 and 2), two heavy-water-moderated cores (DCA and ETA-1), two graphite-moderated cores (SHE-8 and 13) and eight critical experiments for criticality safety. The effective multiplication factors and lattice cell parameters were calculated and compared with the experimental values. The results are summarized as follows. (1) Effective multiplication factors: the results with JENDL-2 are considerably improved in comparison with those with ENDF/B-IV. The best agreement is obtained using JENDL-2 together with ENDF/B-V (for 235 U only). (2) Lattice cell parameters: for rho 28 (the ratio of epithermal to thermal 238 U captures) and C* (the ratio of 238 U captures to 235 U fissions), the values calculated with JENDL-2 are in good agreement with the experimental values. The delta 28 values (the ratio of 238 U to 235 U fissions) are overestimated, as was also found for the fast reactor benchmarks. The rho 02 values (the ratio of epithermal to thermal 232 Th captures) calculated with JENDL-2 or ENDF/B-IV are considerably underestimated. The functions of the SRAC system have continued to be extended according to the needs of its users. A brief description of the extended parts of the SRAC system, together with the input specification, is given in Appendix B. (author)
Grafen, M.; Delbeck, S.; Busch, H.; Heise, H. M.; Ostendorf, A.
2018-02-01
Mid-infrared spectroscopy hyphenated with micro-dialysis is an excellent method for monitoring metabolic blood parameters, as it enables the concurrent, reagent-free and precise measurement of multiple clinically relevant substances such as glucose, lactate and urea in micro-dialysates of blood or interstitial fluid. For a marketable implementation, quantum cascade lasers (QCL) seem to represent a favourable technology due to their high degree of miniaturization and potentially low production costs. In this work, an external-cavity (EC) QCL-based spectrometer and two Fourier-transform infrared (FTIR) spectrometers were benchmarked with regard to the precision, accuracy and long-term stability needed for the monitoring of critically ill patients. For the tests, ternary aqueous solutions of glucose, lactate and mannitol (the latter for dialysis recovery determination) were measured in custom-made flow-through transmission cells of different pathlengths and analyzed by partial least squares calibration models. It was revealed that the wavenumber tuning speed of the QCL had a severe impact on the EC-mirror trajectory, due to matching of the digital-analog-converter step frequency with the mechanical resonance frequency of the mirror actuation. By selecting an appropriate tuning speed, the mirror oscillations acted as a hardware smoothing filter for the significant intensity variations caused by mode hopping. Besides the tuning speed, the effects of averaging over multiple spectra and of software smoothing parameters (Savitzky-Golay filters and FT smoothing) were investigated. The final settings led to a performance of the QCL system which was comparable with a research FTIR spectrometer and even surpassed the performance of a small FTIR mini-spectrometer.
International Nuclear Information System (INIS)
Kim, In Young; Choi, Heui Joo; Cho, Dong Geun
2013-01-01
The primary function of any repository is to prevent the spread of dangerous materials into the surrounding environment. In the case of a high-level radioactive waste repository, radioactive material must be isolated and retarded for a sufficient decay time to minimize the radiation hazard to humans and the surrounding environment. Sub-criticality of the disposal canister and of the whole disposal system is a minimum requisite to prevent multiplication of the radiation hazard. In this study, the criticality of the disposal canister and of the deep borehole disposal (DBD) system for trans-metal waste is calculated to check compliance with sub-criticality. A preliminary calculation of the criticality of the conceptual DBD system and its canister for trans-metal waste during the operational phase is conducted. The calculated multiplication factors at every temperature are below criticality, and the values for the canister and the DBD system, taking temperature into account, are expected to be approximately 0.34932 and 0.37618, respectively. There are obvious limitations in this study. To obtain reliable data, the exact elemental composition and temperature of each system component must be specified and applied, and proper cross sections corresponding to each component temperature must then be adopted. However, many assumptions, for example simplified elemental concentrations and isothermal component temperatures, are adopted in this study. These data must be improved in future work to increase reliability. In addition, post-closure criticality analyses including geological, thermal, hydrological, mechanical and chemical mechanisms, and especially fissile material re-deposition by precipitation and sorption, must be considered as future work to ascertain the criticality safety of the DBD system
Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6
International Nuclear Information System (INIS)
Marck, Steven C. van der
2012-01-01
Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6 Li, 7 Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such