Benchmark calculations of sodium fast critical experiments
International Nuclear Information System (INIS)
The high expectations placed on fast critical experiments impose additional requirements on the reliability of the final reconstructed values obtained in experiments at critical facilities. Benchmark calculations of critical experiments are characterized by the impossibility of complete experiment reconstruction and by large amounts of input data (dependent and independent) of very different reliability. One should also take into account the different sensitivity of the measured and corresponding calculated characteristics to identical changes in geometry parameters, temperature, and the isotopic composition of individual materials. Calculations of critical-facility experiments are performed for benchmark models generated by specific reconstruction codes, each with its own features for adjusting model parameters, and using a particular nuclear data library. A generated benchmark model that provides agreement between calculated and experimental values for one or more neutronic characteristics can lead to considerable differences for other key characteristics. The sensitivity of key neutronic characteristics to extra steel allocation in the core and to the ENDF/B nuclear data source is studied using several calculational models of the BFS-62-3A and BFS1-97 critical assemblies. A comparative analysis of the calculated effective multiplication factor, spectral indices, sodium void reactivity, and radial fission-rate distributions leads to quite different models providing the best agreement between calculated and experimental neutronic characteristics. This fact should be considered in the refinement of computational models and for code-verification purposes. (author)
MCNP calculations for Russian criticality-safety benchmarks
International Nuclear Information System (INIS)
The current edition of the International Handbook of Evaluated Criticality Safety Benchmark Experiments contains evaluations of 20 critical experiments performed and evaluated by the Institute for Experimental Physics of the Russian Federal Nuclear Center (VNIIEF) at Arzamas-16 and 16 critical experiments performed and evaluated by the Institute for Technical Physics of the Russian Federal Nuclear Center (VNIITF) at Chelyabinsk-70. These fast-spectrum experiments are of particular interest for data testing of ENDF/B-VI because they contain uranium metal systems of intermediate enrichment as well as uranium and plutonium metal systems with reflectors such as graphite, stainless steel, polyethylene, beryllium, and beryllium oxide. This paper presents the first published results for such systems using cross-section libraries based on ENDF/B-VI.
Benchmark test of JEF-1 evaluation by calculating fast criticalities
International Nuclear Information System (INIS)
The JEF-1 basic evaluation was tested by calculating fast critical experiments using the discrete-ordinates transport code ONEDANT with a P/sub 3/S/sub 16/ approximation. In each computation a one-dimensional spherical model was used, together with a 174-neutron-group, VITAMIN-E-structured, JEF-1-based nuclear data library generated at EIR with NJOY and TRANSX-CTR. It is found that the JEF-1 evaluation gives accurate results comparable with ENDF/B-V: eigenvalues agree well, within 10 mk, whereas reaction rates deviate by up to 10% from experiment. The U-233 total and fission cross sections seem to be underestimated in the JEF-1 evaluation in the fast energy range between 0.1 and 1 MeV. This confirms a previous analysis based on diffusion theory with 71 neutron groups, performed by H. Takano and E. Sartori at the NEA Data Bank. (author)
OECD/NEA burnup credit calculational criticality benchmark Phase I-B results
International Nuclear Information System (INIS)
In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is taken for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in their ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Most deviations are less than 10%, and many are less than 5%. The exceptions are 149Sm, 151Sm, and 155Gd.
Energy Technology Data Exchange (ETDEWEB)
Lee, G. S.; Lee, J.; Kim, G. Y.; Woo, S. W. [Korea Inst. of Nuclear Safety, 62 Gwahak-ro, Yuseong-gu, Daejeon, 305-338 (Korea, Republic of)
2012-07-01
The Phase II benchmark problem of the expert group UACSA comprises a configuration of a PWR fuel storage rack and focuses on the uncertainty in criticality arising from the manufacturing tolerances of design parameters such as fuel enrichment, density, diameter, thickness of the neutron absorber and structural material, and so on. It provides probability density functions for each design parameter. In this paper, 95%/95% upper tolerance limits of k{sub eff} are calculated with two methods by sampling the design parameters from the given probability distributions, and they are compared with the result of the traditional approach. (authors)
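One common sampling scheme for such a 95%/95% upper limit is the first-order nonparametric (Wilks) tolerance limit: run the code with randomly sampled tolerances and take the maximum k-eff over the required number of runs. A minimal sketch, assuming this Wilks approach; the toy k-eff response and the enrichment-tolerance model below are illustrative, not from the paper:

```python
import random

def wilks_sample_size(gamma=0.95, beta=0.95):
    """Smallest n such that the sample maximum of n runs bounds the
    beta-quantile with confidence gamma (first-order Wilks formula)."""
    n = 1
    while 1.0 - beta**n < gamma:
        n += 1
    return n

def tolerance_limit_keff(run_keff, n_samples):
    """One-sided upper tolerance limit: the maximum k-eff observed
    over n_samples runs with randomly sampled tolerances."""
    return max(run_keff() for _ in range(n_samples))

# Stand-in for a real criticality run: a toy model in which k-eff
# responds linearly to a sampled enrichment perturbation (assumed).
random.seed(1)
def mock_run():
    d_enr = random.gauss(0.0, 0.05)   # enrichment tolerance, wt% (assumed)
    return 0.92 + 0.04 * d_enr        # hypothetical response model

n = wilks_sample_size()               # 59 runs for a 95%/95% limit
upper = tolerance_limit_keff(mock_run, n)
print(n, round(upper, 5))
```

In practice `mock_run` would be replaced by a full criticality calculation with all tolerance parameters sampled jointly from the given probability density functions.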
Energy Technology Data Exchange (ETDEWEB)
Okuno, Hiroshi; Naito, Yoshitaka [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ando, Yoshihira [Toshiba Corp., Kawasaki, Kanagawa (Japan)
2000-09-01
The report describes the final results of the Phase IIIA benchmarks conducted by the Burnup Credit Criticality Calculation Working Group under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD/NEA). The benchmarks are intended to confirm the predictive capability of current computer code and data library combinations for the neutron multiplication factor (k{sub eff}) of an irradiated BWR fuel assembly array model. In total, 22 benchmark problems are proposed for calculations of k{sub eff}. The effects of the following parameters are investigated: cooling time, inclusion/exclusion of FP nuclides, axial burnup profile, and use of an axial void-fraction profile or constant void fractions during burnup. Axial profiles of fractional fission rates are further requested for five of the 22 problems. Twenty-one sets of results are presented, contributed by 17 institutes from 9 countries. The relative dispersion of the k{sub eff} values calculated by the participants about the mean value is almost entirely within the band of {+-}1%{delta}k/k. The deviations from the averaged calculated fission-rate profiles are found to be within {+-}5% for most cases. (author)
OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results
International Nuclear Information System (INIS)
Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in their ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are 149Sm, 151Sm, and 155Gd.
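The intercomparison metric described above can be sketched as each participant's relative deviation from the all-participant average for a given nuclide; the numbers below are illustrative, not the actual Phase I-B submissions:

```python
# Relative deviation of each participant's nuclide concentration from
# the all-participant average, the kind of metric used to compare
# depletion codes. Values are illustrative only.
concentrations = {            # atoms/b-cm for one nuclide, per participant
    "A": 1.02e-4, "B": 9.8e-5, "C": 1.00e-4, "D": 1.05e-4,
}

mean = sum(concentrations.values()) / len(concentrations)
deviation_pct = {k: 100.0 * (v - mean) / mean
                 for k, v in concentrations.items()}

for code, dev in sorted(deviation_pct.items()):
    print(f"{code}: {dev:+.2f}%")
```

Repeating this over all nuclides and participants yields the kind of spread ("most deviations are less than 10%") summarized in the report.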
International Nuclear Information System (INIS)
The MUSE project, carried out within the European Fifth Framework Programme, focuses on the coupling of a sub-critical reactor core with an external neutron source. In the first stage of the project a benchmark was defined in order to establish a reference calculational route able to accurately predict the neutronics behavior of an accelerator-driven system. Benchmark calculations will be carried out by several members of the project and the results will be compared, also with experimental results. The contribution of NRG to the project consists of the benchmark calculations and additional work that focuses on the calculation of 3D distributions of reaction yields. This paper discusses the non-conventional methods used to perform the benchmark calculations, including the 3D reaction yield distributions. The 3D distributions calculated for the sub-critical core are shown and discussed. With the ORANGE extension to MCNP it is possible to tally 3D distributions without adding extra cells and surfaces to the geometry and without significantly slowing down the calculation. These are major advantages compared to the conventional way of tallying in the MCNP code. The distributions show details that can be understood in terms of the expected neutron behavior in the different parts of the geometry. For instance, the results show that: 1) a large number of fast neutrons is found in the fuel regions, 2) the reflector region shows an increased number of slower neutrons, and 3) the reaction yield in the shielding region declines steeply. The extension therefore seems a useful tool for generating a better understanding of the behavior of neutrons throughout large and complex geometries such as accelerator-driven systems, and we also expect to use the extension in a variety of other fields. (authors)
International Nuclear Information System (INIS)
A method for classifying benchmark results of criticality calculations according to similarity is proposed in this paper. After formulation of the method, which utilizes correlation coefficients, it was applied to burnup credit criticality benchmarks Phase III-A and II-A, which were conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency of the Organisation for Economic Cooperation and Development (OECD/NEA). The Phase III-A benchmark was a series of criticality calculations for irradiated Boiling Water Reactor (BWR) fuel assemblies, whereas the Phase II-A benchmark was a suite of criticality calculations for irradiated Pressurized Water Reactor (PWR) fuel pins. These benchmark problems and their results are summarized. The correlation coefficients were calculated, and the sets of benchmark calculation results were classified according to the criterion that the correlation coefficients were no less than 0.15 for the Phase III-A and 0.10 for the Phase II-A benchmarks. When a pair of benchmark calculation results belonged to the same group, one calculation result was found to be predictable from the other. An example is shown for each of the benchmarks. While the evaluated nuclear data seemed to be the main factor behind the classification, further investigation is required to identify other factors. (author)
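A sketch of how such a correlation-based grouping might work, assuming the Pearson correlation is taken between participants' deviation-from-mean vectors over the benchmark cases (the paper's exact formulation may differ, and the numbers below are illustrative):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# k-eff results per participant over the same benchmark cases
# (illustrative numbers, not the actual Phase III-A submissions).
results = {
    "P1": [1.121, 1.108, 1.095, 1.132],
    "P2": [1.123, 1.110, 1.096, 1.134],
    "P3": [1.118, 1.115, 1.090, 1.125],
}
cases = len(next(iter(results.values())))
mean = [sum(r[i] for r in results.values()) / len(results)
        for i in range(cases)]
dev = {p: [r[i] - mean[i] for i in range(cases)]
       for p, r in results.items()}

# Put two result sets in the same group when the correlation of their
# deviations meets the threshold (0.15 was the Phase III-A criterion).
threshold = 0.15
pairs = [(a, b) for a in results for b in results
         if a < b and pearson(dev[a], dev[b]) >= threshold]
print(pairs)
```

Here P1 and P2 deviate from the mean in the same direction case by case, so they group together, which is the sense in which one result "predicts" the other.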
Benchmark calculation of SCALE-PC 4.3 CSAS6 module and burnup credit criticality analysis
Energy Technology Data Exchange (ETDEWEB)
Shin, Hee Sung; Ro, Seong Gy; Shin, Young Joon; Kim, Ik Soo [Korea Atomic Energy Research Institute, Taejon (Korea)
1998-12-01
The calculational biases of the SCALE-PC CSAS6 module for PWR spent fuel, metallized spent fuel, and solutions of nuclear materials have been determined on the basis of the benchmarks to be 0.01100, 0.02650 and 0.00997, respectively. With the aid of the code system, a nuclear criticality safety analysis of the spent fuel storage pool has been carried out to determine the minimum burnup of spent fuel required for safe storage. The criticality safety analysis is performed using three types of isotopic composition for the spent fuel: ORIGEN2-calculated isotopic compositions; the conservative inventory obtained by multiplying the ORIGEN2-calculated isotopic compositions by isotopic correction factors; and the conservative inventory of only U, Pu and {sup 241}Am. The results show that the minimum burnups for the three cases are 990, 6190 and 7270 MWd/tU, respectively, for spent fuel with 5.0 wt% initial enrichment. (author). 74 refs., 68 figs., 35 tabs.
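A minimum-burnup determination of this kind can be sketched as a search over a bias-adjusted reactivity response. Only the 0.01100 bias comes from the abstract; the 0.95 administrative limit and the linear k-eff(burnup) model below are assumptions for illustration:

```python
# Minimum burnup search under a bias-adjusted subcriticality criterion.
bias = 0.01100          # CSAS6 benchmark bias for PWR spent fuel (from text)
k_limit = 0.95          # administrative upper subcritical limit (assumed)

def keff(burnup_mwd_per_tu):
    """Hypothetical reactivity response: fresh-fuel k-eff reduced
    linearly with burnup (toy model, not an ORIGEN2/CSAS6 result)."""
    return 0.984 - 8.0e-6 * burnup_mwd_per_tu

def minimum_burnup(step=10):
    """Smallest burnup (MWd/tU) at which the biased k-eff meets the limit."""
    b = 0
    while keff(b) + bias > k_limit:
        b += step
    return b

print(minimum_burnup())
```

In a real analysis `keff` would be a table of criticality results for spent-fuel compositions at each burnup, and the criterion would also include calculational uncertainty margins.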
Energy Technology Data Exchange (ETDEWEB)
Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2002-02-01
The report describes the final results of the Phase IIIB benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The benchmark was intended to compare the predictive capability of current computer code and data library combinations for the atomic number densities of an irradiated BWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes in 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of {+-}10% relative to the average, although some results, especially {sup 155}Eu and the gadolinium isotopes, exceeded this band and will require further investigation. Pin-wise burnup results agreed well among the participants. The results for the infinite neutron multiplication factor k{sub {infinity}} also accorded well with each other for void fractions of 0 and 40%; however, some results deviated noticeably from the average for the void fraction of 70%. (author)
Benchmark calculations for EGS5
International Nuclear Information System (INIS)
In the past few years, EGS4 has undergone an extensive upgrade to EGS5, particularly in the areas of low-energy electron physics, low-energy photon physics, PEGS cross-section generation, and the conversion of the coding from Mortran to Fortran. Benchmark calculations have been made to assure the accuracy, reliability and high quality of the EGS5 code system. This study reports three benchmark examples that demonstrate the successful upgrade from EGS4 to EGS5, based on the excellent agreement among EGS4, EGS5 and measurements. The first benchmark example is the 1969 Crannell experiment, which measured the three-dimensional distribution of energy deposition for 1-GeV electron showers in water and aluminum tanks. The second example is the 1995 measurement of Compton-scattered spectra for 20-40 keV linearly polarized photons by Namito et al. at KEK, which was a main part of the low-energy photon expansion work for both EGS4 and EGS5. The third example is the 1986 heterogeneity benchmark experiment by Shortt et al., who used a monoenergetic 20-MeV electron beam incident on the front face of a water tank containing air and aluminum cylinders and measured the spatial depth-dose distribution using a small solid-state detector. (author)
International Nuclear Information System (INIS)
The OECD/NEA Expert Group on Burn-up Credit was established in 1991 to address scientific and technical issues connected with the use of burn-up credit in nuclear fuel cycle operations. Following the completion of six benchmark exercises with uranium oxide (UOX) fuels irradiated in pressurised water reactors (PWRs) and boiling water reactors (BWRs), the present report concerns mixed uranium and plutonium oxide (MOX) fuels irradiated in PWRs. The exercises consisted of inventory calculations of MOX fuels for two initial plutonium compositions. The depletion calculations were carried out using three representations of the MOX assemblies and their interface with UOX assemblies. This enabled the investigation of spatial and spectral effects during the irradiation of the MOX fuels. (author)
KENO-IV code benchmark calculation, (6)
International Nuclear Information System (INIS)
A series of benchmark tests has been undertaken at JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multigroup constants library MGCL. The present report describes the results of a benchmark test using criticality experiments on plutonium fuels in various shapes. In all, 33 experimental cases have been calculated for Pu(NO3)4 aqueous solution, Pu metal or PuO2-polystyrene compacts in various shapes (sphere, cylinder, rectangular parallelepiped). The effective multiplication factors calculated for the 33 cases are widely distributed between 0.955 and 1.045 owing to the wide range of system variables. (author)
KENO-IV code benchmark calculation, (4)
International Nuclear Information System (INIS)
A series of benchmark tests has been undertaken at JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multi-group constants library MGCL. The present paper describes the results of a test using criticality experiments on slab-cylinder systems of uranium nitrate solution. In all, 128 experimental cases have been calculated for the slab-cylinder configuration with and without a plexiglass reflector, with various critical parameters such as the number of cylinders and the height of the uranium nitrate solution. Among several important results, it is shown that the code and library give a fairly good multiplication factor, that is, k sub(eff) ≈ 1.0, for heavily reflected cases, whereas k sub(eff) ≈ 0.91 for the unreflected ones. This suggests the necessity of a more advanced treatment of the criticality calculation for systems from which neutrons can easily leak out during the slowing-down process. (author)
One dimensional benchmark calculations using diffusion theory
International Nuclear Information System (INIS)
This is a comparative study using the different one-dimensional diffusion codes available at our Nuclear Engineering Department. Some modifications have been made to the codes to fit the problems. One of the codes, DIFFUSE, solves the neutron diffusion equation in slab, cylindrical and spherical geometries using the forward-elimination/backward-substitution technique. DIFFUSE calculates criticality, critical dimensions, critical material concentrations, and adjoint fluxes as well. It is used for the space- and energy-dependent neutron flux distribution. The whole scattering matrix can be used if desired. Normalisation of the relative flux distributions to the reactor power, plotting of the flux distributions, and leakage terms for the other two dimensions have been added. Some modifications have also been made to the code output. Two benchmark problems have been calculated with the modified version, and the results are compared with those of the BBD code, which is available at our department and uses the same calculation techniques. Agreement is quite good for results such as k-eff and the flux distributions in the two cases studied. (author)
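The forward-elimination/backward-substitution scheme described for DIFFUSE can be sketched for a one-group slab eigenvalue problem: each power-iteration step solves a tridiagonal system by the Thomas algorithm. The cross sections, slab width and mesh below are illustrative one-group values, not taken from the codes mentioned:

```python
# One-group, 1-D slab diffusion eigenvalue problem solved by power
# iteration; each flux solve uses forward elimination and backward
# substitution (Thomas algorithm). All nuclear data are illustrative.
N = 50                                   # mesh cells
H = 70.0                                 # slab width, cm (assumed)
h = H / N
D, sig_a, nu_sig_f = 1.2, 0.012, 0.013   # cm, 1/cm, 1/cm (assumed)

# Tridiagonal system A*phi = source with zero-flux boundary conditions
a = [-D / h**2] * N                      # sub-diagonal
c = [-D / h**2] * N                      # super-diagonal
b = [2 * D / h**2 + sig_a] * N           # diagonal

def thomas(a, b, c, d):
    """Forward elimination, backward substitution for a tridiagonal system."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

phi, k = [1.0] * N, 1.0
for _ in range(200):                     # power iteration on the fission source
    src = [nu_sig_f * p / k for p in phi]
    phi_new = thomas(a, b, c, src)
    k_new = k * sum(phi_new) / sum(phi)  # fission-source ratio update
    if abs(k_new - k) < 1e-8:
        k, phi = k_new, phi_new
        break
    k, phi = k_new, phi_new

print(round(k, 5))
```

For these data the converged k is close to the analytic one-group value nuΣf / (Σa + D·B²) with B ≈ π/H, which is a quick sanity check of the scheme.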
Benchmark calculations for fusion blanket development
International Nuclear Information System (INIS)
Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with discrepancies of up to 20% for thin natural Li17Pb83 blankets. (author)
The MCNP6 Analytic Criticality Benchmark Suite
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group
2016-06-16
Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
BEGAFIP. Programming service, development and benchmark calculations
International Nuclear Information System (INIS)
This report summarizes improvements to BEGAFIP (the Swedish equivalent of the Oak Ridge computer code ORIGEN). The improvements are: the addition of a subroutine making it possible to calculate neutron sources, and the exchange of the fission yields and branching ratios in the data library for those published by Meek and Rider in 1978. In addition, benchmark calculations have been made with BEGAFIP as well as with ORIGEN regarding the build-up of actinides for a fuel burnup of 33 MWd/kg U. The results were compared with those obtained from the more sophisticated code CASMO. (author)
COVE 2A Benchmarking calculations using NORIA
International Nuclear Information System (INIS)
Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs
International Nuclear Information System (INIS)
The OECD/NEA Expert Group on Burn-up Credit was established in 1991 to address scientific and technical issues connected with the use of burn-up credit in nuclear fuel cycle operations. Following the completion of six benchmark exercises with uranium oxide fuels irradiated in pressurised water reactors (PWRs) and boiling water reactors (BWRs), the present report concerns mixed uranium and plutonium oxide (MOX) fuels irradiated in PWRs. The report summarises and analyses the solutions to the specified exercises provided by 37 contributors from 10 countries. The exercises were based upon the calculation of infinite PWR fuel pin cell reactivity for fresh and irradiated MOX fuels with various MOX compositions, burn-ups and cooling times. In addition, several representations of the MOX fuel assembly were tested in order to check various levels of approximations commonly used in reactor physics calculations. (authors)
Compilation report of VHTRC temperature coefficient benchmark calculations
Energy Technology Data Exchange (ETDEWEB)
Yasuda, Hideshi; Yamane, Tsuyoshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1995-11-01
A calculational benchmark problem has been proposed by JAERI to an IAEA Coordinated Research Program, 'Verification of Safety Related Neutronic Calculation for Low-enriched Gas-cooled Reactors', to investigate the accuracy of calculation results obtained using the codes of the participating countries. This benchmark is based on assembly heating experiments at a pin-in-block type critical assembly, VHTRC. The requested calculation items are the cell parameters, effective multiplication factor, temperature coefficient of reactivity, reaction rates, fission rate distribution, etc. Seven institutions from five countries joined the benchmark work. Calculation results are summarized in this report with some remarks by the authors. Each institute analyzed the problem by applying the calculation code system prepared for the HTGR development of its own country. The values of the most important parameter, k{sub eff}, from all institutes showed good agreement with each other and with the experimental ones within 1%. The temperature coefficients agreed within 13%. The values of several cell parameters calculated by some institutes did not agree with the others'. It will be necessary to check the calculation conditions again to obtain better agreement. (J.P.N.)
Energy Technology Data Exchange (ETDEWEB)
Kuroishi, Takeshi; Hoang, Anh Tuan; Nomura, Yasushi; Okuno, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2003-03-01
The reactivity effect of the asymmetry of the axial burnup profile in burnup credit criticality safety is studied for a realistic PWR spent fuel transport cask proposed in the current OECD/NEA Phase II-C benchmark problem. The axial burnup profiles are simulated in 21 material zones based on in-core flux measurements, varying from strong asymmetry to little or no asymmetry. Criticality calculations in a 3-D model have been performed using the continuous-energy Monte Carlo code MCNP-4B2 and the nuclear data library JENDL-3.2. Calculation conditions are determined with consideration of the axial fission source convergence. Calculations are carried out not only for the cases proposed in the benchmark but also for additional cases assuming a symmetric burnup profile. The actinide-only approach, envisaged for the first domestic introduction of burnup credit into criticality evaluation, is also considered in addition to the actinide-plus-fission-product approach adopted in the benchmark. The calculated results show that k{sub eff} and the end effect increase almost linearly with increasing burnup axial offset, one of the typical parameters characterizing the intensity of axial burnup asymmetry. The end effect is more sensitive to the asymmetry of the burnup profile at higher burnup. For an axially distributed burnup, the axial fission source distribution becomes strongly asymmetric, as its peak shifts toward the top end of the fuel's active zone, where the local burnup is less than that at the bottom end. The peak of the fission source distribution becomes higher with an increase of either the asymmetry of the burnup profile or the assembly-averaged burnup. The conservatism of the assumption of uniform axial burnup under the actinide-only approach is estimated quantitatively by comparison with the k{sub eff} result calculated with the strongest experiment-based asymmetric axial burnup profile under the actinide-plus-fission-product approach. (author)
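The burnup axial offset mentioned above as an asymmetry parameter can be sketched with one plausible definition (top-half average burnup minus bottom-half average, over their sum); the exact benchmark formula may differ, and the 21-zone profile below is illustrative:

```python
# Burnup axial offset for a 21-zone axial profile:
# (B_top - B_bottom) / (B_top + B_bottom), with each half averaged
# over its zones. The profile is illustrative, bottom to top.
profile = [0.55, 0.70, 0.83, 0.92, 0.98, 1.02, 1.05, 1.06, 1.07, 1.07,
           1.07, 1.06, 1.05, 1.04, 1.02, 0.99, 0.94, 0.86, 0.74, 0.58,
           0.40]                     # relative burnup per zone

n = len(profile)
bottom = sum(profile[: n // 2]) / (n // 2)       # bottom 10 zones
top = sum(profile[n - n // 2:]) / (n // 2)       # top 10 zones
offset = (top - bottom) / (top + bottom)
print(round(offset, 4))
```

A negative offset, as here, means lower burnup near the top end, which is the situation the abstract links to the fission-source peak shifting toward the top of the active zone.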
Energy Technology Data Exchange (ETDEWEB)
J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori
2009-09-01
High-quality integral benchmark experiments have always been a priority for criticality safety. However, interest in integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of future criticality safety needs to support next generation reactor and advanced fuel cycle concepts. The importance of drawing upon existing benchmark data is becoming more apparent because of dwindling availability of critical facilities worldwide and the high cost of performing new experiments. Integral benchmark data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the International Handbook of Reactor Physics Benchmark Experiments are widely used. Benchmark data have been added to these two handbooks since the last Nuclear Criticality Safety Division Topical Meeting in Knoxville, Tennessee (September 2005). This paper highlights these additions.
Benchmark analysis of KRITZ-2 critical experiments
International Nuclear Information System (INIS)
In the KRITZ-2 critical experiments, criticality and pin power distributions were measured at room temperature and at high temperature (about 245 degC) for three different cores (KRITZ-2:1, KRITZ-2:13, KRITZ-2:19) loaded with slightly enriched UO2 or MOX fuels. Recently, international benchmark problems were provided by ORNL and OECD/NEA based on the KRITZ-2 experimental data. Published experimental data for systems with slightly enriched fuels at high temperature are rare, and they are valuable for nuclear data testing. Thus, the benchmark analysis was carried out with the continuous-energy Monte Carlo code MVP and its four nuclear data libraries based on JENDL-3.2, JENDL-3.3, JEF-2.2 and ENDF/B-VI.8. As a result, fairly good agreement with the experimental data was obtained with all libraries for the pin power distributions. However, JENDL-3.3 and ENDF/B-VI.8 give an under-prediction of criticality and too-negative isothermal temperature coefficients for the slightly enriched UO2 cores, although the older nuclear data JENDL-3.2 and JEF-2.2 give rather good agreement with the experimental data. From a detailed study with an infinite unit cell model, it was found that the differences among the results with different libraries are mainly due to the different fission cross sections of U-235 in the energy range below 1.0 eV. (author)
Energy Technology Data Exchange (ETDEWEB)
Kozier, K.S. [Atomic Energy of Canada Limited, Chalk River Laboratories, Chalk River, Ontario (Canada)
2008-07-01
Different evaluated (n,d) energy-angle elastic scattering distributions produce k-effective differences in MCNP5 simulations of critical experiments involving heavy water (D{sub 2}O) of sufficient magnitude to suggest a need for new (n,d) scattering measurements and/or distributions derived from modern theoretical nuclear models, especially at neutron energies below a few MeV. The present work focuses on the small reactivity change of < 1 mk that is observed in the MCNP5 D{sub 2}O coolant-void-reactivity calculation bias for simulations of two pairs of critical experiments performed in the ZED-2 reactor at the Chalk River Laboratories when different nuclear data libraries are used for deuterium. The deuterium data libraries tested include ENDF/B-VII.0, ENDF/B-VI.4, JENDL-3.3 and a new evaluation, labelled Bonn-B, which is based on recent theoretical nuclear-model calculations. Comparison calculations were also performed for a simplified, two-region, spherical model having an inner, 250-cm radius, homogeneous sphere of UO{sub 2}, without and with deuterium, and an outer 20-cm-thick deuterium reflector. A notable observation from this work is the reduction of about 0.4 mk in the MCNP5 ZED-2 CVR calculation bias that is obtained when the O-in-UO{sub 2} thermal scattering data come from ENDF/B-VII.0. (author)
Status of the international criticality safety benchmark evaluation project (ICSBEP)
International Nuclear Information System (INIS)
Since ICNC'99, four new editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments have been published. The number of benchmark specifications in the Handbook has grown from 2157 in 1999 to 3073 in 2003, an increase of nearly 1000 specifications. These benchmarks are used to validate neutronics codes and nuclear cross-section data. Twenty evaluations representing 192 benchmark specifications were added to the Handbook in 2003. The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) is provided in this paper along with a summary of the newly added benchmark specifications that appear in the 2003 Edition of the Handbook. (author)
Standard Guide for Benchmark Testing of Light Water Reactor Calculations
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...
International Nuclear Information System (INIS)
In 2008 the Atomic Energy National Commission (CNEA) of Argentina and the Brazilian Institute of Energetic and Nuclear Research (IPEN), under the framework of the Argentine-Brazilian Nuclear Energy Agreement (COBEN), included, among many other activities, the project “Validation and Verification of Calculation Methods used for Research and Experimental Reactors”. At that time, it was established that the validation was to be performed with models implemented in the deterministic codes HUEMUL and PUMA (cell and reactor codes) developed by CNEA, and with models implemented in MCNP by CNEA and IPEN. The data for these validations correspond to theoretical-experimental reference cases in the research reactor IPEN/MB-01 located in São Paulo, Brazil. On the Argentine side, the staff of the Reactor and Nuclear Power Studies group (SERC) of CNEA performed calculations with deterministic models (HUEMUL-PUMA) and probabilistic methods (MCNP), modeling a large number of physical situations of the reactor that had previously been studied and modeled by members of the Center of Nuclear Engineering of IPEN, whose results were extensively provided to CNEA. In this paper, results for critical configurations are shown. (author)
42 CFR 422.258 - Calculation of benchmarks.
2010-10-01
... 42 Public Health 3 2010-10-01 2010-10-01 false Calculation of benchmarks. 422.258 Section 422.258... and Plan Approval § 422.258 Calculation of benchmarks. (a) The term “MA area-specific non-drug monthly benchmark amount” means, for a month in a year: (1) For MA local plans with service areas entirely within...
TRIGA Mark II Criticality Benchmark Experiment with Burned Fuel
International Nuclear Information System (INIS)
The experimental results of criticality benchmark experiments performed at the Jozef Stefan Institute TRIGA Mark II reactor are presented. The experiments were performed with partly burned fuel in two compact and uniform core configurations, in the same arrangements as were used in the fresh-fuel criticality benchmark experiment performed in 1991. In the experiments, both core configurations contained only 12 wt% U-ZrH fuel with 20% enriched uranium. The first experimental core contained 43 fuel elements with an average burnup of 1.22 MWd or 2.8% 235U burned. The second experimental core configuration was composed of 48 fuel elements with an average burnup of 1.15 MWd or 2.6% 235U burned. The experimental determination of keff for both core configurations, one subcritical and one critical, is presented. Burnup for all fuel elements was calculated in a two-dimensional four-group diffusion approximation using the TRIGLAV code. The burnup of several fuel elements was also measured by the reactivity method
Criticality benchmarking of ANET Monte Carlo code
International Nuclear Information System (INIS)
In this work the new Monte Carlo code ANET is tested on criticality calculations. ANET is being developed on the basis of the high-energy physics code GEANT of CERN and aims at progressively satisfying several requirements regarding simulations of both GEN II/III reactors and innovative nuclear reactor designs such as Accelerator Driven Systems (ADSs). Here ANET is applied to three different nuclear configurations: a subcritical assembly, a Material Testing Reactor and the conceptual configuration of an ADS. In the first case, calculations of the effective multiplication factor (keff) are performed for the Training Nuclear Reactor of the Aristotle University of Thessaloniki, while in the second case keff is computed for the fresh-fueled core of the Portuguese research reactor (RPI) just after its conversion to Low Enriched Uranium, with the control rods at the position that renders the reactor critical. In both cases ANET computations are compared with corresponding results obtained by three well-established codes, both deterministic (XSDRNPM/CITATION) and Monte Carlo (TRIPOLI, MCNP). In the RPI case, keff computations are also compared with observations made during the reactor core commissioning, since the control rods are considered at the criticality position. These verification studies show ANET to produce reasonable results, comparing satisfactorily with the other models as well as with observations. For the third case (ADS), preliminary ANET computations of keff for various intensities of the proton beam are presented, showing reasonable code performance with respect to both the order of magnitude and the relative variation of the computed parameter. (author)
MOx benchmark calculations by deterministic and Monte Carlo codes
International Nuclear Information System (INIS)
Highlights: ► MOx-based depletion calculation. ► Methodology to create continuous-energy pseudo cross sections for a lump of minor fission products. ► Mass inventory comparison between deterministic and Monte Carlo codes. ► Higher deviation was found for several isotopes. - Abstract: A depletion calculation benchmark devoted to MOx fuel is an ongoing objective of the OECD/NEA WPRS, following the study of depletion calculations concerning UOx fuels. The objective of the proposed benchmark is to compare existing depletion calculations obtained with various codes and data libraries applied to fuel and back-end cycle configurations. In the present work, the deterministic code NEWT/ORIGEN-S of the SCALE6 code package and the Monte Carlo-based code MONTEBURNS2.0 were used to calculate the masses of inventory isotopes. The methodology for applying MONTEBURNS2.0 to this benchmark is also presented. The results from both codes were then compared.
Criticality calculations on BARC parallel processor- ANUPAM
International Nuclear Information System (INIS)
Parallel processing offers an increase in computational speed beyond the technological limitations of single-processor systems. BARC has recently developed a parallel processing system (ANUPAM) based on a Multiple Instruction Multiple Data (MIMD) distributed-memory architecture. In the work reported here, the sequential version of the Monte Carlo code MONALI is modified to run on ANUPAM for criticality calculations. The problem of random number generation in a parallel environment is handled using the leapfrog technique. The code is modified to use a variable number of slave processors. The parallel version of MONALI is used to calculate the multiplication factor, fluxes and absorptions in one of the 8x8 fuel assemblies of the IAEA BWR benchmark in 69 energy groups. To compare the gain in execution time, the benchmark is also solved on LANDMARK and ND-570 systems (both serial) using the sequential version of the code. The speedup and efficiency achieved on varying the number of slave processors are encouraging. (author). 5 refs., 1 tab
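The leapfrog technique mentioned above can be illustrated with a short sketch. The generator constants and seeds below are illustrative (a Lewis-Goodman-Miller LCG), not necessarily those used in MONALI; the point is that each slave processor draws a disjoint, reproducible sub-sequence of one global stream.

```python
# Minimal sketch of the leapfrog technique for parallel random-number
# streams (illustrative LCG parameters, not MONALI's actual generator).
M = 2**31 - 1   # modulus
A = 16807       # multiplier

def lcg_stream(seed, n):
    """Plain sequential LCG: x_{k+1} = A * x_k mod M; returns x_1..x_n."""
    x, out = seed, []
    for _ in range(n):
        x = (A * x) % M
        out.append(x)
    return out

def leapfrog_stream(seed, n_procs, rank, n):
    """Stream for slave processor `rank` out of `n_procs`: it takes every
    n_procs-th number of the sequential sequence, jumping ahead with the
    precomputed multiplier A**n_procs mod M."""
    a_jump = pow(A, n_procs, M)
    x = seed
    for _ in range(rank + 1):   # advance to this processor's first value
        x = (A * x) % M
    out = [x]
    for _ in range(n - 1):
        x = (a_jump * x) % M
        out.append(x)
    return out

# Interleaving the slave streams reproduces the sequential sequence, so
# results are reproducible regardless of the number of slave processors.
seq = lcg_stream(12345, 12)
par = [leapfrog_stream(12345, 4, r, 3) for r in range(4)]
assert [par[k % 4][k // 4] for k in range(12)] == seq
```

Because the jump multiplier is precomputed once, each slave advances its stream at the same per-sample cost as the sequential generator.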
Full CI benchmark calculations on CH3
Bauschlicher, Charles W., Jr.; Taylor, Peter R.
1987-01-01
Full CI calculations have been performed on the CH3 radical. The full CI results are compared to those obtained using CASSCF/multireference CI and coupled-pair functional methods, both at the equilibrium CH distance and at geometries with the three CH bonds extended. In general, the performance of the approximate methods is similar to that observed in calculations on other molecules in which one or two bonds were stretched.
The ORSphere Benchmark Evaluation and Its Potential Impact on Nuclear Criticality Safety
Energy Technology Data Exchange (ETDEWEB)
John D. Bess; Margaret A. Marshall; J. Blair Briggs
2013-10-01
In the early 1970s, critical experiments using an unreflected metal sphere of highly enriched uranium (HEU) were performed with the aim of providing a “very accurate description…as an ideal benchmark for calculational methods and cross-section data files.” Two near-critical configurations of the Oak Ridge Sphere (ORSphere) were evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook). The results from those benchmark experiments were then compared with additional unmoderated and unreflected HEU metal benchmark experiment configurations currently found in the ICSBEP Handbook. For basic geometries (spheres, cylinders, and slabs) the eigenvalues calculated using MCNP5 and ENDF/B-VII.0 were within 3σ of their respective benchmark values. There appears to be generally good agreement between calculated and benchmark values for spherical and slab geometry systems. Cylindrical geometry configurations tended to calculate low, including more complex bare HEU metal systems containing cylinders. The ORSphere experiments do not calculate within their 1σ uncertainty, and there is a possibility that the effect of the measured uncertainties for the GODIVA I benchmark may need to be reevaluated. There is significant scatter in the calculations for the highly correlated ORCEF cylinder experiments, which are constructed from close-fitting HEU discs and annuli. Selection of a nuclear data library can have a larger impact on calculated eigenvalue results than the variation found within calculations of a given experimental series, such as the ORCEF cylinders, using a single nuclear data set.
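Comparisons of this kind, a calculated eigenvalue against a benchmark value and its evaluated uncertainty, reduce to a simple check. The numbers below are hypothetical placeholders, not ORSphere results:

```python
def eigenvalue_comparison(calc_k, bench_k, bench_sigma):
    """Return the calculated-minus-benchmark difference in pcm and the
    deviation expressed in multiples of the benchmark uncertainty sigma."""
    diff_pcm = (calc_k - bench_k) * 1.0e5
    n_sigma = (calc_k - bench_k) / bench_sigma
    return diff_pcm, n_sigma

# Hypothetical case: calculated keff 0.9985 vs benchmark 1.0000 +/- 0.0011
diff, n = eigenvalue_comparison(0.9985, 1.0000, 0.0011)
assert abs(n) < 3   # within 3 sigma of the benchmark value
```

A configuration "calculates low" in the sense used above when `n_sigma` is negative; "does not calculate within 1σ" means `abs(n_sigma) > 1`.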
Benchmarking calculations of excitonic couplings between bacteriochlorophylls
Kenny, Elise P
2015-01-01
Excitonic couplings between (bacterio)chlorophyll molecules are necessary for simulating energy transport in photosynthetic complexes. Many techniques for calculating the couplings are in use, from the simple (but inaccurate) point-dipole approximation to fully quantum-chemical methods. We compared several approximations to determine their range of applicability, noting that the propagation of experimental uncertainties poses a fundamental limit on the achievable accuracy. In particular, the uncertainty in crystallographic coordinates yields an uncertainty of about 20% in the calculated couplings. Because quantum-chemical corrections are smaller than 20% in most biologically relevant cases, their considerable computational cost is rarely justified. We therefore recommend the electrostatic TrEsp method across the entire range of molecular separations and orientations because its cost is minimal and it generally agrees with quantum-chemical calculations to better than the geometric uncertainty. We also caution ...
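As a rough illustration of why geometric uncertainty dominates, the point-dipole approximation mentioned above can be written in a few lines. The dipole magnitude (6.1 D) and geometry are hypothetical placeholders; the vacuum prefactor 5.04 cm⁻¹ nm³ D⁻² is the standard conversion for dipoles in Debye and separations in nm, with no screening applied.

```python
import math

def point_dipole_coupling(mu1, mu2, r1, r2):
    """Excitonic coupling V in the point-dipole approximation.

    mu1, mu2: transition-dipole vectors in Debye; r1, r2: positions in nm.
    Returns V in cm^-1 (vacuum prefactor 5.04 cm^-1 nm^3 D^-2)."""
    R = [b - a for a, b in zip(r1, r2)]
    d = math.sqrt(sum(c * c for c in R))
    rhat = [c / d for c in R]
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    # orientation factor kappa = mu1.mu2 - 3 (mu1.rhat)(mu2.rhat)
    kappa = dot(mu1, mu2) - 3.0 * dot(mu1, rhat) * dot(mu2, rhat)
    return 5.04 * kappa / d**3

# Hypothetical geometry: two parallel 6.1 D dipoles 1 nm apart,
# oriented perpendicular to the separation vector (kappa = +mu1.mu2).
v = point_dipole_coupling([0, 0, 6.1], [0, 0, 6.1], [0, 0, 0], [1, 0, 0])
```

Because V scales as 1/d³, a relative error in the separation d propagates roughly threefold into the coupling, which is one way crystallographic coordinate uncertainty ends up as a sizeable uncertainty in V.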
Critical power prediction by CATHARE2 of the OECD/NRC BFBT benchmark
Energy Technology Data Exchange (ETDEWEB)
Lutsanych, Sergii, E-mail: s.lutsanych@ing.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy); Sabotinov, Luben, E-mail: luben.sabotinov@irsn.fr [Institut for Radiological Protection and Nuclear Safety (IRSN), 31 avenue de la Division Leclerc, 92262 Fontenay-aux-Roses (France); D’Auria, Francesco, E-mail: francesco.dauria@dimnp.unipi.it [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, 56122, San Piero a Grado, Pisa (Italy)
2015-03-15
Highlights: • We used the CATHARE code to calculate the critical power exercises of the OECD/NRC BFBT benchmark. • We considered both steady-state and transient critical power tests of the benchmark. • We used both the 1D and 3D features of the CATHARE code to simulate the experiments. • Acceptable prediction of the critical power and its location in the bundle is obtained using appropriate modelling. - Abstract: This paper presents an application of the French best-estimate thermal-hydraulic code CATHARE 2 to the critical power and departure from nucleate boiling (DNB) exercises of the international OECD/NRC BWR Fuel Bundle Test (BFBT) benchmark. The assessment activity is performed by comparing the code calculation results with experimental data from the Japanese Nuclear Power Engineering Corporation (NUPEC) available in the framework of the benchmark. Two-phase flow calculations for prediction of the critical power have been carried out in both steady-state and transient cases, using one-dimensional and three-dimensional modelling. Results of the steady-state critical power test calculations have shown the ability of the CATHARE code to reasonably predict the critical power and its location, using appropriate modelling.
Energy Technology Data Exchange (ETDEWEB)
J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama
2008-09-01
Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and selected benchmarks that have been added to the two handbooks since PHYSOR’06 are highlighted. The future of the two projects is also outlined.
Benchmark Calculations For A VVER-1000 Assembly Using SRAC
International Nuclear Information System (INIS)
This work presents the neutronic calculation results for a VVER-1000 assembly using SRAC with 107 energy groups in comparison with the benchmark values in the OECD/NEA report. The main neutronic characteristics calculated in this comparison include infinite multiplication factors (k-inf), nuclide densities as a function of burnup, and pin-wise power distribution. Calculations were conducted for various conditions of fuel, coolant and boron content in the coolant. (author)
Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation
International Nuclear Information System (INIS)
One of the challenges that today's new workforce of nuclear criticality safety engineers face is the opportunity to provide assessment of nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.
Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation
Energy Technology Data Exchange (ETDEWEB)
John D. Bess; J. Blair Briggs; David W. Nigg
2009-11-01
One of the challenges that today's new workforce of nuclear criticality safety engineers face is the opportunity to provide assessment of nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.
Benchmark density functional theory calculations for nanoscale conductance
DEFF Research Database (Denmark)
Strange, Mikkel; Bækgaard, Iben Sig Buur; Thygesen, Kristian Sommer;
2008-01-01
We present a set of benchmark calculations for the Kohn-Sham elastic transmission function of five representative single-molecule junctions. The transmission functions are calculated using two different density functional theory methods, namely an ultrasoft pseudopotential plane-wave code...... a systematic downshift of the SIESTA transmission functions relative to the plane-wave results. The effect diminishes as the atomic orbital basis is enlarged; however, the convergence can be rather slow....
Criticality safety benchmark evaluation project: Recovering the past
Energy Technology Data Exchange (ETDEWEB)
Trumble, E.F.
1997-06-01
A very brief summary of the Criticality Safety Benchmark Evaluation Project of the Westinghouse Savannah River Company is provided in this paper. The purpose of the project is to provide a source of evaluated criticality safety experiments in an easily usable format. Another project goal is to search for any experiments that may have been lost or contain discrepancies, and to determine if they can be used. Results of evaluated experiments are being published as US DOE handbooks.
Benchmarking evaluation for criticality analysis of high-density spent fuel storage rack
Energy Technology Data Exchange (ETDEWEB)
Yun, J. H.; Jeon, J. K.; Ko, D. J.; Ha, J. H.; Song, M. J.; Kim, B. T.; Jo, H. S. [Korea Nuclear Environment Technology Institute, Taejon (Korea, Republic of)
2000-05-01
In order to evaluate the criticality of the spent fuel storage pool of Ulchin Unit 2 under normal operation, a series of benchmark calculations was carried out using the CSAS module of SCALE 4.4 along with the CASMO-3 computer code. Through the benchmark calculations for the criticality computer codes, the bias and uncertainty of the computer codes were evaluated. A bias of 0.00656 was obtained for CSAS (KENO-V.a) of the SCALE system, and its uncertainty was calculated as 0.00731 with 95% probability at the 95% confidence level. Criticality evaluation results for the spent fuel storage pool of Ulchin Unit 2 using the SCALE system showed a very similar trend to the CASMO-3 results.
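The bias-and-uncertainty bookkeeping described above can be sketched as follows. The k-eff values and the one-sided 95%/95% tolerance factor below are hypothetical placeholders (actual factors are tabulated by sample size, e.g. in NUREG/CR-6698), and this sketch is not the CSAS/CASMO-3 procedure itself:

```python
import statistics

def code_bias(keffs, tolerance_factor):
    """Bias and uncertainty of a criticality code from calculated k-eff
    values for benchmark-critical experiments (expected k-eff = 1.0).

    tolerance_factor: one-sided 95%/95% tolerance factor for the sample
    size (tabulated; supplied by the analyst)."""
    mean_k = statistics.mean(keffs)
    s = statistics.stdev(keffs)
    bias = 1.0 - mean_k                # positive if the code calculates low
    uncertainty = tolerance_factor * s
    return bias, uncertainty

# Hypothetical validation set of five benchmark calculations:
bias, unc = code_bias([0.992, 0.995, 0.990, 0.994, 0.993], 3.41)
```

A limiting k-eff for applications is then typically formed by adding the bias, its uncertainty, and administrative margins to calculated values; sign conventions vary between procedures.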
AGING FACILITY CRITICALITY SAFETY CALCULATIONS
International Nuclear Information System (INIS)
The purpose of this design calculation is to revise and update the previous criticality calculation for the Aging Facility (documented in BSC 2004a). This design calculation will also demonstrate and ensure that the storage and aging operations to be performed in the Aging Facility meet the criticality safety design criteria in the ''Project Design Criteria Document'' (Doraswamy 2004, Section 4.9.2.2), and the functional nuclear criticality safety requirement described in the ''SNF Aging System Description Document'' (BSC [Bechtel SAIC Company] 2004f, p. 3-12). The scope of this design calculation covers the systems and processes for aging commercial spent nuclear fuel (SNF) and staging Department of Energy (DOE) SNF/High-Level Waste (HLW) prior to its placement in the final waste package (WP) (BSC 2004f, p. 1-1). Aging commercial SNF is a thermal management strategy, while staging DOE SNF/HLW will make loading of WPs more efficient (note that aging DOE SNF/HLW is not needed since these wastes are not expected to exceed the thermal limits for emplacement) (BSC 2004f, p. 1-2). The description of the changes in this revised document is as follows: (1) Include DOE SNF/HLW in addition to commercial SNF per the current ''SNF Aging System Description Document'' (BSC 2004f). (2) Update the evaluation of Category 1 and 2 event sequences for the Aging Facility as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2004c, Section 7). (3) Further evaluate the design and criticality controls required for a storage/aging cask, referred to as MGR Site-specific Cask (MSC), to accommodate commercial fuel outside the content specification in the Certificate of Compliance for the existing NRC-certified storage casks. In addition, evaluate the design required for the MSC that will accommodate DOE SNF/HLW. This design calculation will achieve the objective of providing the criticality safety results to support the preliminary design of the Aging
Canister Transfer Facility Criticality Calculations
Energy Technology Data Exchange (ETDEWEB)
J.E. Monroe-Rammsy
2000-10-13
The objective of this calculation is to evaluate the criticality risk in the surface facility for design basis events (DBE) involving Department of Energy (DOE) Spent Nuclear Fuel (SNF) standardized canisters (Civilian Radioactive Waste Management System [CRWMS] Management and Operating Contractor [M&O] 2000a). Since some of the canisters will be stored in the surface facility before they are loaded in the waste package (WP), this calculation supports the demonstration of concept viability related to the Surface Facility environment. The scope of this calculation is limited to the consideration of three DOE SNF fuels, specifically Enrico Fermi SNF, Training Research Isotope General Atomic (TRIGA) SNF, and Mixed Oxide (MOX) Fast Flux Test Facility (FFTF) SNF.
Muse-4 benchmark calculations using MCNP-4C and different nuclear data libraries
International Nuclear Information System (INIS)
Current calculation methods and nuclear data are well validated for conventional nuclear reactor systems. However, there is a further need to validate the computational tools and the nuclear data for ADS applications. The OECD/NEA, in co-operation with CIEMAT (Spain) and CEA (France), therefore launched a benchmark based on the MUSE-4 experiments being carried out at Cadarache, France, to simulate the neutronics of a source-driven sub-critical system. This paper summarises the calculated results of the MUSE-4 benchmark obtained from the Monte Carlo code MCNP (version 4C) using different nuclear data evaluations, and shows the sensitivity of the requested results with regard to the nuclear data used. All the calculated results will be compared against measured data after the completion of the experiments foreseen for the end of 2003. (author)
Criticality Benchmark Analysis of the HTTR Annular Startup Core Configurations
International Nuclear Information System (INIS)
One of the high priority benchmarking activities for corroborating the Next Generation Nuclear Plant (NGNP) Project and Very High Temperature Reactor (VHTR) Program is evaluation of Japan's existing High Temperature Engineering Test Reactor (HTTR). The HTTR is a 30 MWt engineering test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. A large amount of critical reactor physics data is available for validation efforts of High Temperature Gas-cooled Reactors (HTGRs). Previous international reactor physics benchmarking activities provided a collation of mixed results that inaccurately predicted actual experimental performance [1]. Reevaluations were performed by the Japanese to reduce the discrepancy between actual and computationally-determined critical configurations [2-3]. Current efforts at the Idaho National Laboratory (INL) involve development of reactor physics benchmark models in conjunction with the International Reactor Physics Experiment Evaluation Project (IRPhEP) for use with verification and validation methods in the VHTR Program. Annular cores demonstrate inherent safety characteristics that are of interest in developing future HTGRs.
Criticality Benchmark Analysis of the HTTR Annular Startup Core Configurations
Energy Technology Data Exchange (ETDEWEB)
John D. Bess
2009-11-01
One of the high priority benchmarking activities for corroborating the Next Generation Nuclear Plant (NGNP) Project and Very High Temperature Reactor (VHTR) Program is evaluation of Japan's existing High Temperature Engineering Test Reactor (HTTR). The HTTR is a 30 MWt engineering test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. A large amount of critical reactor physics data is available for validation efforts of High Temperature Gas-cooled Reactors (HTGRs). Previous international reactor physics benchmarking activities provided a collation of mixed results that inaccurately predicted actual experimental performance [1]. Reevaluations were performed by the Japanese to reduce the discrepancy between actual and computationally-determined critical configurations [2-3]. Current efforts at the Idaho National Laboratory (INL) involve development of reactor physics benchmark models in conjunction with the International Reactor Physics Experiment Evaluation Project (IRPhEP) for use with verification and validation methods in the VHTR Program. Annular cores demonstrate inherent safety characteristics that are of interest in developing future HTGRs.
Benchmarking Outcomes in the Critically Injured Burn Patient
Klein, Matthew B.; Goverman, Jeremy; Hayden, Douglas L.; Fagan, Shawn P.; McDonald-Smith, Grace P.; Alexander, Andrew K.; Gamelli, Richard L.; Gibran, Nicole S.; Finnerty, Celeste C.; Jeschke, Marc G.; Arnoldo, Brett; Wispelwey, Bram; Mindrinos, Michael N.; Xiao, Wenzhong; Honari, Shari E.; Mason, Philip H.; Schoenfeld, David A.; Herndon, David N.; Tompkins, Ronald G.
2014-01-01
Objective To determine and compare outcomes with accepted benchmarks in burn care at six academic burn centers. Background Since the 1960s, U.S. morbidity and mortality rates have declined tremendously for burn patients, likely related to improvements in surgical and critical care treatment. We describe the baseline patient characteristics and well-defined outcomes for major burn injuries. Methods We followed 300 adults and 241 children from 2003–2009 through hospitalization using standard operating procedures developed at study onset. We created an extensive database on patient and injury characteristics, anatomic and physiological derangement, clinical treatment, and outcomes. These data were compared with existing benchmarks in burn care. Results Study patients were critically injured as demonstrated by mean %TBSA (41.2±18.3 for adults and 57.8±18.2 for children) and presence of inhalation injury in 38% of the adults and 54.8% of the children. Mortality in adults was 14.1% for those less than 55 years old and 38.5% for those age ≥55 years. Mortality in patients less than 17 years old was 7.9%. Overall, the multiple organ failure rate was 27%. When controlling for age and %TBSA, presence of inhalation injury was not significant. Conclusions This study provides the current benchmark for major burn patients. Mortality rates, notwithstanding significant % TBSA and presence of inhalation injury, have significantly declined compared to previous benchmarks. Modern day surgical and medically intensive management has markedly improved to the point where we can expect patients less than 55 years old with severe burn injuries and inhalation injury to survive these devastating conditions. PMID:24722222
Criticality benchmark guide for light-water-reactor fuel in transportation and storage packages
International Nuclear Information System (INIS)
This report is designed as a guide for performing criticality benchmark calculations for light-water-reactor (LWR) fuel applications. The guide provides documentation of 180 criticality experiments with geometries, materials, and neutron interaction characteristics representative of transportation packages containing LWR fuel or uranium oxide pellets or powder. These experiments should benefit the U.S. Nuclear Regulatory Commission (NRC) staff and licensees in validation of computational methods used in LWR fuel storage and transportation concerns. The experiments are classified by key parameters such as enrichment, water/fuel volume, hydrogen-to-fissile ratio (H/X), and lattice pitch. Groups of experiments with common features such as separator plates, shielding walls, and soluble boron are also identified. In addition, a sample validation using these experiments and a statistical analysis of the results are provided. Recommendations for selecting suitable experiments and determination of calculational bias and uncertainty are presented as part of this benchmark guide
International Nuclear Information System (INIS)
Full text: The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in October of 1992 by the Department of Energy Defense Programs, now NNSA. The U.S. effort to support and provide leadership for the ICSBEP has been funded by DOE-DP since that time. The project is managed through the Idaho National Engineering and Environmental Laboratory (INEEL), but involves nationally known criticality safety experts from Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Savannah River Technology Center, Oak Ridge National Laboratory and the Y-12 Plant, Hanford, Argonne National Laboratory, and the Rocky Flats Plant. An International Criticality Safety Data Exchange component was added to the project during 1994. Representatives from the United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Spain, Israel, Brazil, and Poland are now participating on the project, and China, South Africa, and the Czech Republic have indicated that they plan to contribute to the project. The ICSBEP is an official activity of the OECD-NEA. The United States is the lead country, providing most of the administrative support. The purpose of the ICSBEP is to: 1. Identify and evaluate a comprehensive set of criticality-related benchmark data. 2. Verify the data, to the extent possible, by reviewing original and subsequently revised documentation, logbook data when possible, and by talking with the experimenters or individuals who are familiar with the experimenters or the experimental facility. 3. Compile the data into a standardized format. 4. Perform calculations of each experiment with standard criticality safety codes. 5. Formally document the work into a single source of verified and internationally peer reviewed benchmark critical data. Each experiment evaluation undergoes a thorough internal review by someone within the evaluator's organization. The internal reviewer verifies: 1. The
International Nuclear Information System (INIS)
In this paper we present the results of our calculations for the OECD NEA benchmark on Generation-IV advanced sodium-cooled fast reactor (SFR) concepts. The aim of this benchmark is to study the core design features, as well as the feedback and transient behaviour, of four SFR concepts. At the present stage, static global neutronic parameters (keff, effective delayed neutron fraction, Doppler constant, sodium void worth, control rod worth, and power distribution) and burnup were calculated for both the beginning and the end of cycle. In the benchmark definition, the following core descriptions were specified: two large cores (3600 MW thermal power) with carbide and oxide fuel, and two medium cores (1000 MW thermal power) with metal and oxide fuel. The calculations were performed using the ECCO module of the ERANOS code system at the subassembly level and the KIKO3DMG code at the core level. The former produced the assembly-homogenized cross sections from 1968-group collision probability calculations; the latter determined the core multiplication factor and the radial power distribution using a 3D nodal diffusion method in 9 energy groups. We examined the effect of increasing the number of energy groups to 17 in the core calculation. The reflector and shield assembly homogenization methodology was also tested: a "homogeneous region model" was compared with a "concentric cylindrical core" calculation. The breeding ratio was also determined for the beginning of cycle. (author)
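The reactivity worths listed above (sodium void worth, control rod worth) are differences of reactivities derived from multiplication factors. A minimal sketch of that bookkeeping, with purely illustrative keff values that are not taken from the benchmark:

```python
def reactivity_pcm(k):
    """Reactivity in pcm corresponding to a multiplication factor k."""
    return (k - 1.0) / k * 1e5

def worth_pcm(k_ref, k_perturbed):
    """Reactivity worth of a perturbation (e.g. sodium voiding) as the
    difference of the perturbed and reference reactivities."""
    return reactivity_pcm(k_perturbed) - reactivity_pcm(k_ref)

# Illustrative values only (not benchmark results): a positive worth
# means the perturbation adds reactivity.
print(round(worth_pcm(1.00000, 1.02000), 1))  # 1960.8
```

The same difference-of-reactivities form applies to control rod worth, with the rod-inserted state as the perturbation.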
40 CFR 141.543 - How is the disinfection benchmark calculated?
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false How is the disinfection benchmark... Disinfection-Systems Serving Fewer Than 10,000 People Disinfection Benchmark § 141.543 How is the disinfection benchmark calculated? If your system is making a significant change to its disinfection practice, it...
47 CFR 54.805 - Zone and study area above benchmark revenues calculated by the Administrator.
2010-10-01
... 47 Telecommunication 3 2010-10-01 2010-10-01 false Zone and study area above benchmark revenues... Mechanism § 54.805 Zone and study area above benchmark revenues calculated by the Administrator. (a) The following steps shall be performed by the Administrator to determine Zone Above Benchmark Revenues for...
Benchmarking analytical calculations of proton doses in heterogeneous matter.
Ciangaru, George; Polf, Jerimy C; Bues, Martin; Smith, Alfred R
2005-12-01
A proton dose computational algorithm, performing an analytical superposition of infinitely narrow proton beamlets (ASPB) is introduced. The algorithm uses the standard pencil beam technique of laterally distributing the central axis broad beam doses according to the Moliere scattering theory extended to slablike varying density media. The purpose of this study was to determine the accuracy of our computational tool by comparing it with experimental and Monte Carlo (MC) simulation data as benchmarks. In the tests, parallel wide beams of protons were scattered in water phantoms containing embedded air and bone materials with simple geometrical forms and spatial dimensions of a few centimeters. For homogeneous water and bone phantoms, the proton doses we calculated with the ASPB algorithm were found very comparable to experimental and MC data. For layered bone slab inhomogeneity in water, the comparison between our analytical calculation and the MC simulation showed reasonable agreement, even when the inhomogeneity was placed at the Bragg peak depth. There also was reasonable agreement for the parallelepiped bone block inhomogeneity placed at various depths, except for cases in which the bone was located in the region of the Bragg peak, when discrepancies were as large as more than 10%. When the inhomogeneity was in the form of abutting air-bone slabs, discrepancies of as much as 8% occurred in the lateral dose profiles on the air cavity side of the phantom. Additionally, the analytical depth-dose calculations disagreed with the MC calculations within 3% of the Bragg peak dose, at the entry and midway depths in the phantom. The distal depth-dose 20%-80% fall-off widths and ranges calculated with our algorithm and the MC simulation were generally within 0.1 cm of agreement. The analytical lateral-dose profile calculations showed smaller (by less than 0.1 cm) 20%-80% penumbra widths and shorter fall-off tails than did those calculated by the MC simulations. Overall
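The pencil-beam idea behind the ASPB algorithm (a broad beam represented as a superposition of laterally Gaussian beamlets) can be sketched as follows. The fixed spread sigma and the field geometry here are illustrative assumptions, since the actual algorithm uses depth-dependent Moliere spreads:

```python
import math

def lateral_dose(x, beamlet_positions, sigma, weight=1.0):
    """Broad-beam lateral dose at position x as a superposition of
    Gaussian beamlets with a fixed illustrative spread sigma (cm)."""
    norm = weight / (sigma * math.sqrt(2.0 * math.pi))
    return sum(norm * math.exp(-0.5 * ((x - xi) / sigma) ** 2)
               for xi in beamlet_positions)

# A 10 cm wide field sampled by closely spaced beamlets:
spacing = 0.05
positions = [i * spacing for i in range(-100, 101)]  # -5 cm .. +5 cm
sigma = 0.3  # cm, illustrative lateral spread

center = lateral_dose(0.0, positions, sigma) * spacing
edge = lateral_dose(5.0, positions, sigma) * spacing
# The superposition reproduces a flat central dose and an erf-shaped
# penumbra: the field-edge dose is roughly half the central dose.
print(round(center, 3), round(edge, 3))
```
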
DEFF Research Database (Denmark)
Grandjean, Philippe; Budtz-Joergensen, Esben
2013-01-01
BACKGROUND: Immune suppression may be a critical effect associated with exposure to perfluorinated compounds (PFCs), as indicated by recent data on vaccine antibody responses in children. Therefore, this information may be crucial when deciding on exposure limits. METHODS: Results obtained from follow-up of a Faroese birth cohort were used. Serum-PFC concentrations were measured at age 5 years, and serum antibody concentrations against tetanus and diphtheria toxoids were obtained at age 7 years. Benchmark dose results were calculated in terms of serum concentrations for 431 children...
IPEN/MB-01 heavy reflector benchmark calculations using Serpent code
International Nuclear Information System (INIS)
A series of critical experiments with water-moderated square-pitched lattices of low-enriched uranium fuel rods was conducted at the IPEN/MB-01 research reactor facility in 2005. These data later became benchmarks. In one of these experiments the west face of the reactor core was covered with a set of thin SS-304 plates to simulate a heavy reflector as used in the EPR reactor (LEU-COMP-THERM-043). The plates are 3 mm thick, and their width and axial length were large enough to cover one whole side of the active core of the reactor. The critical configurations were found as a function of the number of plates. Fuel rods containing UO2 with uranium enriched to 4.3% 235U were arranged in specific geometric configurations to be as close as possible to the critical state. In this work, these heavy-reflector benchmark configurations were modeled using the Serpent Monte Carlo code. Serpent uses a universe-based geometry model, which allows the description of practically any three-dimensional fuel or reactor configuration. Neutron transport is based on a combination of surface-to-surface ray tracing and the Woodcock delta-tracking method. The Woodcock method is many times faster than ray tracing, so compared with MCNP, Serpent can bring large gains in processing time for reactor and reaction-rate calculations. The results of these calculations were compared with experimental data and with MCNP5 and SCALE6 (KENO-VI) calculations using ENDF/B-VII.0 cross-section data. The codes' performances are compared in terms of CPU time and agreement with experimental data. Additionally, the sensitivity of keff to the Serpent Woodcock threshold parameter was analyzed. (author)
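The Woodcock delta-tracking method mentioned above can be illustrated with a toy 1-D transmission problem: flight distances are sampled from a single majorant cross section, and real collisions are accepted with probability sigma(x)/sigma_majorant, so material boundaries never have to be ray-traced. This is a sketch of the principle only, not Serpent's implementation:

```python
import math
import random

def transmit_delta_tracking(sigma, boundaries, length, n, seed=1):
    """Estimate uncollided transmission through a 1-D multi-region slab
    with Woodcock delta-tracking. `sigma` lists the total cross section
    (1/cm) per region; `boundaries` lists the interface positions."""
    rng = random.Random(seed)
    sig_maj = max(sigma)          # majorant cross section
    transmitted = 0
    for _ in range(n):
        x = 0.0
        while True:
            # Sample a flight distance from the majorant, ignoring geometry:
            x += -math.log(rng.random()) / sig_maj
            if x >= length:
                transmitted += 1
                break
            region = sum(b <= x for b in boundaries)   # locate current region
            # Accept a *real* collision with probability sigma/sig_maj;
            # otherwise it is a virtual collision and the flight continues.
            if rng.random() < sigma[region] / sig_maj:
                break
    return transmitted / n

sigma = [0.5, 2.0]       # two regions with different total cross sections
boundaries = [1.0]       # interface at 1 cm, slab ends at 2 cm
est = transmit_delta_tracking(sigma, boundaries, 2.0, 100_000)
exact = math.exp(-0.5 * 1.0 - 2.0 * 1.0)   # analytic uncollided transmission
print(round(est, 3), round(exact, 3))
```

The estimate reproduces the analytic attenuation without ever computing a surface crossing, which is where the speed advantage over ray tracing comes from.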
Preparation of a criticality benchmark based on experiments performed at the RA-6 reactor
International Nuclear Information System (INIS)
The operation and fuel management of a reactor rely on neutronic modeling to predict its behavior in operational and accident conditions. This modeling uses computational tools and nuclear data that must be contrasted against benchmark experiments to ensure its accuracy. These benchmarks have to be simple enough to model with the desired computer code, and must have quantified and bounded uncertainties. The start-up of the RA-6 reactor, the final stage of the conversion and renewal project, allowed us to obtain experimental results with fresh fuel. In this condition the material composition of the fuel elements is precisely known, which contributes to a more precise modeling of the critical condition. These experimental results are useful for evaluating the precision of the models used to design the core, based on U3Si2 fuel and cadmium wires as burnable poisons, for which no data were previously available. The analysis of this information can be used to validate models for similar configurations, which is necessary to follow the operational history of the reactor and perform fuel management. The analysis of the results and the generation of the model followed the methodology established by the International Criticality Safety Benchmark Evaluation Project, which gathers and analyzes experimental data for critical systems. The results were very satisfactory: the benchmark model yields a multiplication factor of 1.0000 ± 0.0044, and the calculated value is 0.9980 ± 0.0001 using MCNP 5 and ENDF/B-VI. The use of as-built dimensions and compositions, together with the sensitivity analysis, allowed us to review the design calculations and analyze their precision, accuracy, and error compensation.
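A quick way to see that the calculated and benchmark values above are statistically consistent is to express their difference in units of the combined uncertainty. A minimal sketch using the numbers quoted in the abstract:

```python
import math

def bias_in_sigma(k_exp, u_exp, k_calc, u_calc):
    """Calculation-minus-benchmark bias and the same bias expressed in
    units of the combined (quadrature) standard uncertainty."""
    diff = k_calc - k_exp
    u = math.sqrt(u_exp**2 + u_calc**2)
    return diff, diff / u

# Benchmark model: 1.0000 +/- 0.0044; MCNP 5 calculation: 0.9980 +/- 0.0001
diff, n_sigma = bias_in_sigma(1.0000, 0.0044, 0.9980, 0.0001)
print(f"{diff:+.4f} ({n_sigma:+.2f} sigma)")  # -0.0020 (-0.45 sigma)
```

A bias of about half a standard deviation is well within the benchmark uncertainty, which is what "very satisfactory" means quantitatively here.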
Calculational Benchmark Problems for VVER-1000 Mixed Oxide Fuel Cycle
Energy Technology Data Exchange (ETDEWEB)
Emmett, M.B.
2000-03-17
Standard problems were created to test the ability of American and Russian computational methods and data to analyze the storage and handling of Russian pressurized water reactor (VVER) mixed oxide fuel. Criticality safety and radiation shielding problems were analyzed. American and Russian multiplication factors for fresh fuel storage of low-enriched uranium (UOX), weapons-grade (MOX-W), and reactor-grade (MOX-R) MOX fuel differ by less than 2% for all variations of water density. For shielding calculations for fresh fuel, the ORNL results for the neutron source differ from the Russian results by less than 1% for UOX and MOX-R and by approximately 3% for MOX-W. For shielding calculations for fresh fuel assemblies, neutron dose rates at the surface of the assemblies differ from the Russian results by 5% to 9%; the level of agreement for gamma dose varies with the type of fuel, with UOX differing by the largest amount. The use of different gamma group structures and instantaneous versus asymptotic decay assumptions also complicates the comparison. For the calculation of dose rates from spent fuel in a shipping cask, the neutron source for UOX after 3-year cooling is within 1%, and for MOX-W within 5%, of one of the Russian results, while the MOX-R difference is the largest at over 10%. These studies are a portion of the documentation required by the Russian nuclear regulatory authority, GAN, in order to certify Russian programs and data as acceptably accurate for the analysis of mixed oxide fuels.
Results of the isotopic concentrations of VVER calculational burnup credit benchmark No. 2 (CB2)
International Nuclear Information System (INIS)
Results for the nuclide concentrations of VVER burnup credit benchmark No. 2 (CB2), obtained at the Nuclear Technology Center of Cuba with available codes and libraries, are presented. The CB2 benchmark specification, the second phase of the VVER burnup credit benchmark, is summarized. CB2 focuses on the VVER burnup credit study proposed at the 1997 AER Symposium. The results obtained are isotopic concentrations of spent fuel as a function of burnup and cooling time. The ORIGEN2 point-depletion code and other codes were used to calculate the spent fuel concentrations. (author)
OECD/NEA burnup credit criticality benchmark. Result of phase IIA
International Nuclear Information System (INIS)
The report describes the final results of Phase IIA of the Burnup Credit Criticality Benchmark conducted by the OECD/NEA. In the Phase IIA benchmark problems, the effect of the axial burnup profile of PWR spent fuel on criticality (the end effect) has been studied. Axial profiles at 10, 30, and 50 GWd/t burnup have been considered. In total, 22 results from 18 institutes in 10 countries have been submitted. The calculated multiplication factors from the participants lie within a band of ±1% Δk. For irradiation up to 30 GWd/t, the end effect has been found to be less than 1.0% Δk. For the 50 GWd/t case, however, the effect is more than 4.0% Δk when both actinides and fission products are taken into account, whereas it remains less than 1.0% Δk when only actinides are considered. The fission density data indicate the importance of the end regions in the criticality safety analysis of spent fuel systems. (author)
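The end effect quoted above is simply the change in keff when the true axial burnup profile replaces a uniform (assembly-average) burnup. A trivial sketch with illustrative keff pairs that are not benchmark results:

```python
def end_effect_pct(k_profile, k_uniform):
    """End effect in % dk: change in keff when the axial burnup profile
    replaces a uniform assembly-average burnup."""
    return (k_profile - k_uniform) * 100.0

# Illustrative keff pairs (not benchmark results): at high burnup the
# under-burnt assembly ends dominate and the end effect turns positive.
low_burnup = end_effect_pct(0.9310, 0.9365)    # 30 GWd/t-like case
high_burnup = end_effect_pct(0.9120, 0.8700)   # 50 GWd/t-like case
print(round(low_burnup, 2), round(high_burnup, 2))  # -0.55 4.2
```
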
Benchmark models and experimental data for a U(20) polyethylene-moderated critical system
Energy Technology Data Exchange (ETDEWEB)
Wetzel, Larry [Babcock & Wilcox Nuclear Operations Group Inc.; Busch, Robert D. [University of New Mexico, Albuquerque; Bowen, Douglas G [ORNL
2015-01-01
This work involves the analysis of recent experiments performed on the Aerojet General Nucleonics (AGN)-201M (AGN) polyethylene-moderated research reactor at the University of New Mexico (UNM). The experiments include 36 delayed critical (DC) configurations and 11 positive-period and rod-drop measurements (transient sequences). The Even Parity Neutron Transport (EVENT) radiation transport code was chosen to analyze these steady-state and time-dependent experimental configurations. The UNM AGN specifications provided in a benchmark calculation report (2007) were used to initiate AGN EVENT model development and to test the EVENT AGN calculation methodology. The results of the EVENT DC experimental analyses compared well with the experimental data; the average AGN EVENT calculation bias in k_{eff} is –0.0048% for the Legendre flux expansion order 11 (P_{11}) cases and +0.0119% for the P_{13} cases. The EVENT transient analysis also compared well with the AGN experimental data with respect to predicting the reactor period and control rod worth values. This paper discusses the benchmark models used, the recent experimental configurations, and the EVENT experimental analysis.
Computer simulation of Masurca critical and subcritical experiments. Muse-4 benchmark. Final report
International Nuclear Information System (INIS)
The efficient and safe management of spent fuel produced during the operation of commercial nuclear power plants is an important issue. In this context, partitioning and transmutation (P and T) of minor actinides and long-lived fission products can play an important role, significantly reducing the burden on geological repositories of nuclear waste and allowing their more effective use. Various systems, including existing reactors, fast reactors, and advanced systems, have been considered to optimise the transmutation scheme. Recently, many countries have shown interest in accelerator-driven systems (ADS) due to their potential for transmutation of minor actinides. Much R and D work is still required to demonstrate their desired capability as a whole system, and the current analysis methods and nuclear data for minor actinide burners are not as well established as those for conventionally fuelled systems. Recognizing a need for code and data validation in this area, the Nuclear Science Committee of the OECD/NEA has organised various theoretical benchmarks on ADS burners. Many improvements and clarifications concerning nuclear data and calculation methods have been achieved. However, some significant discrepancies for important parameters are not fully understood and still require clarification. Therefore, this international benchmark based on MASURCA experiments, which were carried out under the auspices of the EC 5th Framework Programme, was launched in December 2001 in co-operation with the CEA (France) and CIEMAT (Spain). The benchmark model was oriented to comparing simulation predictions based on available codes and nuclear data libraries with experimental data related to TRU transmutation, criticality constants, and the time evolution of the neutronic flux following source variation in liquid-metal fast subcritical systems. A total of 16 different institutions participated in this first experiment-based benchmark, providing 34 solutions. The large number
Calculation of the CB1 burnup credit benchmark reaction rates with MCNP4B
International Nuclear Information System (INIS)
The first calculational VVER-440 burnup credit benchmark, CB1, was defined in 1996. VTT Energy participated in the calculation of the CB1 benchmark with three different codes: CASMO-4, KENO-VI, and MCNP4B. However, the reaction rates and the fission ν were calculated only with CASMO-4. Now, the neutron absorption and production reaction rates and the fission ν values have been calculated at VTT Energy with the MCNP4B Monte Carlo code using the ENDF60 neutron data library. (author)
Interactions of model biomolecules. Benchmark CC calculations within MOLCAS
Energy Technology Data Exchange (ETDEWEB)
Urban, Miroslav [Slovak University of Technology in Bratislava, Faculty of Materials Science and Technology in Trnava, Institute of Materials Science, Bottova 25, SK-917 24 Trnava, Slovakia and Department of Physical and Theoretical Chemistry, Faculty of Natural Scie (Slovakia); Pitoňák, Michal; Neogrády, Pavel; Dedíková, Pavlína [Department of Physical and Theoretical Chemistry, Faculty of Natural Sciences, Comenius University, Mlynská dolina, SK-842 15 Bratislava (Slovakia); Hobza, Pavel [Institute of Organic Chemistry and Biochemistry and Center for Complex Molecular Systems and biomolecules, Academy of Sciences of the Czech Republic, Prague (Czech Republic)
2015-01-22
We present results using the OVOS (Optimized Virtual Orbital Space) approach, aimed at enhancing the effectiveness of Coupled Cluster calculations. This approach reduces the total computer time required for large-scale CCSD(T) calculations by about a factor of ten when the full virtual space is reduced to about 50% of its original size, without affecting the accuracy. The method is implemented in the MOLCAS computer program. When combined with the Cholesky decomposition of the two-electron integrals and suitable parallelization, it allows calculations that were formerly prohibitively demanding. We focus on accurate calculations of the hydrogen-bonded and stacking interactions of model biomolecules. Interaction energies of the formaldehyde, formamide, benzene, and uracil dimers and the three-body contributions in the cytosine-guanine tetramer are presented. Other applications, such as the electron affinity of uracil as affected by solvation, are also briefly mentioned.
Development of neutral transport lattice code DENT-2D and benchmark calculation
Energy Technology Data Exchange (ETDEWEB)
Kim, K. S.; Kim, H. Y.; Ji, S. K. [KAERI, Taejon (Korea, Republic of)
2002-05-01
We developed a new transport lattice code called DENT-2D (Deterministic Neutral Particle Transport Code in 2-Dimensional Space), primarily to generate few-group constants for reactor physics diffusion codes. This code is designed to be coupled with the KAERI nodal reactor analysis code MASTER [1] to complete the design system package. CASMO-3 and HELIOS have been used to generate the few-group constants for MASTER. Currently DENT-2D includes only neutron transport calculation in 2-dimensional Cartesian geometry. The method of characteristics is adopted for the spatial discretization, which is advantageous for treating complicated geometry and highly anisotropic scattering. The subgroup method is used for the resonance treatment. The B1 approximation is used to obtain the criticality spectrum, accounting for the leakage effect in the real core situation. The exponential matrix method is used for the depletion calculation. The results of benchmark calculations show that the prediction capability of DENT-2D is comparable to that of other lattice codes such as HELIOS and CASMO-3.
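The exponential matrix method mentioned for the depletion calculation solves dN/dt = M N as N(t) = exp(Mt) N(0). A minimal sketch using a naive scaled Taylor series for the matrix exponential (a production depletion solver would use a more robust algorithm), checked against the analytic Bateman solution for a hypothetical two-nuclide chain:

```python
import numpy as np

def expm_taylor(a, terms=30):
    """Matrix exponential via scaled Taylor series: exp(A) computed as
    exp(A/2^s)^(2^s) with a truncated series for the scaled matrix."""
    squarings = 6
    a = np.asarray(a, dtype=float) / 2**squarings
    result = np.eye(a.shape[0])
    term = np.eye(a.shape[0])
    for k in range(1, terms):
        term = term @ a / k
        result = result + term
    for _ in range(squarings):
        result = result @ result
    return result

# Hypothetical two-nuclide chain A -> B with decay constants lam_a, lam_b:
lam_a, lam_b = 0.6, 0.1
burn_matrix = np.array([[-lam_a, 0.0],
                        [lam_a, -lam_b]])
n0 = np.array([1.0, 0.0])   # start with pure nuclide A
t = 2.0
n_t = expm_taylor(burn_matrix * t) @ n0

# Check against the analytic Bateman solution for nuclide B:
bateman_b = lam_a / (lam_b - lam_a) * (np.exp(-lam_a * t) - np.exp(-lam_b * t))
print(bool(np.isclose(n_t[1], bateman_b)))  # True
```

In a real lattice code M also contains transmutation terms built from flux-weighted cross sections, but the matrix-exponential step is the same.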
Benchmark calculations for elastic fermion-dimer scattering
Bour, Shahin; Lee, Dean; Meißner, Ulf-G
2012-01-01
We present continuum and lattice calculations for elastic scattering between a fermion and a bound dimer in the shallow binding limit. For the continuum calculation we use the Skorniakov-Ter-Martirosian (STM) integral equation to determine the scattering length and effective range parameter to high precision. For the lattice calculation we use the finite-volume method of Lüscher. We take into account topological finite-volume corrections to the dimer binding energy which depend on the momentum of the dimer. After subtracting these effects, we find from the lattice calculation kappa a_fd = 1.174(9) and kappa r_fd = -0.029(13). These results agree well with the continuum values kappa a_fd = 1.17907(1) and kappa r_fd = -0.0383(3) obtained from the STM equation. We discuss applications to cold atomic Fermi gases, deuteron-neutron scattering in the spin-quartet channel, and lattice calculations of scattering for nuclei and hadronic molecules at finite volume.
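The scattering length and effective range quoted above parameterize the low-energy phase shift through the effective range expansion k cot(delta) = -1/a + (r/2) k^2. A small sketch evaluating the fermion-dimer phase shift from the STM continuum values, with the expansion truncated at the range term and momenta in units of the dimer binding momentum kappa:

```python
import math

def phase_shift(k, a, r):
    """S-wave phase shift from the effective range expansion
    k*cot(delta) = -1/a + (r/2)*k^2, truncated at the range term."""
    kcot = -1.0 / a + 0.5 * r * k * k
    return math.atan2(k, kcot)

# STM continuum values in units of the dimer binding momentum kappa:
kappa = 1.0
a_fd = 1.17907 / kappa
r_fd = -0.0383 / kappa

# Near threshold delta -> pi for a system supporting a bound state,
# so the phase shift at small k sits just below 180 degrees.
delta = phase_shift(0.1 * kappa, a_fd, r_fd)
print(round(math.degrees(delta), 1))
```
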
Results of the isotopic concentrations of VVER calculational burnup credit benchmark No. 2 (CB2)
International Nuclear Information System (INIS)
The characterization of irradiated fuel materials is becoming more important with the increasing use of nuclear energy in the world. The purpose of this document is to present the nuclide concentration results calculated for VVER burnup credit benchmark No. 2 (CB2). The calculations were performed at the Nuclear Technology Center of Cuba. The CB2 benchmark specification, the second phase of the VVER burnup credit benchmark, is summarized in [1]. The CB2 benchmark focuses on the VVER burnup credit study proposed at the 1997 AER Symposium [2]. It should provide a comparison of the ability of various code systems and data libraries to predict VVER-440 spent fuel isotopic concentrations using depletion analysis. This phase of the benchmark calculations is still in progress; CB2 should be finished by summer 1999, and the evaluated results could be presented at the next AER Symposium. The results obtained are isotopic concentrations of spent fuel as a function of burnup and cooling time. The ORIGEN2 [3] point-depletion code was used to calculate the spent fuel concentrations. The depletion analysis was performed for VVER-440 irradiated fuel assemblies with an in-core irradiation time of 3 years, a burnup of 30,000 MWd/tU, and after-discharge cooling times of 0 and 1 year. This work also includes results obtained with other codes [4].
Benchmark calculations on residue production within the EURISOL DS project; Part I: thin targets
David, J.C; Boudard, A; Doré, D; Leray, S; Rapp, B; Ridikas, D; Thiollière, N
Report on benchmark calculations of residue production in thin targets. Calculations were performed using MCNPX 2.5.0 coupled to a selection of reaction models. The results were compared to nuclide production cross sections measured at GSI in inverse kinematics.
Benchmark calculations on residue production within the EURISOL DS project; Part II: thick targets
David, J.-C; Boudard, A; Doré, D; Leray, S; Rapp, B; Ridikas, D; Thiollière, N
Benchmark calculations of residue production using MCNPX 2.5.0. The calculations were compared to mass-distribution data for 5 different elements measured at ISOLDE, and to the specific activities of 28 radionuclides at different positions along the thick target measured at Dubna.
Energy Technology Data Exchange (ETDEWEB)
NONE
1998-06-01
This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the Russian Federation during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the benchmarks that the United States and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.
Benchmark analysis of the DeCART MOC code with the VENUS-2 critical experiment
International Nuclear Information System (INIS)
Computational benchmarks based on well-defined problems with a complete set of input and a unique solution are often used as a means of verifying the reliability of numerical solutions. VENUS is a widely used MOX benchmark problem for the validation of numerical methods and nuclear data sets. In this paper, the results of benchmarking the DeCART (Deterministic Core Analysis based on Ray Tracing) integral transport code are reported, using the OECD/NEA VENUS-2 MOX benchmark problem. Both 2-D and 3-D DeCART calculations were performed, and comparisons are reported with measured data as well as with the results of other benchmark participants. In general, the DeCART results agree well with both the experimental data and the results of the other participants. (authors)
International Nuclear Information System (INIS)
It was decided at the 31st NEACRP meeting in October 1988 to perform a second NEACRP benchmark calculation on High Conversion Light Water Reactor (HCLWR) lattices. The objective was to clarify the physics problems arising in the data and methods used in HCLWR lattice analyses, and also to obtain reference solutions for deterministic codes from continuous-energy Monte Carlo codes. In the new problems, analyses of the PROTEUS-LWHCR experiments were added. JAERI participated in this benchmark comparison using the VIM code (Monte Carlo method) and the SRAC code (collision probability method), with libraries based on the JENDL-2 file. In this report, all of the calculated results are summarized. Additional investigations of the resonance treatment and the geometrical modelling relevant to the benchmark calculation are also presented. (author)
International Nuclear Information System (INIS)
An accurate determination of damage fluence accumulated by reactor pressure vessels (RPV) as a function of time is essential in order to evaluate the vessel integrity for both pressurized thermal shock (PTS) transients and end-of-life considerations. The desired accuracy for neutron exposure parameters such as displacements per atom or fluence (E > 1 MeV) is of the order of 20 to 30%. However, these types of accuracies can only be obtained realistically by validation of nuclear data and calculational methods in benchmark facilities. The purposes of this paper are to review the needs and requirements for benchmark experiments, to discuss the status of current benchmark experiments, to summarize results and conclusions obtained so far, and to suggest areas where further benchmarking is needed
Computational benchmark for calculation of silane and siloxane thermochemistry.
Cypryk, Marek; Gostyński, Bartłomiej
2016-01-01
Geometries of model chlorosilanes, R3SiCl, silanols, R3SiOH, and disiloxanes, (R3Si)2O, R = H, Me, as well as the thermochemistry of the reactions involving these species were modeled using 11 common density functionals in combination with five basis sets to examine the accuracy and applicability of various theoretical methods in organosilicon chemistry. As the model reactions, the proton affinities of silanols and siloxanes, hydrolysis of chlorosilanes and condensation of silanols to siloxanes were considered. As the reference values, experimental bonding parameters and reaction enthalpies were used wherever available. Where there are no experimental data, W1 and CBS-QB3 values were used instead. For the gas phase conditions, excellent agreement between theoretical CBS-QB3 and W1 and experimental thermochemical values was observed. All DFT methods also give acceptable values and the precision of various functionals used was comparable. No significant advantage of newer more advanced functionals over 'classical' B3LYP and PBEPBE ones was noted. The accuracy of the results was improved significantly when triple-zeta basis sets were used for energy calculations, instead of double-zeta ones. The accuracy of calculations for the reactions in water solution within the SCRF model was inferior compared to the gas phase. However, by careful estimation of corrections to the ΔHsolv and ΔGsolv of H(+) and HCl, reasonable values of thermodynamic quantities for the discussed reactions can be obtained. PMID:26781663
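Benchmarking a functional against W1 or CBS-QB3 reference values ultimately reduces to comparing reaction enthalpies assembled from computed total enthalpies. A sketch of that bookkeeping, with hypothetical (made-up) enthalpy values in hartree for the silanol condensation stoichiometry 2 R3SiOH -> (R3Si)2O + H2O:

```python
HARTREE_TO_KCAL = 627.5094740631  # conversion factor, kcal/mol per hartree

def reaction_enthalpy_kcal(products, reactants):
    """Reaction enthalpy in kcal/mol from computed total enthalpies
    (hartree) of products and reactants."""
    return (sum(products) - sum(reactants)) * HARTREE_TO_KCAL

# Hypothetical enthalpies (NOT from the paper), illustrating the sign
# and unit conventions only:
dh = reaction_enthalpy_kcal(products=[-800.123, -76.400],   # siloxane, water
                            reactants=[-438.270, -438.260]) # two silanols
print(round(dh, 1))  # 4.4
```

The benchmark comparison is then the difference of such `dh` values between a DFT method and the W1/CBS-QB3 reference.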
Calculating the Fuzzy Project Network Critical Path
Directory of Open Access Journals (Sweden)
Nasser Shahsavari Pour
2012-04-01
Full text available. A project network consists of various activities. To determine the project duration and the amount of resources needed, the project completion time must be calculated correctly and exactly, so the critical path is computed. The activities on this path have no float, meaning they cannot be delayed without delaying the project; as a result, the calculation of the critical path in a project network has special importance. In this paper a simple method for calculating the critical path is proposed. Assigning an exact duration to an activity is not realistic in the real world, so fuzzy and uncertainty theories are used to assign activity durations. In the present study trapezoidal fuzzy numbers are assigned to the activity durations, and the total project time is then also a fuzzy number. In addition, fuzzy-number ranking is used to compare the fuzzy numbers. Finally, a practical example shows the efficiency of the method.
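Under the trapezoidal representation described above, a path length is the component-wise sum of the activity fuzzy numbers, and paths are compared with a ranking function. A minimal sketch using centroid ranking (one common defuzzification choice; the paper's ranking method may differ) on a tiny hypothetical network:

```python
def tra_add(a, b):
    """Add two trapezoidal fuzzy numbers (a1, a2, a3, a4) component-wise."""
    return tuple(x + y for x, y in zip(a, b))

def centroid(a):
    """Centroid (x-coordinate) of a trapezoidal membership function,
    used here to rank fuzzy path lengths."""
    a1, a2, a3, a4 = a
    den = 3 * ((a3 + a4) - (a1 + a2))
    if den == 0:                      # degenerate (crisp) number
        return (a1 + a2 + a3 + a4) / 4.0
    num = (a4**2 + a3**2 + a3 * a4) - (a1**2 + a2**2 + a1 * a2)
    return num / den

# Tiny hypothetical project: two paths A->C and B->C through finish activity C.
durations = {"A": (2, 3, 4, 5), "B": (1, 2, 2, 3), "C": (3, 4, 5, 6)}
paths = [("A", "C"), ("B", "C")]

lengths = {}
for p in paths:
    total = (0, 0, 0, 0)
    for act in p:
        total = tra_add(total, durations[act])
    lengths[p] = total

# The critical path is the one with the largest ranked fuzzy length:
critical = max(lengths, key=lambda p: centroid(lengths[p]))
print(critical, lengths[critical])  # ('A', 'C') (5, 7, 9, 11)
```

The fuzzy project completion time is the trapezoidal length of the critical path; its spread carries the duration uncertainty through to the schedule.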
BENCHMARKING UPGRADED HOTSPOT DOSE CALCULATIONS AGAINST MACCS2 RESULTS
Energy Technology Data Exchange (ETDEWEB)
Brotherton, Kevin
2009-04-30
The radiological consequence of interest for a documented safety analysis (DSA) is the centerline Total Effective Dose Equivalent (TEDE) incurred by the Maximally Exposed Offsite Individual (MOI) evaluated at the 95th percentile consequence level. An upgraded version of HotSpot (Version 2.07) has been developed with the capabilities to read site meteorological data and perform the necessary statistical calculations to determine the 95th percentile consequence result. These capabilities should allow HotSpot to join MACCS2 (Version 1.13.1) and GENII (Version 1.485) as radiological consequence toolbox codes in the Department of Energy (DOE) Safety Software Central Registry. Using the same meteorological data file, scenarios involving a one curie release of 239Pu were modeled in both HotSpot and MACCS2. Several sets of release conditions were modeled, and the results compared. In each case, input parameter specifications for each code were chosen to match one another as much as the codes would allow. The results from the two codes are in excellent agreement. Slight differences observed in results are explained by algorithm differences.
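The 95th-percentile consequence described above is obtained by ranking the doses computed for each weather trial and picking the value that bounds 95% of them. A minimal sketch with hypothetical dose values; the exact percentile convention differs between codes, and this uses one conservative choice:

```python
import math

def percentile95(doses):
    """95th-percentile consequence from per-weather-trial doses, taken
    as the smallest sample bounding 95% of the trials (one conservative
    convention; HotSpot/MACCS2 conventions may differ in detail)."""
    ranked = sorted(doses)
    idx = math.ceil(0.95 * len(ranked)) - 1
    return ranked[idx]

# 100 hypothetical per-weather-trial TEDE results (mrem) for a unit release:
trial_doses = list(range(1, 101))
print(percentile95(trial_doses))  # 95
```
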
D.C. Blitz (David)
2011-01-01
Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine whether current benchmark asset pricing models adequately describe the cross-section of stock returns.
VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4
Energy Technology Data Exchange (ETDEWEB)
Ellis, RJ
2001-02-02
The Task Force on Reactor-Based Plutonium Disposition, now an Expert Group, was set up through the Organization for Economic Cooperation and Development/Nuclear Energy Agency to facilitate technical assessments of burning weapons-grade plutonium mixed-oxide (MOX) fuel in U.S. pressurized-water reactors and Russian VVER nuclear reactors. More than ten countries participated in advancing the work of the Task Force in a major initiative: a blind benchmark study comparing code calculations against experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At the Oak Ridge National Laboratory, the HELIOS-1.4 code was used to perform a comprehensive study of pin-cell and core calculations for the VENUS-2 benchmark.
Energy Technology Data Exchange (ETDEWEB)
William Anderson; James Tulenko; Bradley Rearden; Gary Harms
2008-09-11
Nuclear industry interest in advanced fuel and reactor designs often drives toward fuels with uranium enrichments greater than 5 wt% 235U. Unfortunately, few data exist, in the form of reactor physics and criticality benchmarks, for uranium enrichments between 5 and 10 wt% 235U. The primary purpose of this project is to provide benchmarks for fuel similar to what may be required for advanced light water reactors (LWRs). These experiments will ultimately provide additional information for application to the criticality-safety bases of commercial fuel facilities handling fuel enriched beyond 5 wt% 235U.
Isopiestic density law of actinide nitrates applied to criticality calculations
International Nuclear Information System (INIS)
Up to now, criticality safety experts have used density laws fitted to experimental data and applied them both inside and outside the measurement range. Depending on the case, such an approach can be wrong for nitrate solutions. Seven components are concerned: UO2(NO3)2, U(NO3)4, Pu(NO3)4, Pu(NO3)3, Th(NO3)4, Am(NO3)3 and HNO3. To overcome this problem, IRSN has developed a new methodology for calculating nitrate solution densities, based on the thermodynamic concept of mixtures of binary electrolyte solutions at constant water activity, the so-called 'isopiestic' solutions. This article briefly presents the theoretical aspects of the method, its qualification against benchmarks, and its implementation in the IRSN graphical user interface. (author)
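The isopiestic approach rests on the observation that binary solutions mixed at constant water activity exhibit essentially no excess volume, so component volumes add. Below is a minimal sketch of that zero-excess-volume mixing rule; it is illustrative only (the blend masses and densities are invented), and the actual IRSN methodology involves full thermodynamic modeling of each binary solution.

```python
def mixture_density(masses, densities):
    """Density of a blend of binary solutions assuming zero excess volume.

    masses[i]    -- mass of binary solution i in the blend (g)
    densities[i] -- measured density of binary solution i (g/cm^3)

    Under the isopiestic (constant water activity) assumption the
    component volumes are additive, so rho = total mass / total volume.
    """
    total_mass = sum(masses)
    total_volume = sum(m / rho for m, rho in zip(masses, densities))
    return total_mass / total_volume

# hypothetical blend: 100 g of a uranyl nitrate solution with 50 g of an HNO3 solution
print(mixture_density([100.0, 50.0], [1.40, 1.20]))
```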
The solution of the LEU and MOX WWER-1000 calculation benchmark with the CARATE - multicell code
International Nuclear Information System (INIS)
Preparations for the disposition of weapons-grade plutonium in WWER-1000 reactors are in progress. The benchmark was defined by the Kurchatov Institute (S. Bychkov, M. Kalugin, A. Lazarenko) to assess the applicability of computer codes to weapons-grade MOX assembly calculations, within the framework of the OECD Nuclear Energy Agency 'Task force on reactor-based plutonium disposition'. (Authors)
A full CI treatment of Ne atom - A benchmark calculation performed on the NAS CRAY 2
Bauschlicher, C. W., Jr.; Langhoff, S. R.; Partridge, H.; Taylor, P. R.
1986-01-01
Full CI calculations are performed for Ne atom using Gaussian basis sets of up to triple-zeta plus double polarization quality. The total valence correlation energy through double, triple, quadruple and octuple excitations is compared for eight different basis sets. These results are expected to be an important benchmark for calibrating methods for estimating the importance of higher excitations.
International Nuclear Information System (INIS)
An accurate knowledge of irradiated fuel composition is of utmost importance for properties such as criticality, activity or residual heat generation. These magnitudes are in turn essential for fuel transport and storage, and depend on many parameters, among which fuel type is of course a key one. Within the frame of activities devoted to fuel cycle issues, the NEA WPRS proposed a Depletion Calculation Benchmark to compare results and trends obtained with different codes and libraries. While Phase 1 dealt with UOX fuel, Phase 2 is devoted to MOX fuel. The present paper compares isotopic compositions for MOX fuel obtained by GRS and AREVA with different codes and libraries. (orig.)
CANISTER HANDLING FACILITY CRITICALITY SAFETY CALCULATIONS
International Nuclear Information System (INIS)
This design calculation revises and updates the previous criticality evaluation for the canister handling, transfer and staging operations to be performed in the Canister Handling Facility (CHF) documented in BSC [Bechtel SAIC Company] 2004 [DIRS 167614]. The purpose of the calculation is to demonstrate that the handling operations of canisters performed in the CHF meet the nuclear criticality safety design criteria specified in the ''Project Design Criteria (PDC) Document'' (BSC 2004 [DIRS 171599], Section 4.9.2.2), the nuclear facility safety requirement in ''Project Requirements Document'' (Canori and Leitner 2003 [DIRS 166275], p. 4-206), the functional/operational nuclear safety requirement in the ''Project Functional and Operational Requirements'' document (Curry 2004 [DIRS 170557], p. 75), and the functional nuclear criticality safety requirements described in the ''Canister Handling Facility Description Document'' (BSC 2004 [DIRS 168992], Sections 3.1.1.3.4.13 and 3.2.3). Specific scope of work contained in this activity consists of updating the Category 1 and 2 event sequence evaluations as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2004 [DIRS 167268], Section 7). The CHF is limited in throughput capacity to handling sealed U.S. Department of Energy (DOE) spent nuclear fuel (SNF) and high-level radioactive waste (HLW) canisters, defense high-level radioactive waste (DHLW), naval canisters, multicanister overpacks (MCOs), vertical dual-purpose canisters (DPCs), and multipurpose canisters (MPCs) (if and when they become available) (BSC 2004 [DIRS 168992], p. 1-1). It should be noted that the design and safety analyses of the naval canisters are the responsibility of the U.S. Department of the Navy (Naval Nuclear Propulsion Program) and will not be included in this document. In addition, this calculation is valid for the current design of the CHF and may not reflect the ongoing design evolution of the facility
International Nuclear Information System (INIS)
This paper provides benchmark comparisons of the MCNPX Monte Carlo code to a series of integral critical experiments performed at the Toshiba Nuclear Critical Assembly (NCA) facility from 1994 to 2001 [1;2]. The beta-1 release version of ENDF/B-VII is used for all nuclides, processed with NJOY99 (update 96) and executed with the beta test version of MCNPX 2.6.A [3]. A total of fifty-two (52) low-enriched UO2 pin-lattice-in-water experiments were analyzed, with experimental W/F ratios from 0.791 to 1.756. The lattices were designed to simulate 8 x 8 and 9 x 9 Boiling Water Reactor (BWR) lattices, with hollow aluminum tubes inserted between the fuel rods to simulate voiding conditions in approximately half of the experiments. In addition to measured critical lattice configurations, a series of individual pin-power fission density estimates were made via gross gamma scans of individual fuel pins after irradiation. These data are also used to benchmark the Monte Carlo fission density calculations to confirm the code and cross-section applicability for use as a benchmarking tool for the LANCER02 lattice physics code [4]. (authors)
Criticality benchmarks validation of the Monte Carlo code TRIPOLI-2
Energy Technology Data Exchange (ETDEWEB)
Maubert, L. (Commissariat a l' Energie Atomique, Inst. de Protection et de Surete Nucleaire, Service d' Etudes de Criticite, 92 - Fontenay-aux-Roses (France)); Nouri, A. (Commissariat a l' Energie Atomique, Inst. de Protection et de Surete Nucleaire, Service d' Etudes de Criticite, 92 - Fontenay-aux-Roses (France)); Vergnaud, T. (Commissariat a l' Energie Atomique, Direction des Reacteurs Nucleaires, Service d' Etudes des Reacteurs et de Mathematique Appliquees, 91 - Gif-sur-Yvette (France))
1993-04-01
The three-dimensional, energy-pointwise Monte Carlo code TRIPOLI-2 was validated against criticality benchmarks including metallic spheres of uranium and plutonium, plutonium nitrate solutions, and square- and triangular-pitch assemblies of uranium oxide. Results show good agreement between experiments and calculations and provide a partial validation of the code and its ENDF/B-IV library. (orig./DG)
Kahler, A. C.; MacFarlane, R. E.; Mosteller, R. D.; Kiedrowski, B. C.; Frankle, S. C.; Chadwick, M. B.; McKnight, R. D.; Lell, R. M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S. F.; Sublet, J. C.; Trkov, A.; Trumbull, T. H.; Dunn, M.
2011-12-01
The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., "ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data," Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected
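Suite-wide eigenvalue comparisons of this kind are typically summarized as calculated-minus-experimental deviations in pcm (1 pcm = 1e-5 in k). The sketch below uses invented k-eff values, not results from the paper, purely to illustrate the bookkeeping.

```python
def bias_pcm(calculated, experimental):
    """Per-benchmark eigenvalue bias (k_calc - k_exp) in pcm (1 pcm = 1e-5)."""
    return [(kc - ke) * 1e5 for kc, ke in zip(calculated, experimental)]

def mean_bias_pcm(calculated, experimental):
    """Mean bias over a suite of benchmark assemblies, in pcm."""
    biases = bias_pcm(calculated, experimental)
    return sum(biases) / len(biases)

# hypothetical suite: the experimental k_eff of a critical assembly is 1.0000
k_calc = [1.0003, 0.9991, 1.0012, 0.9998]
k_exp = [1.0000, 1.0000, 1.0000, 1.0000]
print(mean_bias_pcm(k_calc, k_exp))
```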
International Nuclear Information System (INIS)
The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 418 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as 236U capture. Other deficiencies, such as the overprediction of Pu solution system critical eigenvalues and a decreasing trend in calculated eigenvalue for
Energy Technology Data Exchange (ETDEWEB)
Kahler, A. [Los Alamos National Laboratory (LANL); Macfarlane, R E [Los Alamos National Laboratory (LANL); Mosteller, R D [Los Alamos National Laboratory (LANL); Kiedrowski, B C [Los Alamos National Laboratory (LANL); Frankle, S C [Los Alamos National Laboratory (LANL); Chadwick, M. B. [Los Alamos National Laboratory (LANL); McKnight, R D [Argonne National Laboratory (ANL); Lell, R M [Argonne National Laboratory (ANL); Palmiotti, G [Idaho National Laboratory (INL); Hiruta, H [Idaho National Laboratory (INL); Herman, Michael W [Brookhaven National Laboratory (BNL); Arcilla, R [Brookhaven National Laboratory (BNL); Mughabghab, S F [Brookhaven National Laboratory (BNL); Sublet, J C [Culham Science Center, Abingdon, UK; Trkov, A. [Jozef Stefan Institute, Slovenia; Trumbull, T H [Knolls Atomic Power Laboratory; Dunn, Michael E [ORNL
2011-01-01
The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected (235)U and (239)Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as (236)U, (238,242)Pu and (241,243)Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical
Energy Technology Data Exchange (ETDEWEB)
Kahler, A.C.; MacFarlane, R.E.; Mosteller, R.D.; Kiedrowski, B.C.; Frankle, S.C.; Chadwick, M.B.; McKnight, R.D.; Lell, R.M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S.F.; Sublet, J.C.; Trkov, A.; Trumbull, T.H.; Dunn, M.
2011-12-01
The ENDF/B-VII.1 library is the latest revision to the United States Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., 'ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data,' Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected {sup 235}U and {sup 239}Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also
Energy Technology Data Exchange (ETDEWEB)
Lara, Rafael G.; Maiorino, Jose R., E-mail: rafael.lara@aluno.ufabc.edu.br, E-mail: joserubens.maiorino@ufabc.edu.br [Universidade Federal do ABC (UFABC), Santo Andre, SP (Brazil). Centro de Engenharia, Modelagem e Ciencias Sociais Aplicadas
2013-07-01
This work aimed at the implementation and qualification of the MCNP code on a supercomputer of the Universidade Federal do ABC, making available a next-generation simulation tool for precise calculations of nuclear reactors and of systems subject to radiation. The implementation of this tool will have multidisciplinary applications, covering various areas of engineering (nuclear, aerospace, biomedical), radiation physics and others.
Benchmark Testing of a New ^{56}Fe Evaluation for Criticality Safety Applications
Energy Technology Data Exchange (ETDEWEB)
Leal, Luiz C [ORNL; Ivanov, E. [Institut de Radioprotection et de Surete Nucleaire
2015-01-01
The SAMMY code was used to evaluate resonance parameters of the ^{56}Fe cross section in the resolved resonance energy range of 0–2 MeV, using transmission data together with capture, elastic, inelastic, and double-differential elastic cross sections. SAMMY fits R-matrix resonance parameters using the generalized least-squares technique (Bayes' theory). The evaluation yielded a set of resonance parameters that reproduces the experimental data very well, along with a resonance parameter covariance matrix for data uncertainty calculations. Benchmark tests were conducted to assess the performance of the evaluation in criticality calculations.
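The generalized least-squares (Bayes) update mentioned above can be sketched in its generic linearized form. This is the textbook GLS step, not SAMMY's actual R-matrix fitting machinery; all vectors and matrices below are illustrative placeholders.

```python
import numpy as np

def gls_update(x, V, G, y, W):
    """One generalized least-squares (Bayes) update of parameters x.

    x -- prior parameter vector;      V -- prior parameter covariance
    y -- measurement vector;          W -- measurement covariance
    G -- sensitivity matrix dy/dx, assumed linear here
    Returns the updated parameters and their (reduced) covariance.
    """
    S = G @ V @ G.T + W               # covariance of the residual y - Gx
    K = V @ G.T @ np.linalg.inv(S)    # gain matrix
    x_new = x + K @ (y - G @ x)       # parameters pulled toward the data
    V_new = V - K @ G @ V             # uncertainty reduced by the measurement
    return x_new, V_new
```

For a single parameter measured once with equal prior and measurement variance, the update lands halfway between prior and measurement, with half the variance, as expected from Bayes' theory.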
SCALE 5.1 - criticality and inventory calculation for WWER-440 fuel
International Nuclear Information System (INIS)
The latest version of the SCALE system (SCALE 5.1) was tested for criticality and inventory calculations for WWER-440 fuel. The criticality calculations (the KENO VI module) were tested on experimental critical cores (393 experiments from the ICSBEP handbook) and on the numerical benchmarks CB1, CB3 and CB4. Cross sections were prepared either by the NITAWL module (the original approach, used in SCALE 4.x) or by the CENTRM module (the default in SCALE 5.1); the article compares results obtained in both ways. The 44-group and 238-group libraries were used. The inventory calculations (the ORIGEN-S and TRITON modules) were tested on experiments in Russia (measurements from the 1980s, ISTC 2670) and on the numerical benchmark CB2. The ORIGEN-S module uses the WWER(3.6) library; the TRITON module uses the 44-group library. (Authors)
FUEL HANDLING FACILITY CRITICALITY SAFETY CALCULATIONS
Energy Technology Data Exchange (ETDEWEB)
C.E. Sanders
2005-06-30
The purpose of this design calculation is to perform a criticality evaluation of the Fuel Handling Facility (FHF) and the operations and processes performed therein. The current intent of the FHF is to receive transportation casks whose contents will be unloaded and transferred to waste packages (WP) or MGR Specific Casks (MSC) in the fuel transfer bays. Further, the WPs will also be prepared in the FHF for transfer to the sub-surface facility (for disposal). The MSCs will be transferred to the Aging Facility for storage. The criticality evaluation of the FHF features the following: (I) Consider the types of waste to be received in the FHF as specified below: (1) Uncanistered commercial spent nuclear fuel (CSNF); (2) Canistered CSNF (with the exception of horizontal dual-purpose canisters (DPC) and/or multi-purpose canisters (MPCs)); (3) Navy canistered SNF (long and short); (4) Department of Energy (DOE) canistered high-level waste (HLW); and (5) DOE canistered SNF (with the exception of MCOs). (II) Evaluate the criticality analyses previously performed for the existing Nuclear Regulatory Commission (NRC)-certified transportation casks (under 10 CFR 71) to be received in the FHF to ensure that these analyses address all FHF conditions including normal operations, and Category 1 and 2 event sequences. (III) Evaluate FHF criticality conditions resulting from various Category 1 and 2 event sequences. Note that there are currently no Category 1 and 2 event sequences identified for the FHF. Consequently, potential hazards from a criticality point of view will be considered as identified in the ''Internal Hazards Analysis for License Application'' document (BSC 2004c, Section 6.6.4). (IV) Assess effects of potential moderator intrusion into the fuel transfer bay for defense in depth. The SNF/HLW waste transfer activity (i.e., assembly and canister transfer) that is being carried out in the FHF has been classified as safety category.
Primer for criticality calculations with DANTSYS
International Nuclear Information System (INIS)
With the closure of many experimental facilities, the nuclear criticality safety analyst is increasingly required to rely on computer calculations to identify safe limits for the handling and storage of fissile materials. However, in many cases, the analyst has little experience with the specific codes available at his or her facility. Typically, two types of codes are available: deterministic codes such as ANISN or DANTSYS that solve an approximate model exactly and Monte Carlo Codes such as KENO or MCNP that solve an exact model approximately. Often, the analyst feels that the deterministic codes are too simple and will not provide the necessary information, so most modeling uses Monte Carlo methods. This sometimes means that hours of effort are expended to produce results available in minutes from deterministic codes. A substantial amount of reliable information on nuclear systems can be obtained using deterministic methods if the user understands their limitations. To guide criticality specialists in this area, the Nuclear Criticality Safety Group at the University of New Mexico in cooperation with the Radiation Transport Group at Los Alamos National Laboratory has designed a primer to help the analyst understand and use the DANTSYS deterministic transport code for nuclear criticality safety analyses. (DANTSYS is the name of a suite of codes that users more commonly know as ONEDANT, TWODANT, TWOHEX, and THREEDANT.) It assumes a college education in a technical field, but there is no assumption of familiarity with neutronics codes in general or with DANTSYS in particular. The primer is designed to teach by example, with each example illustrating two or three DANTSYS features useful in criticality analyses
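The kind of answer a deterministic code returns in minutes can be illustrated with a one-group, one-dimensional diffusion eigenvalue solver using power iteration. This is a toy sketch, far simpler than the discrete-ordinates transport solution DANTSYS performs, and the cross sections below are invented for illustration.

```python
import numpy as np

def keff_slab(width, n, D, sigma_a, nu_sigma_f, tol=1e-8):
    """k-eff of a bare homogeneous slab: one-group diffusion, power iteration.

    Finite-difference form of  -D phi'' + sigma_a phi = (1/k) nu_sigma_f phi
    with zero-flux boundary conditions at both faces; n interior mesh points.
    """
    h = width / (n + 1)
    # tridiagonal loss operator (leakage + absorption)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0 * D / h**2 + sigma_a
        if i > 0:
            A[i, i - 1] = -D / h**2
        if i < n - 1:
            A[i, i + 1] = -D / h**2
    phi = np.ones(n)
    k = 1.0
    for _ in range(1000):
        src = nu_sigma_f * phi / k          # fission source from previous flux
        phi_new = np.linalg.solve(A, src)   # "outer iteration" flux solve
        k_new = k * phi_new.sum() / phi.sum()
        if abs(k_new - k) < tol:
            break
        phi, k = phi_new, k_new
    return k_new

# illustrative one-group constants (cm, 1/cm): a 100 cm slab
print(keff_slab(100.0, 99, D=1.0, sigma_a=0.1, nu_sigma_f=0.11))
```

The converged eigenvalue agrees with the analytic one-group result nu_sigma_f / (sigma_a + D B^2), B = pi/width, which is exactly the kind of quick sanity check deterministic methods make cheap.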
Institute of Scientific and Technical Information of China (English)
XIAO Hai; LI Jun
2008-01-01
Benchmark calculations on the molar atomization enthalpy, geometry, and vibrational frequencies of uranium hexafluoride (UF6) have been performed by using relativistic density functional theory (DFT) with various levels of relativistic effects, different types of basis sets, and exchange-correlation functionals. Scalar relativistic effects are shown to be critical for the structural properties. The spin-orbit coupling effects are important for the calculated energies, but are much less important for other calculated ground-state properties of closed-shell UF6. We conclude through systematic investigations that ZORA- and RECP-based relativistic DFT methods are both appropriate for incorporating relativistic effects. Comparisons of different types of basis sets (Slater, Gaussian, and plane-wave types) and various levels of theoretical approximation of the exchange-correlation functionals were also made.
Criticality experiments and benchmarks for cross section evaluation: the neptunium case
Directory of Open Access Journals (Sweden)
Duran I.
2013-03-01
The 237Np neutron-induced fission cross section has recently been measured over a large energy range (from eV to GeV) at the n_TOF facility at CERN. When compared to previous measurements, the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we consider a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by enriched uranium (235U) so as to approach criticality with fast neutrons. The calculated multiplication factor keff is in better agreement with the experiment (the deviation of 750 pcm is reduced to 250 pcm) when the ENDF/B-VII.0 evaluation of the 237Np fission cross section is replaced by the n_TOF data. We also explore the hypothesis of deficiencies in the 235U inelastic cross section, which has been invoked by some authors to explain the deviation of 750 pcm; the large distortion of the inelastic cross section that this would require is incompatible with existing measurements. We also show that the average neutron multiplicity of 237Np can hardly be incriminated, because of the high accuracy of the existing data. Fission rate ratios or averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section, but at least one of the benchmark experiments, where the active deposits have been well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section of 237Np.
Criticality experiments and benchmarks for cross section evaluation: the neptunium case
Leong, L. S.; Tassan-Got, L.; Audouin, L.; Paradela, C.; Wilson, J. N.; Tarrio, D.; Berthier, B.; Duran, I.; Le Naour, C.; Stéphan, C.
2013-03-01
The 237Np neutron-induced fission cross section has recently been measured over a large energy range (from eV to GeV) at the n_TOF facility at CERN. When compared to previous measurements, the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we consider a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by enriched uranium (235U) so as to approach criticality with fast neutrons. The calculated multiplication factor keff is in better agreement with the experiment (the deviation of 750 pcm is reduced to 250 pcm) when the ENDF/B-VII.0 evaluation of the 237Np fission cross section is replaced by the n_TOF data. We also explore the hypothesis of deficiencies in the 235U inelastic cross section, which has been invoked by some authors to explain the deviation of 750 pcm; the large distortion of the inelastic cross section that this would require is incompatible with existing measurements. We also show that the average neutron multiplicity of 237Np can hardly be incriminated, because of the high accuracy of the existing data. Fission rate ratios or averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section, but at least one of the benchmark experiments, where the active deposits have been well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section of 237Np.
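The 750 pcm and 250 pcm deviations quoted above follow the usual convention of expressing eigenvalue differences in units of 1e-5. A trivial sketch of that bookkeeping, with illustrative numbers:

```python
def deviation_pcm(k_calc, k_exp=1.0):
    """Calculated-minus-experimental eigenvalue deviation in pcm (1 pcm = 1e-5 in k)."""
    return (k_calc - k_exp) * 1e5

def reactivity_pcm(k):
    """Reactivity rho = (k - 1)/k, expressed in pcm."""
    return (k - 1.0) / k * 1e5

# hypothetical: a calculation gives k_eff = 1.0075 for a critical (k_exp = 1) sphere
print(deviation_pcm(1.0075))   # about 750 pcm
```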
EA-MC Neutronic Calculations on IAEA ADS Benchmark 3.2
International Nuclear Information System (INIS)
The neutronics and the transmutation properties of the IAEA ADS benchmark 3.2 setup, the 'Yalina' experiment or ISTC project B-70, have been studied through an extensive amount of 3-D Monte Carlo calculations at CERN. The simulations were performed with the state-of-the-art computer code package EA-MC, developed at CERN. The calculational approach is outlined and the results are presented in accordance with the guidelines given in the benchmark description. A variety of experimental conditions and parameters are examined; three different fuel rod configurations and three types of neutron sources are applied to the system. Reactivity change effects introduced by removal of fuel rods in both central and peripheral positions are also computed. Irradiation samples located in a total of 8 geometrical positions are examined. Calculations of capture reaction rates in 129I, 237Np and 243Am samples and of fission reaction rates in 235U, 237Np and 243Am samples are presented. Simulated neutron flux densities and energy spectra as well as spectral indices inside experimental channels are also given according to benchmark specifications. Two different nuclear data libraries, JAR-95 and JENDL-3.2, are applied for the calculations
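Capture and fission reaction rates of the kind computed for the irradiation samples are integrals of pointwise cross sections against the flux spectrum. A minimal trapezoidal sketch follows; the grids and values are invented, not benchmark data, and real codes use far finer multigroup or continuous-energy treatments.

```python
def reaction_rate(energies, sigma, flux):
    """Reaction rate per nucleus, R = integral of sigma(E) * phi(E) dE,
    by trapezoidal integration of pointwise data on a common energy grid."""
    rate = 0.0
    for i in range(len(energies) - 1):
        de = energies[i + 1] - energies[i]
        rate += 0.5 * (sigma[i] * flux[i] + sigma[i + 1] * flux[i + 1]) * de
    return rate

# illustrative two-point grid: constant cross section and flux
print(reaction_rate([0.0, 1.0], [2.0, 2.0], [3.0, 3.0]))
```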
Peneliau, Y.; Litaize, O.; Archier, P.; De Saint Jean, C.
2014-04-01
A large set of nuclear data is being investigated to improve the predictions of the new neutron transport simulation codes. For the next generation of nuclear power plants (GEN IV projects), one expects to reduce the calculated uncertainties, which come mainly from nuclear data and are still very large before integral information is taken into account in the adjustment process. In France, future nuclear power plant concepts will probably use MOX fuel, either in Sodium Fast Reactors or in Gas Cooled Fast Reactors. Consequently, knowledge of 239Pu cross sections and other nuclear data is a crucial issue for reducing these sources of uncertainty. The Prompt Fission Neutron Spectra (PFNS) of 239Pu are part of these relevant data (an IAEA working group is even dedicated to PFNS), and the work presented here deals with this particular topic. The main international data files (i.e., JEFF-3.1.1, ENDF/B-VII.0, JENDL-4.0, BRC-2009) have been considered and compared with two different spectra, coming from the works of Maslov and Kornilov, respectively. The spectra are first characterized by calculating their mathematical moments. Then a reference calculation using the whole JEFF-3.1.1 evaluation file is performed and compared with calculations performed with new evaluation files in which the data block containing the fission spectra (MF=5, MT=18) is replaced by each investigated spectrum. A set of benchmarks is used to analyze the effects of the PFNS, covering criticality cases and mock-up cases in various neutron flux spectra (thermal, intermediate, and fast). Data from many ICSBEP experiments are used (PU-SOL-THERM, PU-MET-FAST, PU-MET-INTER and PU-MET-MIXED), and French mock-up experiments are also investigated (EOLE for the thermal and MASURCA for the fast neutron flux spectrum). This study shows that many experiments and neutron parameters are very sensitive to the choice of PFNS.
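Characterizing a spectrum by its mathematical moments, as done above for the PFNS, can be sketched for a tabulated spectrum. For a Maxwellian chi(E) ~ sqrt(E) exp(-E/T) the mean energy should approach 1.5 T; the temperature value below is a typical illustrative choice, not one of the evaluated 239Pu spectra.

```python
import math

def spectrum_moment(energies, chi, order=1):
    """n-th energy moment <E^n> of a tabulated, unnormalized spectrum chi(E),
    computed with midpoint values on each energy interval."""
    num = 0.0
    norm = 0.0
    for i in range(len(energies) - 1):
        de = energies[i + 1] - energies[i]
        em = 0.5 * (energies[i] + energies[i + 1])   # midpoint energy
        cm = 0.5 * (chi[i] + chi[i + 1])             # midpoint spectrum value
        num += em**order * cm * de
        norm += cm * de
    return num / norm

# Maxwellian test: mean energy of chi(E) ~ sqrt(E) exp(-E/T) is 1.5 T
T = 1.32  # MeV, an illustrative temperature parameter
grid = [0.001 * i for i in range(1, 30001)]          # 1 keV .. 30 MeV
chi = [math.sqrt(e) * math.exp(-e / T) for e in grid]
print(spectrum_moment(grid, chi))  # close to 1.5 * 1.32 = 1.98 MeV
```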
VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4 - Revised Report
Energy Technology Data Exchange (ETDEWEB)
Ellis, RJ
2001-06-01
The Task Force on Reactor-Based Plutonium Disposition (TFRPD) was formed by the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) to study reactor physics, fuel performance, and fuel cycle issues related to the disposition of weapons-grade (WG) plutonium as mixed-oxide (MOX) reactor fuel. To advance the goals of the TFRPD, 10 countries and 12 institutions participated in a major TFRPD activity: a blind benchmark study to compare code calculations to experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At Oak Ridge National Laboratory, the HELIOS-1.4 code system was used to perform the comprehensive study of pin-cell and MOX core calculations for the VENUS-2 MOX core benchmark study.
Rubtsova, O A; Moro, A M
2008-01-01
The direct comparison of two different continuum discretization methods towards the solution of a composite particle scattering off a nucleus is presented. The first approach -- the Continuum-Discretized Coupled Channel method -- is based on the differential equation formalism, while the second one -- the Wave-Packet Continuum Discretization method -- uses the integral equation formulation for the composite-particle scattering problem. As benchmark calculations we have chosen the deuteron off \
Energy Technology Data Exchange (ETDEWEB)
Dudnikov, A.A.; Alekseev, P.N.; Subbotin, S.A.; Vasiliev, A.V.; Abagyan, L.P.; Alexeyev, N.I.; Gomin, E.A.; Ponomarev, L.I.; Kolyaskin, O.E.; Men' shikov, L.I. [Russian Research Centre Kurchatov Inst., Moscow (Russian Federation); Kolesov, V.F.; Ivanin, I.A.; Zavialov, N.V. [Russian Federal Nuclear Center, RFNC-VNIIEF, Nizhnii Novgorod region (Russian Federation)
2001-07-01
The facility for incineration of long-lived minor actinides and some dangerous fission products should be an important feature of the future nuclear power (NP). For many reasons the accelerator-driven liquid-fuel reactor can be considered a promising burner of radioactive waste. The fuel of such a reactor is a fluoride molten salt composition with minor actinides (Np, Cm, Am) and some fission products ({sup 99}Tc, {sup 129}I, etc.). Preliminary analysis shows that the values of keff calculated with different codes and nuclear data differ by up to several percent for such fuel compositions. Reliable critical and subcritical benchmark experiments with molten salt fuel compositions containing significant quantities of minor actinides are absent. One of the main tasks for the numerical study of this problem is the estimation of nuclear data for such fuel compositions and the verification of the different numerical codes used for the calculation of keff, neutron spectra and reaction rates. This is especially important for the resonance region, where experimental data are poor or absent. A calculation benchmark of the cascade subcritical molten salt reactor is developed. For the chosen nuclear fuel composition, a comparison of the results obtained by three different Monte Carlo codes (MCNP4A, MCU, and C95) using three different nuclear data libraries is presented. This report concerns the investigation of the main peculiarities of the subcritical molten salt reactor unit carried out at the beginning of ISTC project 1486. (author)
International Nuclear Information System (INIS)
The facility for incineration of long-lived minor actinides and some dangerous fission products should be an important feature of the future nuclear power (NP). For many reasons the accelerator-driven liquid-fuel reactor can be considered a promising burner of radioactive waste. The fuel of such a reactor is a fluoride molten salt composition with minor actinides (Np, Cm, Am) and some fission products (99Tc, 129I, etc.). Preliminary analysis shows that the values of keff calculated with different codes and nuclear data differ by up to several percent for such fuel compositions. Reliable critical and subcritical benchmark experiments with molten salt fuel compositions containing significant quantities of minor actinides are absent. One of the main tasks for the numerical study of this problem is the estimation of nuclear data for such fuel compositions and the verification of the different numerical codes used for the calculation of keff, neutron spectra and reaction rates. This is especially important for the resonance region, where experimental data are poor or absent. A calculation benchmark of the cascade subcritical molten salt reactor is developed. For the chosen nuclear fuel composition, a comparison of the results obtained by three different Monte Carlo codes (MCNP4A, MCU, and C95) using three different nuclear data libraries is presented. This report concerns the investigation of the main peculiarities of the subcritical molten salt reactor unit carried out at the beginning of ISTC project 1486. (author)
International Nuclear Information System (INIS)
This paper describes details of the IAEA/CRP benchmark calculation performed by JAEA on the control rod withdrawal test in the Phenix End-of-Life Experiments. The power distribution deviation caused by control rod insertion/withdrawal, which is the major target of the benchmark, is well reproduced by the calculation. In addition to the CRP activities, the neutron and photon heat transport effect is evaluated in the nuclear heating calculation of the benchmark analysis. It is confirmed that the neutron and photon heat transport effect contributes to the improvement of the absolute power calculation results in the breeder blanket region. (author)
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark
International Nuclear Information System (INIS)
There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks that involve no normalization, which can cause some quantities to cancel. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per electron incident on the target. The characteristics of the accelerator and experimental setup were precisely determined, and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study, uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the Expression of Uncertainty in Measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. In a second step, standard uncertainties known from the experiment, e.g. uncertainties of geometric dimensions, are assigned to each quantity. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from the literature. The significant uncertainty contributions are identified as
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.
Renner, F; Wulff, J; Kapsch, R-P; Zink, K
2015-10-01
There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks that involve no normalization, which can cause some quantities to cancel. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per electron incident on the target. The characteristics of the accelerator and experimental setup were precisely determined, and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study, uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the Expression of Uncertainty in Measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. In a second step, standard uncertainties known from the experiment, e.g. uncertainties of geometric dimensions, are assigned to each quantity. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from the literature. The significant uncertainty contributions are identified as
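The two-step uncertainty analysis described above (sensitivity coefficients first, standard uncertainties second) follows the standard GUM propagation formula u_c²(y) = Σᵢ (cᵢ uᵢ)². A minimal numeric sketch, with a toy model function and made-up uncertainties standing in for the actual simulation quantities:

```python
import math

def combined_uncertainty(f, x, u, h=1e-6):
    """GUM law of propagation: u_c(y) = sqrt(sum_i (c_i*u_i)^2),
    with sensitivity coefficients c_i = df/dx_i from central differences."""
    uc2 = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        c_i = (f(xp) - f(xm)) / (2.0 * h)  # numerical sensitivity coefficient
        uc2 += (c_i * u[i]) ** 2
    return math.sqrt(uc2)

# toy model (assumed, not the EGSnrc simulation): y = k * I / r^2
model = lambda p: p[0] * p[1] / p[2] ** 2
x = [1.0, 2.0, 0.5]       # calibration factor, current, distance (illustrative)
u = [0.01, 0.02, 0.001]   # standard uncertainties of the inputs (illustrative)
u_c = combined_uncertainty(model, x, u)
```

Listing the individual (cᵢ uᵢ)² terms instead of only their sum is what identifies the "significant uncertainty contributions" the abstract refers to.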
Energy Technology Data Exchange (ETDEWEB)
Primm III, RT
2002-05-29
This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.
Benchmarking of calculation schemes in Apollo2 and COBAYA3 for VVER lattices
Zheleva, Nonka; Ivanov, Plamen; Todorova, Galina; Kolev, Nikola; Herrero Carrascosa, José Javier
2013-01-01
This paper presents solutions of the NURISP VVER lattice benchmark using APOLLO2, TRIPOLI4 and COBAYA3 pin-by-pin. The main objective is to validate MOC based calculation schemes for pin-by-pin cross-section generation with APOLLO2 against TRIPOLI4 reference results. A specific objective is to test the APOLLO2 generated cross-sections and interface discontinuity factors in COBAYA3 pin-by-pin calculations with unstructured mesh. The VVER-1000 core consists of large hexagonal assemblies with 2m...
International Nuclear Information System (INIS)
The paper gives a brief survey of the results of the fifth three-dimensional dynamic Atomic Energy Research benchmark calculation, obtained with the code DYN3D/ATHLET at NRI Rez. This benchmark was defined at the seventh Atomic Energy Research Symposium (Hoernitz near Zittau, 1997). Its initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle under hot shutdown conditions, with one control rod group stuck out. The calculations were performed with the externally coupled codes ATHLET Mod.1.1 Cycle C and DYN3DH1.1/M3. The standard WWER-440/213 input deck of the ATHLET code was adapted for benchmark purposes and for coupling with the code DYN3D. The first part of the paper contains a brief characterization of the NPP input deck and the reactor core model. The second part shows the time dependencies of important global and local parameters. In comparison with the results published at the eighth Atomic Energy Research Symposium (Bystrice nad Pernstejnem, 1998), the results published in this paper are based on improved ATHLET descriptions of the control and safety systems. (Author)
Results of the fifth three-dimensional dynamic atomic energy research benchmark problem calculation
International Nuclear Information System (INIS)
The paper gives a brief survey of the results of the fifth three-dimensional dynamic Atomic Energy Research benchmark calculation, obtained with the code DYN3D/ATHLET at NRI Rez. This benchmark was defined at the seventh AER Symposium. Its initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle under hot shutdown conditions, with one control rod group stuck out. The calculations were performed with the externally coupled codes ATHLET Mod.1.1 Cycle C and DYN3DH1.1/M3. The Kasseta library was used for the generation of the reactor core neutronic parameters. The standard WWER-440/213 input deck of the ATHLET code was adapted for benchmark purposes and for coupling with the code DYN3D. The first part of the paper contains a brief characterization of the NPP input deck and the reactor core model. The second part shows the time dependencies of important global, fuel assembly and loop parameters. (Author)
Subgroup Benchmark Calculations for the Intra-Pellet Nonuniform Temperature Cases
Energy Technology Data Exchange (ETDEWEB)
Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jung, Yeon Sang [Seoul National Univ. (Korea, Republic of); Liu, Yuxuan [Univ. of Michigan, Ann Arbor, MI (United States); Joo, Han Gyu [Seoul National Univ. (Korea, Republic of)
2016-08-01
A benchmark suite has been developed by Seoul National University (SNU) for intra-pellet nonuniform temperature distribution cases, based on practical temperature profiles corresponding to the thermal power levels. Though a new subgroup capability for nonuniform temperature distributions was implemented in MPACT, no validation calculation had been performed for the new capability. This study focuses on benchmarking the new capability through a code-to-code comparison. Two continuous-energy Monte Carlo codes, McCARD and CE-KENO, are engaged in obtaining reference solutions, and the MPACT results are compared to those of the SNU nTRACER code, which uses a similar cross-section library and subgroup method to obtain self-shielded cross sections.
Subgroup Benchmark Calculations for the Intra-Pellet Nonuniform Temperature Cases
Energy Technology Data Exchange (ETDEWEB)
Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jung, Yeon Sang [Seoul National Univ. (Korea, Republic of); Liu, Yuxuan [Univ. of Michigan, Ann Arbor, MI (United States); Joo, Han Gyu [Seoul National Univ. (Korea, Republic of)
2016-08-01
A benchmark suite has been developed by Seoul National University (SNU) for intra-pellet nonuniform temperature distribution cases, based on practical temperature profiles corresponding to the thermal power levels. Though a new subgroup capability for nonuniform temperature distributions was implemented in MPACT, no validation calculation had been performed for the new capability. This study focuses on benchmarking the new capability through a code-to-code comparison. Two continuous-energy Monte Carlo codes, McCARD and CE-KENO, are engaged in obtaining reference solutions, and the MPACT results are compared to those of the SNU nTRACER code, which uses a similar cross-section library and subgroup method to obtain self-shielded cross sections.
Benchmarking of MCNP against B&W LRC Core XI critical experiments
International Nuclear Information System (INIS)
The MCNP Monte Carlo code and its ENDF/B-V continuous-energy cross-section library previously have been benchmarked against a variety of critical experiments, and that benchmarking recently has been extended to include its ENDF/B-VI continuous-energy cross-section library and additional critical experiments. This study further extends the benchmarking of MCNP and its two continuous-energy libraries to 17 large-scale mockup experiments that closely resemble the core of a pressurized water reactor (PWR). The experiments were performed at Babcock & Wilcox's Lynchburg Research Center in 1970 and 1971. The series was designated as Core XI, and the individual experiments were characterized as different ''loadings.'' The experiments were performed inside a large aluminum tank that contained borated water. The water height for each loading was exactly 145 cm, and the soluble boron concentration in the water was adjusted until the configuration was slightly supercritical, with a value of 1.0007 for keff. Pin-by-pin power distributions were measured for several of the loadings
Calculations to an IAHR-benchmark test using the CFD-code CFX-4
Energy Technology Data Exchange (ETDEWEB)
Krepper, E.
1998-10-01
The calculation concerns a test, which was defined as a benchmark for 3-D codes by the working group of advanced nuclear reactor types of IAHR (International Association of Hydraulic Research). The test is well documented and detailed measuring results are available. The test aims at the investigation of phenomena, which are important for heat removal at natural circulation conditions in a nuclear reactor. The task for the calculation was the modelling of the forced flow field of a single phase incompressible fluid with consideration of heat transfer and influence of gravity. These phenomena are typical also for other industrial processes. The importance of correct modelling of these phenomena also for other applications is a motivation for performing these calculations. (orig.)
Experimental Data from the Benchmark SuperCritical Wing Wind Tunnel Test on an Oscillating Turntable
Heeg, Jennifer; Piatak, David J.
2013-01-01
The Benchmark SuperCritical Wing (BSCW) wind tunnel model served as a semi-blind test case for the 2012 AIAA Aeroelastic Prediction Workshop (AePW). The BSCW was chosen as a test case due to its geometric simplicity and flow physics complexity. The data sets examined include unforced-system information and forced pitching oscillations. The aerodynamic challenges presented by this AePW test case include a strong shock that was observed to be unsteady even for the unforced-system cases, shock-induced separation, and trailing-edge separation. The current paper quantifies these characteristics at the AePW test condition and at a suggested benchmarking test condition. General characteristics of the model's behavior are examined for the entire available data set.
Validation of the Monteburns code for criticality calculation of TRIGA reactors
Energy Technology Data Exchange (ETDEWEB)
Dalle, Hugo Moura [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN), Belo Horizonte, MG (Brazil); Jeraj, Robert [Jozef Stefan Institute, Ljubljana (Slovenia)
2002-07-01
Use of Monte Carlo methods in burnup calculations of nuclear fuel has become practical due to increased speed of computers. Monteburns is an automated computational tool that links the Monte Carlo code MCNP with the burnup and decay code ORIGEN2.1. This code system was used to simulate a criticality benchmark experiment with burned fuel on a TRIGA Mark II research reactor. Two core configurations were simulated and k{sub eff} values calculated. The comparison between the calculated and experimental values shows good agreement, which indicates that the MCNP/Monteburns/ORIGEN2.1 system gives reliable results for neutronic simulations of TRIGA reactors. (author)
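The Monteburns-style alternation of transport and depletion steps can be caricatured in a few lines. This is only a single-nuclide, one-group toy model under a constant-power assumption; the real code system exchanges full MCNP flux spectra and ORIGEN2.1 isotopics at each step.

```python
import math

def deplete(n0, sigma_abs, power, steps, dt):
    """Alternate a 'transport' step (one-group flux at constant power)
    with an analytic 'depletion' step, Monteburns-style. All quantities
    are in arbitrary consistent units; this is an illustrative toy model."""
    n = n0
    for _ in range(steps):
        flux = power / (sigma_abs * n)         # transport: flux implied by the power level
        n *= math.exp(-sigma_abs * flux * dt)  # depletion: Bateman solution over dt
    return n

n_end = deplete(n0=1.0, sigma_abs=1.0, power=0.01, steps=10, dt=1.0)
# at constant power the fuel density falls roughly linearly, ~10% here
```

The point of the alternation is that the flux used for depletion is always consistent with the current fuel composition, which is exactly what the MCNP/Monteburns/ORIGEN2.1 coupling automates.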
Energy Technology Data Exchange (ETDEWEB)
Kozier, K. S. [Atomic Energy of Canada Limited, Chalk River Laboratories, Chalk River, Ont. K0J 1J0 (Canada)
2006-07-01
This paper examines the sensitivity of MCNP5 k{sub eff} results to various deuterium data files for a simple benchmark problem consisting of an 8.4-cm radius sphere of uranium surrounded by an annulus of deuterium at the nuclide number density corresponding to heavy water. This study was performed to help clarify why {Delta}k{sub eff} values of about 10 mk are obtained when different ENDF/B deuterium data files are used in simulations of critical experiments involving solutions of high-enrichment uranyl fluoride in heavy water, while simulations of low-leakage, heterogeneous critical lattices of natural-uranium fuel rods in heavy water show differences of <1 mk. The benchmark calculations were performed as a function of deuterium reflector thickness for several uranium compositions using deuterium ACE files derived from ENDF/B-VII.b1 (release beta 1), ENDF/B-VI.4 and JENDL-3.3, which differ primarily in the energy/angle distributions for elastic scattering <3.2 MeV. Calculations were also performed using modified ACE files having equiprobable cosine bin values in the centre-of-mass reference frame in a progressive manner with increasing energy. It was found that the {Delta}k{sub eff} values increased with deuterium reflector thickness and uranium enrichment. The studies using modified ACE files indicate that most of the reactivity differences arise at energies <1 MeV; hence, this energy range should be given priority if new scattering distribution measurements are undertaken. (authors)
Energy Technology Data Exchange (ETDEWEB)
Le Corre, J.M.; Adamsson, C.; Alvarez, P., E-mail: lecorrjm@westinghouse.com, E-mail: carl.adamsson@psi.ch, E-mail: alvarep@westinghouse.com [Westinghouse Electric Sweden AB (Sweden)
2011-07-01
A benchmark analysis of the transient BFBT data [1], measured in an 8x8 fuel assembly design under typical BWR transient conditions, was performed using the VIPRE-W/MEFISTO-T code package. This is a continuation of the BFBT steady-state benchmark activities documented in [2] and [3]. All available transient void and pressure drop experimental data were considered and the measurements were compared with the predictions of the VIPRE-W sub-channel analysis code using various modeling approaches, including the EPRI drift flux void correlation. Detailed analyses of the code results were performed and it was demonstrated that the VIPRE-W transient predictions are generally reliable over the tested conditions. Available transient dryout data were also considered and the measurements were compared with the predictions of the VIPRE-W/MEFISTO-T film flow calculations. The code calculates the transient multi-film flowrate distributions in the BFBT bundle, including the effect of spacer grids on drop deposition enhancement, and the dryout criterion corresponds to the total liquid film disappearance. After calibration of the grid enhancement effect with a very small subset of the steady-state critical power database, the code could predict the time and location of transient dryout with very good accuracy. (author)
International Nuclear Information System (INIS)
The reliability of calculation tools for evaluating dose rates behind multi-layered shields is important with regard to the certification of transport and storage casks. Current benchmark databases such as SINBAD do not offer such configurations because they were developed for reactor and accelerator purposes. For this reason, a benchmark suite has been developed, based on in-house experiments containing dose rates measured at different distances and levels from a transport and storage cask and on a public benchmark, to validate Monte Carlo transport codes. The analysed and summarised experiments include a 60Co point source located in a cylindrical cask, a 252Cf line source shielded by iron and polyethylene (PE) and a bare 252Cf source moderated by PE in a concrete labyrinth with different inserted shielding materials to quantify neutron streaming effects on measured dose rates. Both MCNP (version 5.1.6) and MAVRIC, included in the SCALE 6.1 package, have been compared for photon and neutron transport. Achieving low deviations between calculation and measurement requires precise source term specification and exact measurements of the dose rates, which have been evaluated carefully, including known uncertainties. In MAVRIC, different source descriptions with respect to the group structure of the nuclear data library are analysed for the calculation of gamma dose rates, because the energy lines of 60Co can only be modelled in groups. In total, the comparison shows that MCNP agrees very well with the measurements, within up to two standard deviations, and that MAVRIC behaves similarly under the prerequisite that the source model is optimized. (author)
Benchmark calculation of APOLLO-2 and SLAROM-UF in a fast reactor lattice
International Nuclear Information System (INIS)
A lattice cell benchmark calculation is carried out for APOLLO2 and SLAROM-UF on the infinite lattice of a simple pin cell representative of a fast reactor. The accuracy in k-infinity and reaction rates is investigated in their reference and standard level calculations. In the first reference level calculation, APOLLO2 and SLAROM-UF agree with the reference value of k-infinity obtained by a continuous-energy Monte Carlo calculation within 50 pcm. However, larger errors are observed in a particular reaction rate and energy range. The major problem common to both codes lies in the cross-section library of 239Pu in the unresolved energy range. In the second reference level calculation, which is based on the ECCO 1968-group structure, both results for k-infinity agree with the reference value within 100 pcm. The resonance overlap effect amounts to several percent in cross sections of heavy nuclides. In the standard level calculation based on the APOLLO2 library creation methodology, a discrepancy of more than 300 pcm appears. A restriction is revealed in APOLLO2: its standard cross-section library does not have a sufficiently small background cross section to evaluate the self-shielding effect on 56Fe cross sections. The restriction can be removed by introducing the mixture self-shielding treatment recently added to APOLLO2. The SLAROM-UF original standard level calculation, based on the JFS-3 library creation methodology, is the best among the standard level calculations. Improvement over the SLAROM-UF standard level calculation is achieved mainly by use of a proper weight function for light or intermediate nuclides. (author)
Imachi, Hiroto
2015-01-01
Optimally hybrid numerical solvers were constructed for the massively parallel generalized eigenvalue problem (GEP). The strong scaling benchmark was carried out on the K computer and other supercomputers for electronic structure calculation problems with matrix sizes of M = 10^4-10^6 on up to 10^5 cores. The procedure of GEP is decomposed into the two subprocedures of the reducer to the standard eigenvalue problem (SEP) and the solver of the SEP. A hybrid solver is constructed when a routine is chosen for each subprocedure from the three parallel solver libraries ScaLAPACK, ELPA and EigenExa. The hybrid solvers with the two newer libraries, ELPA and EigenExa, give better benchmark results than the conventional ScaLAPACK library. The detailed analysis of the results implies that the reducer can be a bottleneck on next-generation (exa-scale) supercomputers, which provides guidance for future research. The code was developed as a middleware and a mini-application and will appear online.
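The reducer/solver decomposition of a GEP described above can be illustrated in serial form. The sketch below uses dense NumPy routines as stand-ins for the ScaLAPACK/ELPA/EigenExa kernels; the matrix size and contents are arbitrary assumptions.

```python
import numpy as np

def solve_gep(A, B):
    """Solve A v = lambda B v in two subprocedures, mirroring the paper's
    decomposition: (1) reduce to a standard eigenvalue problem via a
    Cholesky factorization B = L L^T, (2) solve the SEP, then back-transform."""
    L = np.linalg.cholesky(B)          # reducer: B = L L^T
    Linv = np.linalg.inv(L)
    C = Linv @ A @ Linv.T              # standard problem C y = lambda y
    w, Y = np.linalg.eigh(C)           # SEP solver
    V = Linv.T @ Y                     # back-transform eigenvectors
    return w, V

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M + M.T                            # symmetric matrix
B = M @ M.T + 50 * np.eye(50)          # symmetric positive definite matrix
w, V = solve_gep(A, B)
res = np.linalg.norm(A @ V[:, 0] - w[0] * (B @ V[:, 0]))  # GEP residual, lowest mode
```

Because the reduction and the SEP solve are separable like this, each stage can be assigned to a different parallel library, which is precisely what makes hybrid solvers possible.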
Comparisons of the MCNP criticality benchmark suite with ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0
International Nuclear Information System (INIS)
A comparative study has been performed with the latest evaluated nuclear data libraries ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0. The study has been conducted through the benchmark calculations for 91 criticality problems with the libraries processed for MCNP4C. The calculation results have been compared with those of the ENDF60 library. The self-shielding effects of the unresolved-resonance (UR) probability tables have also been estimated for each library. The χ2 differences between the MCNP results and experimental data were calculated for the libraries. (author)
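The χ2 figure of merit mentioned above is, in its simplest form, a sigma-weighted sum of squared calculation-experiment differences. A sketch with invented numbers, not actual MCNP4C results:

```python
def chi_square(calc, expt, sigma):
    """Sigma-weighted squared differences between calculation and experiment."""
    return sum(((c - e) / s) ** 2 for c, e, s in zip(calc, expt, sigma))

C = [1.0005, 0.9987, 1.0021]   # calculated keff values (illustrative)
E = [1.0000, 1.0000, 1.0000]   # benchmark-model keff
S = [0.0010, 0.0010, 0.0030]   # benchmark uncertainties (1 sigma, illustrative)
x2 = chi_square(C, E, S)       # lower means better overall agreement
```

Ranking libraries by this statistic over the full 91-problem suite is what allows an overall comparison rather than a case-by-case one.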
International Nuclear Information System (INIS)
Highlights: • We analyze the performance of neutron scattering libraries for D and O in D2O for nuclear criticality calculations. • We calculated 65 ICSBEP benchmark cases from 8 heavy-water-moderated thermal systems using MCNP5. • A significant improvement is found when our library is combined with the ROSFOND-2010 evaluation for deuterium. • In 48 of the 65 benchmark cases we obtained a C/E ratio closer to 1.0. • The percentage of benchmark cases calculated within 1 sigma increases from 42% to 82%, compared to ENDF/B-VII calculations. - Abstract: To improve the evaluations in thermal sublibraries, we developed a set of thermal neutron scattering cross sections (scattering kernels) for deuterium and oxygen bound in heavy water in the ENDF-6 format. These new libraries are based on molecular dynamics simulations and recent experimental data, and result in an improvement in the calculation of single neutron scattering quantities. In this work, we show how the use of this new set of cross sections also improves the calculation of thermal critical systems moderated and/or reflected by heavy water. The use of the new thermal scattering library for heavy water, combined with the ROSFOND-2010 evaluation of the deuterium cross sections, results in an improvement of the C/E ratio in 48 out of 65 benchmark cases calculated with the Monte Carlo code MCNP5, in comparison with the existing library based on the ENDF/B-VII evaluation
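The 1-sigma coverage statistic quoted in the highlights can be reproduced with simple bookkeeping over C/E ratios. The values below are illustrative assumptions, not the actual 65-case MCNP5 results:

```python
def within_one_sigma(c_over_e, sigma):
    """Flag benchmark cases whose C/E ratio lies within the 1-sigma band."""
    return [abs(r - 1.0) <= s for r, s in zip(c_over_e, sigma)]

ratios = [0.9995, 1.0030, 1.0008, 0.9960]   # assumed C/E values
sigmas = [0.0010, 0.0020, 0.0010, 0.0050]   # combined 1-sigma uncertainties (assumed)
hits = within_one_sigma(ratios, sigmas)
coverage = sum(hits) / len(hits)            # fraction of cases inside 1 sigma
```

Applied to the real case set, this fraction is the "42% to 82%" improvement the authors report.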
An Analytical Benchmark for the Calculation of Current Distribution in Superconducting Cables
Bottura, L; Fabbri, M G
2002-01-01
The validation of numerical codes for the calculation of current distribution and AC loss in superconducting cables against experimental results is essential, but can be affected by approximations in the electromagnetic model or by uncertainty in the evaluation of the model parameters. A preliminary validation of the codes by means of a comparison with analytical results can therefore be very useful in order to distinguish among different error sources. We provide here a benchmark analytical solution for current distribution that applies to the case of a cable described by a distributed-parameter electrical circuit model. The analytical solution of the current distribution is valid for cables made of a generic number of strands, subject to well-defined symmetry and uniformity conditions in the electrical parameters. The closed-form solution for the general case is rather complex to implement, and in this paper we give the analytical solutions for different simplified situations. In particular we examine the ...
IAEA GT-MHR Benchmark Calculations Using the HELIOS/MASTER Two-Step Procedure
Energy Technology Data Exchange (ETDEWEB)
Lee, Kyung Hoon; Kim, Kang Seog; Cho, Jin Young; Song, Jae Seung; Noh, Jae Man; Lee, Chung Chan; Zee, Sung Quun
2007-05-15
A new two-step procedure based on the HELIOS/MASTER code system has been developed for prismatic VHTR physics analysis. This procedure employs the HELIOS code for the transport lattice calculation to generate few-group constants, and the MASTER code for the 3-dimensional core calculation to perform the reactor physics analysis. The double heterogeneity effect due to the random distribution of the particulate fuel could be dealt with by the recently developed reactivity-equivalent physical transformation (RPT) method. The strong spectral effects of the graphite-moderated reactor core could be handled both by optimizing the number of energy groups and group boundaries, and by employing a partial-core model instead of a single-block one to generate few-group cross sections. Burnable poisons in the inner reflector and the asymmetrically located large control rod can be treated by adopting the equivalence theory applied to multi-block models to generate surface-dependent discontinuity factors. Effective reflector cross sections were generated by using a simple mini-core model and equivalence theory. In this study the IAEA GT-MHR benchmark problems with plutonium fuel were analyzed by using the HELIOS/MASTER code package and the Monte Carlo code MCNP. The benchmark problems include pin, block and core models. The computational results of the HELIOS/MASTER code system were compared with those of MCNP and other participants. The results show that the two-step procedure using HELIOS/MASTER can be applied to reactor physics analysis for the prismatic VHTR with good accuracy.
Concrete Spent Fuel Cask Criticality Calculation
International Nuclear Information System (INIS)
A preliminary analysis of the concrete cask for the intermediate dry storage of the spent fuel of NPP Krsko should include an estimation of the effective multiplication factor. Assuming 16x16 fuel elements, 4.3% initial enrichment, 45 GWd/tU burnup and 10 years cooling time, a concrete cask capacity of 10 spent fuel assemblies is proposed. Fuel assemblies are placed inside the inner cavity in a 'basket', a boron-doped (1%) steel structure. Heavy concrete (25% Fe), 45 cm thick, is enclosed in a carbon steel shell. There is also a stainless steel (SS304) lining of the storage cavity. The isotope inventory of the spent fuel after a 10-year cooling time is calculated using the ORIGEN-S functional module of the SCALE-4.2 code package. The effective multiplication factor keff of the dry (helium-filled) and wet (water-filled) cask, for both fresh and spent fuel, is calculated using CSAS4, the Monte Carlo based control module of the same SCALE-4.2 code package. The obtained values of keff for the dry cask with fresh and spent fuel are well below the required 0.95 limit, but those for the water-filled cask are above it. Therefore, several additional keff calculations were done, varying the thickness of a boral basket structure that replaced the stainless steel one. It turned out that at least a 1.5 cm thick boral wall is needed to meet the required 0.95 limit for keff. (author)
International Nuclear Information System (INIS)
The criticality analysis of the TRIGA-II benchmark experiment at the Musashi Institute of Technology Research Reactor (MuITR, 100 kW) was performed with the three-dimensional continuous-energy Monte Carlo code MCNP4A. To minimize errors due to an inexact geometry model, all fresh fuels and control rods as well as the vicinity of the core were precisely modeled. Effective multiplication factors (keff) in the initial core critical experiment and in the excess reactivity adjustment for several fuel-loading patterns, as well as the fuel element reactivity worth distributions, were used to validate the physical model and the neutron cross section data from the ENDF/B-V evaluation. The calculated keff overestimated the experimental data by about 1.0%Δk/k for both the initial core and several fuel-loading arrangements (fuel or graphite elements added only to the outer ring), but the discrepancy increased to 1.8%Δk/k for some fuel-loading patterns (graphite elements inserted into the inner ring). The comparison of the fuel element worth distributions showed the same tendency. All in all, the agreement between the MCNP predictions and the experimentally determined values is good, which indicates that the Monte Carlo model is adequate to simulate criticality of the TRIGA-II reactor. (author)
Study on the conservative factors for burnup credit criticality calculation
International Nuclear Information System (INIS)
When applying burnup credit technology to criticality safety analysis of spent fuel storage or transportation problems, it is important to confirm that all the conditions adopted are adequate to cover the severest conditions that may be encountered in engineering applications. Taking the OECD/NEA burnup credit criticality benchmarks as sample problems, we study the effect of some important factors that may affect the conservatism of the results of spent fuel system criticality safety analysis. Effects caused by different nuclide credit strategies, different cooling times and the axial burnup profile are studied using the STARBUCS module of the SCALE5.1 software package, and conclusions about the conservatism of these factors are drawn. (authors)
Benchmark calculations on residue production within the EURISOL DS project. Part 1: thin targets
International Nuclear Information System (INIS)
We have begun this benchmark study using mass distribution data of reaction products obtained at GSI in inverse kinematics. This step has allowed us to make a first selection among 10 spallation models; in this way a first assessment of the quality of the models was obtained. In a second part, experimental mass distributions for some elements, which are either interesting as radioactive ion beams or important for safety and radioprotection issues (alpha or gamma emitters), will also be compared to model calculations. These data have been obtained for an equivalent 0.8 or 1.0 GeV proton beam, which is approximately the proposed projectile energy. We note that in realistic thick targets the proton beam will be slowed down and some secondary particles will be produced. Therefore, residual nuclide production at lower energies is also important. For this reason, in the third part of this work we also performed some excitation function calculations, compared with data obtained by gamma spectroscopy, to test the models over a wide projectile energy range. We conclude that INCL4/Abla and Isabel/Abla are the model combinations we recommend. We also note that the agreement between models and data is better with 1 GeV protons than with 100-200 MeV protons.
Verification of HELIOS-MASTER system through benchmark of critical experiments
Energy Technology Data Exchange (ETDEWEB)
Kim, Ha Yong; Kim, Kyo Yun; Cho, Byung Oh; Lee, Chung Chan; Zee, Sung Quun
1999-03-01
The HELIOS-MASTER code system is verified through a benchmark of critical experiments performed by the RRC Kurchatov Institute with water-moderated, hexagonally pitched lattices of highly enriched (80 w/o) uranium fuel rods. We also ran the MCNP code with the same input described in the evaluation report and compared our results with those of the report. HELIOS, developed by Scandpower A/S, is a two-dimensional transport program for the generation of group cross sections, and MASTER, developed by KAERI, is a three-dimensional nuclear design and analysis code based on two-group diffusion theory. It solves the neutronics model with the AFEN (Analytic Function Expansion Nodal) method for hexagonal geometry. The results show that the HELIOS-MASTER code system is fast and accurate enough to be used as a nuclear core analysis tool for hexagonal geometry. (author). 4 refs., 4 tabs., 10 figs.
Verification of HELIOS-MASTER system through benchmark of critical experiments
International Nuclear Information System (INIS)
The HELIOS-MASTER code system is verified through a benchmark of critical experiments performed by the RRC Kurchatov Institute with water-moderated, hexagonally pitched lattices of highly enriched (80 w/o) uranium fuel rods. We also ran the MCNP code with the same input described in the evaluation report and compared our results with those of the report. HELIOS, developed by Scandpower A/S, is a two-dimensional transport program for the generation of group cross sections, and MASTER, developed by KAERI, is a three-dimensional nuclear design and analysis code based on two-group diffusion theory. It solves the neutronics model with the AFEN (Analytic Function Expansion Nodal) method for hexagonal geometry. The results show that the HELIOS-MASTER code system is fast and accurate enough to be used as a nuclear core analysis tool for hexagonal geometry. (author). 4 refs., 4 tabs., 10 figs.
Diffusion benchmark calculations of a VVER-440 core with 180 deg symmetry
International Nuclear Information System (INIS)
A diffusion benchmark of the VVER-440 core with 180 deg symmetry and fixed cross sections is proposed. The new benchmark is the modification of Seidel's 3-dimensional 30 degree benchmark, which plays an important role in the verification and validation of nodal neutronic codes. In the new benchmark the 180 deg symmetry is assured by a stuck eccentric control assembly. The recommended reference solution is derived from diverse solutions of the DIF3D finite difference code. The results of the HEXAN module of the KARATE code system are also presented. (author)
Camps, Peter; Bianchi, Simone; Lunttila, Tuomas; Pinte, Christophe; Natale, Giovanni; Juvela, Mika; Fischera, Joerg; Fitzgerald, Michael P; Gordon, Karl; Baes, Maarten; Steinacker, Juergen
2015-01-01
We define an appropriate problem for benchmarking dust emissivity calculations in the context of radiative transfer (RT) simulations, specifically including the emission from stochastically heated dust grains. Our aim is to provide a self-contained guide for implementors of such functionality, and to offer insights into the effects of the various approximations and heuristics implemented by the participating codes to accelerate the calculations. The benchmark problem definition includes the optical and calorimetric material properties, and the grain size distributions, for a typical astronomical dust mixture with silicate, graphite and PAH components; a series of analytically defined radiation fields to which the dust population is to be exposed; and instructions for the desired output. We process this problem using six RT codes participating in this benchmark effort, and compare the results to a reference solution computed with the publicly available dust emission code DustEM. The participating codes implement...
Quantum computing applied to calculations of molecular energies: CH2 benchmark.
Veis, Libor; Pittner, Jiří
2010-11-21
Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that they, if available, would be able to perform the full configuration interaction (FCI) energy calculations with a polynomial scaling. This is in contrast to conventional computers where FCI scales exponentially. We have developed a code for simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest lying electronic states of the CH2 molecule. This molecule was chosen as a benchmark, since its two lowest lying ¹A₁ states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.
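The iterative phase estimation algorithm (IPEA) referred to above can be sketched classically. The following is a minimal, idealized simulation (noise-free, assuming the eigenphase has an exact n-bit binary expansion; it is not the authors' code): bits of the phase are read out from least to most significant, with a feedback phase compensating the bits already found.

```python
def ipea(phi, n):
    """Idealized iterative phase estimation: recover the n-bit binary
    expansion of an eigenphase phi = 0.b1 b2 ... bn (bits returned as
    [b1, ..., bn]). At step k the ancilla picks up phase 2^(k-1)*phi from
    controlled-U^(2^(k-1)); the feedback phase omega removes the
    contribution of the already-measured lower-order bits, leaving 0.bk,
    so the ideal measurement outcome is deterministic."""
    bits = [0] * n
    omega = 0.0                # feedback phase 0.0 b_{k+1} ... b_n
    for k in range(n, 0, -1):  # least significant bit first
        residual = ((2 ** (k - 1)) * phi - omega) % 1.0  # equals bk/2 exactly
        bits[k - 1] = 1 if residual >= 0.25 else 0
        omega = (bits[k - 1] / 2 + omega) / 2
    return bits
```

For example, phi = 27/64 = 0.011011 in binary yields the bit list [0, 1, 1, 0, 1, 1].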
Energy Technology Data Exchange (ETDEWEB)
Oigawa, Hiroyuki; Iijima, Susumu; Sakurai, Takeshi; Okajima, Shigeaki; Andoh, Masaki; Nemoto, Tatsuo; Kato, Yuichi; Osugi, Toshitaka [Dept. of Nuclear Energy System, Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan)
2000-02-01
In order to assess the validity of the cross section library for fast reactor physics, a set of benchmark calculations is proposed. The benchmark calculations are based upon mock-up experiments at three FCA cores with various compositions of the central test regions: two were mock-ups of metallic-fueled LMFBRs, and the other was a mock-up of a mixed-oxide-fueled LMFBR. One of the metallic cores included enriched uranium in the test region, while the others did not. Physics parameters to be calculated are criticality, reaction rate ratios, plutonium and B4C sample worths, sodium void reactivity worth, and the Doppler reactivity worth of 238U. Homogenized atomic number densities and various correction factors are given so that anyone can easily perform a diffusion calculation in a two-dimensional RZ model and compare the results with the experiments. The validity of the correction factors is demonstrated by changing the calculation method and the nuclear data file used. (author)
Technique for technological calculation of critical flow of boiling water
International Nuclear Information System (INIS)
Average values of the friction factor and Mach number for a critical flow of boiling water are determined on the basis of computer processing of experimental data. An empirical formula relating these values is derived; it can be used for technological calculations of critical conditions of boiling-water flow through transport pipelines.
Diffusion benchmark calculations of a WWER-440 core with 180 deg symmetry
International Nuclear Information System (INIS)
A diffusion benchmark of the VVER-440 core with 180 degree symmetry and fixed cross sections is proposed. The new benchmark is a modification of Seidel's 3-dimensional 30 degree benchmark, which plays an important role in the verification and validation of nodal neutronic codes. In the new benchmark the 180 degree symmetry is assured by a stuck eccentric control assembly. The recommended reference solution is derived from diverse solutions of the DIF3D finite difference code. The results of the HEXAN module of the KARATE code system are also presented. (Authors)
International Nuclear Information System (INIS)
To justify the use of a commercial Computational Fluid Dynamics (CFD) code for CANDU fuel channel analysis, especially for radiation-heat-transfer-dominant conditions, the CFX-10 code is tested against three benchmark problems which were used for the validation of radiation heat transfer in the CANDU analysis code CATHENA. These three benchmark problems are representative of CANDU fuel channel configurations, from a simple geometry to a whole fuel channel geometry. For the solutions of the benchmark problems, the temperature or the net radiation heat flux boundary conditions are prescribed for each radiating surface to determine the radiation heat transfer rate or the surface temperature, respectively, by using the network method. The Discrete Transfer Model (DTM) is used for the CFX-10 radiation model and its results are compared with the solutions of the benchmark problems. The CFX-10 results for the three benchmark problems are in close agreement with those solutions, so it is concluded that CFX-10 with a DTM radiation model can be applied to CANDU fuel channel analysis where surface radiation heat transfer is the dominant mode of heat transfer. (author)
Additional nuclear criticality safety calculations for small-diameter containers
International Nuclear Information System (INIS)
This report documents additional criticality safety analysis calculations for small-diameter containers, which were originally documented in Reference 1. The results in Reference 1 indicated that some of the small-diameter containers did not meet the criteria established for criticality safety at the Portsmouth facility (keff + 2σ < 0.95) when modeled under various contingency assumptions of reflection and moderation. The calculations performed in this report reexamine those cases which did not meet the criticality safety criteria. In some cases unnecessary conservatism is removed, and in other cases mass or assay limits are established for use with the respective containers.
The calculational VVER burnup Credit Benchmark No.3 results with the ENDF/B-VI rev.5 (1999)
Energy Technology Data Exchange (ETDEWEB)
Rodriguez Gual, Maritza [Centro de Tecnologia Nuclear, La Habana (Cuba). E-mail: mrgual@ctn.isctn.edu.cu
2000-07-01
The purpose of this paper is to present the results of the CB3 phase of the VVER calculational benchmark obtained with the recently released evaluated nuclear data library ENDF/B-VI Rev. 5 (1999). These results are compared with those obtained by the other participants in the calculations (Czech Republic, Finland, Hungary, Slovakia, Spain and the United Kingdom). The CB3 phase of the VVER calculational benchmark is similar to Phase II-A of the OECD/NEA/INSC BUC Working Group benchmark for PWR. The cases without a burnup profile (BP) were performed with the WIMS/D-4 code. The rest of the cases were carried out with the DOT-III discrete ordinates code. The neutron library used was ENDF/B-VI Rev. 5 (1999). WIMS/D-4 (69 groups) is used to collapse cross sections from ENDF/B-VI Rev. 5 (1999) to a 36-group working library for the 2-D calculations. This work also comprises the results of CB1 (also obtained with ENDF/B-VI Rev. 5 (1999)) and of CB3 for the cases with a burnup of 30 MWd/tU and cooling times of 1 and 5 years, and for the case with a burnup of 40 MWd/tU and a cooling time of 1 year. (author)
Criticality safety calculations of the Soreq research reactor storage pool
Energy Technology Data Exchange (ETDEWEB)
Caner, M.; Hirshfeld, H.; Nagler, A.; Silverman, I.; Bettan, M. [Soreq Nuclear Research Center, Yavne 81800 (Israel); Levine, S.H. [Penn State University, University Park 16802 (United States)
2001-07-01
The IRR-1 spent fuel is to be relocated to a storage pool. The present paper describes the actual facility and summarizes the Monte Carlo criticality safety calculations. The fuel elements are to be placed inside cadmium boxes to reduce their reactivity. The fuel element is 7.6 cm by 8.0 cm in the horizontal plane. The cadmium box is effectively 9.7 cm by 9.7 cm, providing a significant amount of water between the cadmium and the fuel element. The present calculations show that the spent fuel storage pool is criticality safe even for fresh fuel elements. (author)
International Nuclear Information System (INIS)
In this article, theoretical results obtained with the new version of the SCALE5 code are compared with experiments or with other theoretical calculations for: 1. criticality: measurements on ZR-6 and LR-0; numerical benchmarks No. 1, 3 and 4 (CB1, CB3, CB4); 2. nuclide compositions: measurements at the Kurchatov Institute for 3.6% enrichment; measurements at JAERI (PWR 17x17); numerical benchmark No. 2 (CB2); 3. sources and decay heat: numerical benchmark No. 2-Source (CB2-S). The focus is on the modules KENO, TRITON and ORIGEN-S (Authors)
International Nuclear Information System (INIS)
The quantum Monte Carlo (QMC) technique is used to generate accurate energy benchmarks for methane-water clusters containing a single methane monomer and up to 20 water monomers. The benchmarks for each type of cluster are computed for a set of geometries drawn from molecular dynamics simulations. The accuracy of QMC is expected to be comparable with that of coupled-cluster calculations, and this is confirmed by comparisons for the CH4-H2O dimer. The benchmarks are used to assess the accuracy of the second-order Møller-Plesset (MP2) approximation close to the complete basis-set limit. A recently developed embedded many-body technique is shown to give an efficient procedure for computing basis-set converged MP2 energies for the large clusters. It is found that MP2 values for the methane binding energies and the cohesive energies of the water clusters without methane are in close agreement with the QMC benchmarks, but the agreement is aided by partial cancelation between 2-body and beyond-2-body errors of MP2. The embedding approach allows MP2 to be applied without loss of accuracy to the methane hydrate crystal, and it is shown that the resulting methane binding energy and the cohesive energy of the water lattice agree almost exactly with recently reported QMC values
Energy Technology Data Exchange (ETDEWEB)
Gillan, M. J., E-mail: m.gillan@ucl.ac.uk [London Centre for Nanotechnology, University College London, Gordon St., London WC1H 0AH (United Kingdom); Department of Physics and Astronomy, University College London, Gower St., London WC1E 6BT (United Kingdom); Thomas Young Centre, University College London, Gordon St., London WC1H 0AH (United Kingdom); Alfè, D. [London Centre for Nanotechnology, University College London, Gordon St., London WC1H 0AH (United Kingdom); Department of Physics and Astronomy, University College London, Gower St., London WC1E 6BT (United Kingdom); Thomas Young Centre, University College London, Gordon St., London WC1H 0AH (United Kingdom); Department of Earth Sciences, University College London, Gower St., London WC1E 6BT (United Kingdom); Manby, F. R. [Centre for Computational Chemistry, School of Chemistry, University of Bristol, Bristol BS8 1TS (United Kingdom)
2015-09-14
The quantum Monte Carlo (QMC) technique is used to generate accurate energy benchmarks for methane-water clusters containing a single methane monomer and up to 20 water monomers. The benchmarks for each type of cluster are computed for a set of geometries drawn from molecular dynamics simulations. The accuracy of QMC is expected to be comparable with that of coupled-cluster calculations, and this is confirmed by comparisons for the CH4-H2O dimer. The benchmarks are used to assess the accuracy of the second-order Møller-Plesset (MP2) approximation close to the complete basis-set limit. A recently developed embedded many-body technique is shown to give an efficient procedure for computing basis-set converged MP2 energies for the large clusters. It is found that MP2 values for the methane binding energies and the cohesive energies of the water clusters without methane are in close agreement with the QMC benchmarks, but the agreement is aided by partial cancelation between 2-body and beyond-2-body errors of MP2. The embedding approach allows MP2 to be applied without loss of accuracy to the methane hydrate crystal, and it is shown that the resulting methane binding energy and the cohesive energy of the water lattice agree almost exactly with recently reported QMC values.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
International Nuclear Information System (INIS)
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
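The fission matrix underlying the Fission Matrix Acceleration Method described above lends itself to a compact illustration: k-eff is the dominant eigenvalue of the fission matrix and the converged fission source is its eigenvector. A minimal power iteration sketch follows (the matrix used in the usage example is invented for illustration, not taken from the thesis):

```python
def power_iteration(F, tol=1e-12, max_iter=100000):
    """Power iteration on a fission matrix F, where F[i][j] is the expected
    number of next-generation fission neutrons born in region i per fission
    neutron born in region j. Returns (k_eff, source): the dominant
    eigenvalue and the normalized stationary fission source."""
    n = len(F)
    s = [1.0 / n] * n                       # flat initial source, sums to 1
    k = 0.0
    for _ in range(max_iter):
        nxt = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k_new = sum(nxt)                    # valid since s sums to 1
        s = [x / k_new for x in nxt]        # renormalize the source
        if abs(k_new - k) < tol:
            break
        k = k_new
    return k_new, s
```

For a 2x2 toy matrix such as [[0.4, 0.1], [0.2, 0.9]], the iteration converges to the analytic dominant eigenvalue (1.3 + √0.33)/2 ≈ 0.937, i.e. a subcritical system.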
TRIGA criticality experiment for testing burn-up calculations
Energy Technology Data Exchange (ETDEWEB)
Persic, Andreja; Ravnik, Matjaz; Zagar, Tomaz [Jozef Stefan Institute, Reactor Physics Division, Ljubljana (Slovenia)
1999-07-01
A criticality experiment with partly burned TRIGA fuel is described. 20 wt % enriched standard TRIGA fuel elements initially containing 12 wt % U are used. Their average burn-up is 1.4 MWd. Fuel element burn-up is calculated in 2-D four group diffusion approximation using TRIGLAV code. The burn-up of several fuel elements is also measured by reactivity method. The excess reactivity of several critical and subcritical core configurations is measured. Two core configurations contain the same fuel elements in the same arrangement as were used in the fresh TRIGA fuel criticality experiment performed in 1991. The results of the experiment may be applied for testing the computer codes used for fuel burn-up calculations. (author)
TRIGA FUEL PHASE I AND II CRITICALITY CALCULATION
Energy Technology Data Exchange (ETDEWEB)
L. Angers
1999-11-23
The purpose of this calculation is to characterize the criticality aspect of the codisposal of TRIGA (Training, Research, Isotopes, General Atomic) reactor spent nuclear fuel (SNF) with Savannah River Site (SRS) high-level waste (HLW). The TRIGA SNF is loaded into a Department of Energy (DOE) standardized SNF canister which is centrally positioned inside a five-canister defense SRS HLW waste package (WP). The objective of the calculation is to investigate the criticality issues for the WP containing the five SRS HLW and DOE SNF canisters in various stages of degradation. This calculation will support the analysis that will be performed to demonstrate the viability of the codisposal concept for the Monitored Geologic Repository (MGR).
Energy Technology Data Exchange (ETDEWEB)
Joo, Hyung Kook; Noh, Jae Man; Lee, Hyung Chul; Yoo, Jae Woon
2006-01-15
In this report, we verified the NUREC code transient calculation capability using the OECD NEA/US NRC PWR MOX/UO2 Core Transient Benchmark Problem. The benchmark consists of Part 1, a 2-D problem with given T/H conditions; Part 2, a 3-D problem at HFP conditions; Part 3, a 3-D problem at HZP conditions; and Part 4, a transient initiated by a control rod ejection at the HZP condition of Part 3. In Part 1, the results of the NUREC code agreed well with the reference solution obtained from a DeCART calculation, except for the pin power distributions in the rodded assemblies. In Part 2, the results of the NUREC code agreed well with the reference DeCART solutions. In Part 3, some results of the NUREC code, such as the critical boron concentration and the core-averaged delayed neutron fraction, agreed well with the reference PARCS 2G solutions, but the error of the assembly power at the core center was quite large. The pin power errors of the NUREC code in the rodded assemblies were much smaller than those of the PARCS code. The axial power distribution also agreed well with the reference solution. In Part 4, the results of the NUREC code agreed well with those of the PARCS 2G code, which was taken as the reference solution. From the above results we conclude that the results of the NUREC code for steady states and transients of a MOX-loaded LWR core agree well with those of the other codes.
Criticality calculations with MCNP™: A primer
Energy Technology Data Exchange (ETDEWEB)
Mendius, P.W. [ed.]; Harmon, C.D. II; Busch, R.D.; Briesmeister, J.F.; Forster, R.A.
1994-08-01
The purpose of this Primer is to assist the nuclear criticality safety analyst to perform computer calculations using the Monte Carlo code MCNP. Because of the closure of many experimental facilities, reliance on computer simulation is increasing. Often the analyst has little experience with specific codes available at his/her facility. This Primer helps the analyst understand and use the MCNP Monte Carlo code for nuclear criticality analyses. It assumes no knowledge of or particular experience with Monte Carlo codes in general or with MCNP in particular. The document begins with a Quickstart chapter that introduces the basic concepts of using MCNP. The following chapters expand on those ideas, presenting a range of problems from simple cylinders to 3-dimensional lattices for calculating keff confidence intervals. Input files and results for all problems are included. The Primer can be used alone, but its best use is in conjunction with the MCNP4A manual. After completing the Primer, a criticality analyst should be capable of performing and understanding a majority of the calculations that will arise in the field of nuclear criticality safety.
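A keff confidence interval of the kind the Primer walks through can be formed from independent active-cycle estimates as mean ± t·s/√n. This is a generic sketch of the statistics involved (MCNP itself combines three keff estimators, which this sketch does not attempt; the Student-t value for the desired confidence level is supplied by the user):

```python
from math import sqrt
from statistics import mean, stdev

def keff_ci(cycle_keffs, t):
    """Confidence interval for k-eff from independent active-cycle
    estimates: sample mean plus/minus t * (sample std. dev.)/sqrt(n),
    with t the Student-t quantile for n-1 degrees of freedom."""
    n = len(cycle_keffs)
    m = mean(cycle_keffs)
    se = stdev(cycle_keffs) / sqrt(n)   # standard error of the mean
    return m, (m - t * se, m + t * se)
```

With cycle estimates [0.99, 1.00, 1.01, 1.00] and t = 1.96, this returns a mean of 1.0 and a symmetric interval around it.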
Calculation of Critical Values for Somerville's FDR Procedures
Directory of Open Access Journals (Sweden)
Paul N. Somerville
2007-04-01
A Fortran 95 program has been written to calculate critical values for the step-up and step-down FDR procedures developed by Somerville (2004). The program allows for arbitrary selection of the number of hypotheses, FDR rate, one- or two-sided hypotheses, common correlation coefficient of the test statistics, and degrees of freedom. An MCV (minimum critical value) may be specified, or the program will calculate a specified number of critical values or steps in an FDR procedure. The program can also be used to efficiently ascertain an upper bound on the number of hypotheses which the procedure will reject, given either the values of the test statistics or their p values. Limiting the number of steps in an FDR procedure can be used to control the number or proportion of false discoveries (Somerville and Hemmelmann 2007). Using the program to calculate the largest critical values makes possible efficient use of the FDR procedures for very large numbers of hypotheses.
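The step-up logic behind such FDR procedures can be illustrated with the simplest (independence-case) Benjamini-Hochberg critical values c_i = i·q/m applied to p values. Note this is only a sketch of the step-up rule: Somerville's program computes critical values of the test statistics themselves, under correlation and for finite degrees of freedom, which is not attempted here.

```python
def bh_step_up(pvalues, q):
    """Step-up FDR procedure with Benjamini-Hochberg critical values
    c_i = i*q/m (independence case). Sort the p values, find the largest
    rank i with p_(i) <= i*q/m, and reject all hypotheses up to that rank.
    Returns the (sorted) indices of the rejected hypotheses."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    n_reject = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank * q / m:
            n_reject = rank              # step-up: keep the largest such rank
    return sorted(order[:n_reject])
```

For example, with p values [0.01, 0.02, 0.03, 0.9] and q = 0.1, the first three hypotheses are rejected.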
International Nuclear Information System (INIS)
The neutron generation time Λ plays an important role in reactor kinetics. However, its calculation is neither straightforward nor standard in most continuous-energy Monte Carlo codes, which are able to calculate the prompt neutron lifetime lp directly. The difference between Λ and lp is sometimes very apparent. As very few delayed neutrons are produced in the reactor, they have little influence on Λ. Thus, on the assumption that no delayed neutrons are produced in the system, prompt kinetics equations for a critical system and for a subcritical system with an external source are proposed. The equations are then applied to calculating Λ with the pulsed neutron technique using Monte Carlo. Only one fission neutron source is simulated with Monte Carlo in the critical system, while two neutron sources, a fission source and an external source, are simulated for the subcritical system. Calculations are performed on both critical benchmarks and a subcritical system with an external source, and the results are consistent with the reference values. (author)
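The pulsed-neutron route to Λ can be illustrated with point kinetics: after a pulse, the prompt population decays as exp(αt) with α = (ρ − β_eff)/Λ, so a fitted decay slope yields α and hence Λ. The sketch below uses invented, noise-free illustrative values (the numbers for ρ, β_eff and Λ are assumptions, not taken from the paper):

```python
from math import exp, log

def fit_decay_constant(times, counts):
    """Least-squares slope of log(counts) versus time: recovers the prompt
    decay constant alpha from an exponential decay n(t) = n0*exp(alpha*t)."""
    n = len(times)
    tm = sum(times) / n
    ym = sum(log(c) for c in counts) / n
    num = sum((t - tm) * (log(c) - ym) for t, c in zip(times, counts))
    den = sum((t - tm) ** 2 for t in times)
    return num / den

# Illustrative parameters (assumed, not from the paper): a slightly
# subcritical system with rho = -0.005, beta_eff = 0.0065, Lambda = 40 us.
rho, beta_eff, Lam = -0.005, 0.0065, 4.0e-5
alpha = (rho - beta_eff) / Lam                 # prompt decay constant, 1/s
times = [i * 1.0e-4 for i in range(10)]        # detector time bins, s
counts = [1.0e6 * exp(alpha * t) for t in times]  # ideal prompt decay curve
alpha_fit = fit_decay_constant(times, counts)
Lambda_fit = (rho - beta_eff) / alpha_fit      # invert to recover Lambda
```

With noise-free data the fit reproduces α and hence Λ essentially exactly; real pulsed-neutron data would add counting statistics and delayed-neutron background.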
Recent R and D around the Monte-Carlo code Tripoli-4 for criticality calculation
Energy Technology Data Exchange (ETDEWEB)
Hugot, F.X.; Lee, Y.K.; Malvagi, F. [CEA - DEN/DANS/DM2S/SERMA/LTSD, Saclay (France)
2008-07-01
TRIPOLI-4 [1] is the fourth generation of the TRIPOLI family of Monte Carlo codes developed by CEA since the 1960s. It simulates the 3D transport of neutrons, photons, electrons and positrons, as well as coupled neutron-photon propagation and electron-photon cascade showers. The code addresses radiation protection and shielding problems, as well as criticality and reactor physics problems, through both critical and subcritical neutronics calculations. It uses full pointwise as well as multigroup cross sections. The code has been validated through several hundred benchmarks as well as measurement campaigns. It is used as a reference tool by CEA and its industrial and institutional partners, and in the NURESIM [2] European project. Section 2 reviews its main features, with emphasis on the latest developments. Section 3 presents some recent R and D for criticality calculations: computations of the fission matrix, eigenvalues and eigenvectors are described, and corrections to the standard deviation estimator in the case of correlations between generation steps are detailed. Section 4 presents some preliminary results obtained with the new mesh tally feature. The last section presents the interest of using XML-format output files. (authors)
Benchmark calculation of p-3H and n-3He scattering
Viviani, M; Lazauskas, R; Fonseca, A C; Kievsky, A; Marcucci, L E
2016-01-01
p-3H and n-3He scattering in the energy range above the n-3He threshold but below the d-d threshold is studied by solving the 4-nucleon problem with a realistic nucleon-nucleon interaction. Three different methods -- Alt-Grassberger-Sandhas, Hyperspherical Harmonics, and Faddeev-Yakubovsky -- have been employed, and their results for both elastic and charge-exchange processes are compared. We observe good agreement between the three different methods; thus the obtained results may serve as a benchmark. A comparison with the available experimental data is also reported and discussed.
Energy Technology Data Exchange (ETDEWEB)
Chiang, Min-Han; Wang, Jui-Yu [Institute of Nuclear Engineering and Science, National Tsing Hua University, 101, Section 2, Kung-Fu Road, Hsinchu 30013, Taiwan (China); Sheu, Rong-Jiun, E-mail: rjsheu@mx.nthu.edu.tw [Institute of Nuclear Engineering and Science, National Tsing Hua University, 101, Section 2, Kung-Fu Road, Hsinchu 30013, Taiwan (China); Department of Engineering System and Science, National Tsing Hua University, 101, Section 2, Kung-Fu Road, Hsinchu 30013, Taiwan (China); Liu, Yen-Wan Hsueh [Institute of Nuclear Engineering and Science, National Tsing Hua University, 101, Section 2, Kung-Fu Road, Hsinchu 30013, Taiwan (China); Department of Engineering System and Science, National Tsing Hua University, 101, Section 2, Kung-Fu Road, Hsinchu 30013, Taiwan (China)
2014-05-01
The High Temperature Engineering Test Reactor (HTTR) in Japan is a helium-cooled graphite-moderated reactor designed and operated for the future development of high-temperature gas-cooled reactors. Two detailed full-core models of HTTR have been established using SCALE6 and MCNP5/X, respectively, to study its neutronic properties. Several benchmark problems were repeated first to validate the calculation models. Careful code-to-code comparisons were made to ensure that the two calculation models are both correct and equivalent. Compared with experimental data, the two models show a consistent bias of approximately 20–30 mk overestimation in effective multiplication factor for a wide range of core states. Most of the bias could be related to the ENDF/B-VII.0 cross-section library or to incomplete modeling of impurities in graphite. After that, a series of systematic analyses was performed to investigate the effects of cross sections on the HTTR criticality and burnup calculations, with special interest in the comparison between continuous-energy and multigroup results. Multigroup calculations in this study were carried out in a 238-group structure and adopted the SCALE double-heterogeneity treatment for resonance self-shielding. The results show that multigroup calculations tend to underestimate the system eigenvalue by a constant amount of ∼5 mk compared with their continuous-energy counterparts. Further sensitivity studies suggest that the differences between multigroup and continuous-energy results appear to be temperature independent and also insensitive to burnup effects.
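Eigenvalue biases quoted in mk (1 mk = 10⁻³ Δk/k) are reactivity differences rather than raw k-eff differences; a small helper makes the conversion explicit. The eigenvalues in the example are hypothetical, not HTTR results.

```python
# Converting eigenvalue differences into reactivity differences in mk
# (1 mk = 1e-3 dk/k). The example eigenvalues are invented, not HTTR data.

def reactivity_mk(k):
    """Reactivity rho = (k - 1)/k, expressed in mk."""
    return (k - 1.0) / k * 1000.0

def bias_mk(k_calc, k_ref):
    """Eigenvalue bias between two calculations as a reactivity difference."""
    return reactivity_mk(k_calc) - reactivity_mk(k_ref)

# Hypothetical multigroup vs continuous-energy eigenvalues
print(bias_mk(1.130, 1.135))  # ~ -3.9 mk, i.e. a multigroup underestimation
```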
Energy Technology Data Exchange (ETDEWEB)
Gil, Soo Lee; Nam, Zin Cho [Korea Advanced Institute of Science and Technology, Dept. of Nuclear and Quantum Engineering, Yusong-gu, Daejeon (Korea, Republic of)
2005-07-01
The OECD 3-dimensional benchmark problem C5G7 MOX was calculated with the CRX code. For 3-dimensional heterogeneous calculation, the CRX code uses a fusion technique of 2-dimensional/1-dimensional methods: the method of characteristics for the radial 2-dimensional calculation and the diamond difference scheme (DD), an S{sub N}-like method, for the axial 1-dimensional calculation. We improve the fusion method by using a linear characteristics (LC) solver in the 1-dimensional calculation. Here, we briefly describe the structure of the 2-dimensional/1-dimensional fusion method and present the results for the 3 configurations of the benchmark problem, along with results for several different 1-dimensional calculation options. Numerical results show that the LC scheme performs better than DD. In the results of the benchmark problem, k(eff) errors are less than 0.05% and the averages of pin-power errors are less than 1% for all calculations.
Criticality calculations for a graphite-moderated critical assembly using 20% enriched uranium
International Nuclear Information System (INIS)
The construction of a Zero Power Reactor (ZPR) at the Instituto de Energia Atomica, in order to measure the neutron characteristics (parameters) of HTGR reactors, is proposed. The quantity of fissile uranium necessary for these measurements has been calculated. Criticality studies of graphite-moderated critical assemblies containing thorium have been made, and the critical mass of each of several typical commercial HTGR compositions has been calculated using the computer codes HAMMER and CITATION. The assemblies investigated contained a central cylindrical core region simulating a typical commercial HTGR composition, a uranium-graphite driver region, and an outer pure-graphite reflector region. It is concluded that a 10 kg inventory of fissile uranium will be required for a program of measurements utilizing each of the several calculated assemblies
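The kind of parametric criticality survey performed with HAMMER/CITATION can be caricatured by a one-group bare-cylinder model, k_eff = k∞/(1 + M²B²_g), with B²_g the geometric buckling. The material constants below are illustrative placeholders, not HTGR data.

```python
import math

# Toy one-group criticality estimate for a bare finite cylinder -- a crude
# stand-in for the code-based survey described above. All numbers are
# illustrative, not taken from the report.

def keff_bare_cylinder(k_inf, M2, R, H):
    """k_eff = k_inf / (1 + M^2 * B_g^2); extrapolation lengths ignored."""
    Bg2 = (2.405 / R) ** 2 + (math.pi / H) ** 2   # geometric buckling (1/cm^2)
    return k_inf / (1.0 + M2 * Bg2)

# Hypothetical graphite-moderated core: k_inf = 1.30, migration area 300 cm^2
for R in (60.0, 80.0, 100.0):   # core radius (cm), height fixed at 2R
    print(R, round(keff_bare_cylinder(1.30, 300.0, R, 2 * R), 3))
```

As the core grows, leakage drops and k_eff rises through unity, which is the qualitative behavior such criticality surveys map out.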
International Nuclear Information System (INIS)
Highlights: • Benchmark study performed for the neutronic calculations of TRIGA research reactors. • WIMSD-5B/CITATION is the utilized code system along with the WIMSD-IAEA-69 library. • The studied condensed spectra are five and seven energy groups spectra. • Analyzed: lattice parameters, reactivities, CR worth, flux and power distribution. • The lattice and neutronic parameters showed the accuracy of both condensed spectra. - Abstract: The objective of this paper is to assess the suitability and accuracy of the deterministic diffusion method for the neutronic calculations of TRIGA Mark-III research reactors using the WIMSD/CITATION code system in proposed condensed energy spectra of five and seven energy groups, with one and three thermal groups respectively. The utilized cell transport code and core diffusion code are WIMSD-5B and CITVAP v3.1 respectively, along with the WIMSD-IAEA-69 nuclear data library. Firstly, the assessment analyzes the integral parameters – keff, ρ238, δ235, δ238, and C* – of the TRX and BAPL benchmark lattices and compares them with experimental and previous reference results using other ENDLs at the full energy spectra; the comparisons show good agreement with the references at both spectra. Secondly, the 3D nuclear characteristics of three different cores of the TRR-1/M1 TRIGA Mark-III Thai research reactor are evaluated at the condensed energy spectra. The results include the excess reactivities of the cores and the worth of selected control rods, which were compared with reference Monte Carlo results and experimental values. The results show good agreement with the references at both energy spectra, and better accuracy is attainable in the five energy groups spectrum. The results also include neutron flux distributions, which are evaluated for future comparisons with other calculational techniques, as they are applicable to reactors and fuels of the same type.
Criticality calculations with MCNP™: A primer
Energy Technology Data Exchange (ETDEWEB)
Harmon, C.D. II; Busch, R.D.; Briesmeister, J.F.; Forster, R.A. [New Mexico Univ., Albuquerque, NM (United States)
1994-06-06
With the closure of many experimental facilities, the nuclear criticality safety analyst increasingly is required to rely on computer calculations to identify safe limits for the handling and storage of fissile materials. However, in many cases, the analyst has little experience with the specific codes available at his/her facility. This primer will help you, the analyst, understand and use the MCNP Monte Carlo code for nuclear criticality safety analyses. It assumes that you have a college education in a technical field. There is no assumption of familiarity with Monte Carlo codes in general or with MCNP in particular. Appendix A gives an introduction to Monte Carlo techniques. The primer is designed to teach by example, with each example illustrating two or three features of MCNP that are useful in criticality analyses. Beginning with a Quickstart chapter, the primer gives an overview of the basic requirements for MCNP input and allows you to run a simple criticality problem with MCNP. This chapter is not designed to explain either the input or the MCNP options in detail; rather, it introduces basic concepts that are further explained in the following chapters. Each chapter begins with a list of basic objectives that identify the goal of the chapter, and a list of the individual MCNP features that are covered in detail in that chapter's example problems. It is expected that on completion of the primer you will be comfortable using MCNP in criticality calculations and will be capable of handling 80 to 90 percent of the situations that normally arise in a facility. The primer provides a set of basic input files that you can selectively modify to fit the particular problem at hand.
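The generation-based iteration that a criticality (KCODE-style) calculation performs can be illustrated by a zero-dimensional toy in which each history ends in fission with probability p and fission emits ν neutrons on average. This is a pedagogical analog only, not MCNP; all parameters are invented.

```python
import random

# Pedagogical analog of a generation-based k-eff estimate (NOT MCNP): in a
# 0-D toy model a history ends in fission with probability p_fission, and a
# fission emits nu neutrons on average, so analytically k = nu * p_fission.

def kcode_toy(p_fission=0.40, nu=2.43, n_per_gen=5000,
              n_skip=10, n_active=40, seed=1):
    rng = random.Random(seed)
    n = n_per_gen
    k_estimates = []
    for gen in range(n_skip + n_active):
        births = 0
        for _ in range(n):
            if rng.random() < p_fission:              # history ends in fission
                # sample an integer multiplicity with mean nu
                births += int(nu) + (rng.random() < nu - int(nu))
        if gen >= n_skip:                             # discard inactive cycles
            k_estimates.append(births / n)
        n = min(births, n_per_gen) or n_per_gen       # cap/renew the population
    return sum(k_estimates) / len(k_estimates)

print(round(kcode_toy(), 3))   # analytic answer: nu * p_fission = 0.972
```

The skip-then-accumulate structure mirrors the inactive/active cycle split used in real criticality codes.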
Feldman, U.; Landi, E.; Doschek, G. A.
2006-10-01
The accuracy of available spectral codes depends on the quality of the atomic data and transition rates they include, and can only be tested by benchmarking predicted line emissivities against observations from plasmas whose physical properties are known with precision. In the present work we describe a few high-resolution spectra emitted by solar flare plasmas under conditions of ionization equilibrium, and one quiet-Sun off-disk region spectrum, and we propose these datasets as benchmarks for the assessment of the accuracy of existing spectral codes in the 1.84-1.90 Å and 3.17-3.22 Å X-ray ranges and in the 500-1600 Å far ultraviolet range.
MUPO, Critical 43 Group Spectra Calculation for Homogeneous Reactor
International Nuclear Information System (INIS)
1 - Nature of physical problem solved: MUPO calculates the critical spectrum of a bare homogeneous reactor in 43 groups. This spectrum is used to evaluate condensed microscopic cross-sections. An option for this programme is to read in the library data from cards and write the binary library tape -DRAGON LIBRARY 3-. 2 - Method of solution: 3 options. Introduction of an additional absorber to account for a control poison, source iteration technique, and a buckling iteration. 3 - Restrictions on the complexity of the problem: 110 materials
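The use of the critical spectrum to evaluate condensed microscopic cross sections is a flux-weighted group collapse; a minimal sketch with invented 43-group data:

```python
import numpy as np

# Flux-weighted collapse of a fine-group cross-section set onto coarse
# groups -- the operation for which a 43-group critical spectrum is used.
# The fine-group data below are invented for illustration.

def collapse(sigma_fine, flux_fine, coarse_bounds):
    """sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g) over each coarse group G."""
    sigma_c = []
    for lo, hi in zip(coarse_bounds[:-1], coarse_bounds[1:]):
        phi = flux_fine[lo:hi]
        sigma_c.append(np.dot(sigma_fine[lo:hi], phi) / phi.sum())
    return np.array(sigma_c)

sigma = np.linspace(2.0, 10.0, 43)             # hypothetical 43-group sigma
flux = np.exp(-np.linspace(0.0, 3.0, 43))      # hypothetical critical spectrum
print(collapse(sigma, flux, [0, 10, 25, 43]))  # three condensed groups
```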
Tanaka, Ken-ichi
2016-06-01
We performed a benchmark calculation for the radioactivity activated in a Primary Containment Vessel (PCV) of a Boiling Water Reactor (BWR) by using the MAXS library, which was developed by collapsing cross sections with neutron energy spectra in the PCV of the BWR. Radioactivities due to neutron irradiation were measured using activation foil detectors of gold (Au) and nickel (Ni) at thirty locations in the PCV. As the benchmark calculation, we performed activation calculations of the foils with the SCALE5.1/ORIGEN-S code using the irradiation conditions of each foil location. We compared calculations and measurements to estimate the effectiveness of the MAXS library.
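The foil activities being compared follow the standard activation relation A = NσΦ(1 − e^{−λt}); a rough sketch with textbook gold-foil constants (the irradiation conditions are invented, not those of the PCV measurements):

```python
import math

# Illustrative activation-foil estimate (not the SCALE/ORIGEN-S calculation
# itself): for a foil under constant flux, the end-of-irradiation activity is
# A = N * sigma * phi * (1 - exp(-lambda * t_irr)).

def foil_activity(n_atoms, sigma_cm2, flux, half_life_s, t_irr_s):
    lam = math.log(2.0) / half_life_s          # decay constant (1/s)
    return n_atoms * sigma_cm2 * flux * (1.0 - math.exp(-lam * t_irr_s))

# Hypothetical gold foil: 1e19 atoms of Au-197, 98.7 b thermal capture cross
# section, flux 1e8 n/cm^2/s, Au-198 half-life 2.695 d, 7-day irradiation
A = foil_activity(1e19, 98.7e-24, 1e8, 2.695 * 86400, 7 * 86400)
print(f"{A:.3e} Bq")
```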
A definitive heat of vaporization of silicon through benchmark ab initio calculations on $SiF_{4}$
Martin, J M L; Martin, Jan M.L.; Taylor, Peter R.
1999-01-01
In order to resolve a significant uncertainty in the heat of vaporization of silicon -- a fundamental parameter in gas-phase thermochemistry -- $\Delta H^\circ_{f,0}$[Si(g)] has been determined from a thermochemical cycle involving the precisely known experimental heats of formation of SiF$_4$(g) and F(g) and a benchmark calculation of the total atomization energy (TAE$_0$) of SiF$_4$ using coupled-cluster methods. Basis sets up to $[8s7p6d4f2g1h]$ on Si and $[7s6p5d4f3g2h]$ on F have been employed, and extrapolations for residual basis set incompleteness applied. The contributions of inner-shell correlation (-0.08 kcal/mol), scalar relativistic effects (-1.88 kcal/mol), atomic spin-orbit splitting (-1.97 kcal/mol), and anharmonicity in the zero-point energy (+0.04 kcal/mol) have all been explicitly accounted for. Our benchmark TAE$_0$ = 565.89 kcal/mol ($\Delta H^\circ_{f,298}$[Si(g)] = 108.19 ± 0.38 kcal/mol) lies between the JANAF/CODATA value of 106.5 ± 1.9 kcal/mol and the revised value proposed by Grev and Schaefer...
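The thermochemical cycle can be rearranged as ΔH°f,0[Si(g)] = TAE₀(SiF₄) + ΔH°f,0[SiF₄(g)] − 4·ΔH°f,0[F(g)]. The sketch below uses the benchmark TAE₀ from the abstract together with rough literature values for the auxiliary heats; those auxiliary values are approximate insertions for illustration, not numbers from the paper.

```python
# Thermochemical-cycle arithmetic. TAE0(SiF4) is the benchmark value quoted
# in the abstract; the auxiliary heats of formation at 0 K are ROUGH
# literature values inserted here for illustration only.

TAE0_SiF4 = 565.89     # kcal/mol, benchmark total atomization energy (abstract)
dHf0_SiF4 = -384.9     # kcal/mol, approximate experimental value (not from paper)
dHf0_F = 18.5          # kcal/mol, approximate experimental value (not from paper)

# TAE0 = dHf0[Si(g)] + 4*dHf0[F(g)] - dHf0[SiF4(g)]  rearranged:
dHf0_Si = TAE0_SiF4 + dHf0_SiF4 - 4.0 * dHf0_F
print(f"dHf,0[Si(g)] ~ {dHf0_Si:.1f} kcal/mol")
```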
Energy Technology Data Exchange (ETDEWEB)
Li, M [Wayne State Univeristy, Detroit, MI (United States); Chetty, I [Henry Ford Health System, Detroit, MI (United States); Zhong, H [Henry Ford Hospital System, Detroit, MI (United States)
2014-06-01
Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a “demons” algorithm. The demons DVFs were corrected by an FEM model to obtain realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVF formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans, with 3 and 5 mm margins, were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to obtain the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to obtain the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. Their relative dose errors in PTV were between 0.28% and 6.8% for 3 mm margin plans, and between 0.29% and 6.3% for 5 mm margin plans. As the PTV margin was reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean NTCP error decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients.
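How a dose-accumulation error propagates into TCP can be illustrated with a generic Poisson/linear-quadratic TCP model; this is not the specific formalism of the study, and all parameter values are hypothetical.

```python
import math

# Minimal Poisson TCP sketch (a generic model, not this study's formalism):
# with N0 clonogens and linear-quadratic survival per fraction,
#   SF  = exp(-n * (alpha*d + beta*d^2))
#   TCP = exp(-N0 * SF)
# All parameters below are hypothetical.

def tcp_poisson(n0, alpha, beta, dose_per_fx, n_fx):
    sf = math.exp(-n_fx * (alpha * dose_per_fx + beta * dose_per_fx ** 2))
    return math.exp(-n0 * sf)

# Hypothetical hypofractionated schedule: 3 x 10 Gy
base = tcp_poisson(1e7, 0.30, 0.03, 10.0, 3)
# A 5% under-accumulation of dose shifts the computed TCP noticeably:
perturbed = tcp_poisson(1e7, 0.30, 0.03, 10.0 * 0.95, 3)
print(round(base, 3), round(perturbed, 3))
```

The steep dose-response around the prescription dose is exactly why small registration-induced dose errors can translate into sizable TCP errors.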
DEFF Research Database (Denmark)
Mitzel, Jens; Gülzow, Erich; Kabza, Alexander;
2016-01-01
This paper is focused on the identification of critical parameters and on the development of reliable methodologies to achieve comparable benchmark results. Possibilities for control sensor positioning and for parameter variation in sensitivity tests are discussed, and recommended options for the control strategy are summarized. This ensures result comparability as well as stable test conditions; e.g., the stack temperature fluctuation is minimized to about 1 °C. The experiments demonstrate that reactant pressures differ by up to 12 kPa if pressure control positions are varied, resulting in an average cell voltage deviation of 21 mV. Test parameters simulating different stack applications are summarized. The stack demonstrated a comparable average cell voltage of 0.63 V for stationary and portable conditions. For automotive conditions, the voltage increased to 0.69 V, mainly caused by higher...
Energy Technology Data Exchange (ETDEWEB)
Paratte, J.M. [Laboratory for Reactor Physics and Systems Behaviour (LRS), Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Frueh, R. [Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland); Kasemeyer, U. [Laboratory for Reactor Physics and Systems Behaviour (LRS), Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Kalugin, M.A. [Kurchatov Institute, 123182 Moscow (Russian Federation); Timm, W. [Framatome-ANP, D-91050 Erlangen (Germany); Chawla, R. [Laboratory for Reactor Physics and Systems Behaviour (LRS), Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland)
2006-05-15
Measurements in the CROCUS reactor at EPFL, Lausanne, are reported for the critical water level and the inverse reactor period for several different sets of delayed supercritical conditions. The experimental configurations were also calculated by four different calculation methods. For each of the supercritical configurations, the absolute reactivity value has been determined in two different ways, viz.: (i) through direct comparison of the multiplication factor obtained employing a given calculation method with the corresponding value for the critical case (calculated reactivity: ρ_calc); (ii) by application of the inhour equation using the kinetic parameters obtained for the critical configuration and the measured inverse reactor period (measured reactivity: ρ_meas). The calculated multiplication factors for the reference critical configuration, as well as ρ_calc for the supercritical cases, are found to be in good agreement. However, the values of ρ_meas produced by two of the applied calculation methods differ appreciably from the corresponding ρ_calc values, clearly indicating deficiencies in the kinetic parameters obtained from these methods.
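The inhour-equation step (ii) can be sketched in a generic six-delayed-group form, ρ(ω) = Λω + Σᵢ βᵢω/(ω + λᵢ); the kinetic parameters below are typical thermal-reactor values, not CROCUS data.

```python
# Generic six-group inhour relation converting a measured inverse reactor
# period omega into a "measured" reactivity. The kinetic parameters are
# typical thermal-reactor values inserted for illustration, not CROCUS data.

LAMBDA = 5.0e-5                                      # generation time (s)
betas = [0.000266, 0.001491, 0.001316, 0.002849,
         0.000896, 0.000182]                         # delayed-group fractions
lams = [0.0127, 0.0317, 0.115, 0.311, 1.40, 3.87]    # decay constants (1/s)

def inhour(omega):
    """rho(omega) = Lambda*omega + sum_i beta_i*omega/(omega + lambda_i)."""
    return LAMBDA * omega + sum(b * omega / (omega + l)
                                for b, l in zip(betas, lams))

# e.g. a 100 s stable reactor period
rho = inhour(1.0 / 100.0)
print(f"rho = {rho:.5f} dk/k")
```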
Parma, Edward J.; Ball, Russell M.; Hoovler, Gary S.; Selcow, Elizabeth C.; Cerbone, Ralph J.
1993-01-01
A reactor designed to perform criticality experiments in support of the Space Nuclear Thermal Propulsion program is currently in operation at the Sandia National Laboratories' reactor facility. The reactor is a small, water-moderated system that uses highly enriched uranium particle fuel in a 19-element configuration. Its purpose is to obtain neutronic measurements under a variety of experimental conditions that are subsequently used to benchmark reactor-design computer codes. Brookhaven National Laboratory, Babcock & Wilcox, and Sandia National Laboratories participated in determining the reactor's performance requirements, design, and follow-on experimentation, and in obtaining the licensing approvals. Brookhaven National Laboratory is primarily responsible for the analytical support, Babcock & Wilcox for the hardware design, and Sandia National Laboratories for the operational safety. All of the team members participate in determining the experimentation requirements, performance, and data reduction. Initial criticality was achieved in October 1989. An overall description of the reactor is presented along with key design features and safety-related aspects.
Energy Technology Data Exchange (ETDEWEB)
Epifanovsky, Evgeny [Department of Chemistry, University of Southern California, Los Angeles, California 90089-0482 (United States); Department of Chemistry, University of California, Berkeley, California 94720 (United States); Q-Chem Inc., 6601 Owens Drive, Suite 105, Pleasanton, California 94588 (United States); Klein, Kerstin; Gauss, Jürgen [Institut für Physikalische Chemie, Universität Mainz, D-55099 Mainz (Germany); Stopkowicz, Stella [Department of Chemistry, Centre for Theoretical and Computational Chemistry, University of Oslo, N-0315 Oslo (Norway); Krylov, Anna I. [Department of Chemistry, University of Southern California, Los Angeles, California 90089-0482 (United States)
2015-08-14
We present a formalism and an implementation for calculating spin-orbit couplings (SOCs) within the EOM-CCSD (equation-of-motion coupled-cluster with single and double substitutions) approach. The following variants of EOM-CCSD are considered: EOM-CCSD for excitation energies (EOM-EE-CCSD), EOM-CCSD with spin-flip (EOM-SF-CCSD), EOM-CCSD for ionization potentials (EOM-IP-CCSD) and electron attachment (EOM-EA-CCSD). We employ a perturbative approach in which the SOCs are computed as matrix elements of the respective part of the Breit-Pauli Hamiltonian using zeroth-order non-relativistic wave functions. We follow the expectation-value approach rather than the response-theory formulation for property calculations. Both the full two-electron treatment and the mean-field approximation (a partial account of the two-electron contributions) have been implemented and benchmarked using several small molecules containing elements up to the fourth row of the periodic table. The benchmark results show the excellent performance of the perturbative treatment and the mean-field approximation. When used with an appropriate basis set, the errors with respect to experiment are below 5% for the considered examples. The findings regarding basis-set requirements are in agreement with previous studies. The impact of different correlation treatment in zeroth-order wave functions is analyzed. Overall, the EOM-IP-CCSD, EOM-EA-CCSD, EOM-EE-CCSD, and EOM-SF-CCSD wave functions yield SOCs that agree well with each other (and with the experimental values when available). Using an EOM-CCSD approach that provides a more balanced description of the target states yields more accurate results.
International Nuclear Information System (INIS)
In May 2010, JENDL-4.0 was released by the Japan Atomic Energy Agency as the updated Japanese Nuclear Data Library. It was processed by the nuclear data processing system LICEM, and an arbitrary-temperature neutron cross section library, MVPlib-nJ40, was produced for the neutron and photon transport code MVP, which is based on the continuous-energy Monte Carlo method. The library contains neutron cross sections for 406 nuclides on the free gas model, thermal scattering cross sections, and cross sections of pseudo fission products for burn-up calculations with MVP. Criticality benchmark calculations were carried out with MVP and MVPlib-nJ40 for about 1,000 cases of critical experiments stored in the handbook of the International Criticality Safety Benchmark Evaluation Project (ICSBEP), which covers a wide variety of fuel materials, fuel forms, and neutron spectra. We report all comparison results (C/E values) of effective neutron multiplication factors between calculations and experiments to provide validation data for the prediction accuracy of JENDL-4.0 for criticality. (author)
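Summarizing C/E values over a benchmark suite reduces to simple statistics; the values below are invented placeholders, not results from the report.

```python
import statistics

# Toy summary of C/E (calculated-over-experimental) k-eff values, as compiled
# in nuclear-data validation studies. The entries are invented placeholders.

c_over_e = [0.9986, 1.0012, 0.9995, 1.0031, 0.9978, 1.0004]

mean = statistics.fmean(c_over_e)
std = statistics.stdev(c_over_e)
bias_pcm = (mean - 1.0) * 1e5          # mean k-eff bias in pcm
print(f"mean C/E = {mean:.5f} +/- {std:.5f}  ({bias_pcm:+.0f} pcm)")
```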
500-MeV electron beam benchmark experiments and calculations
International Nuclear Information System (INIS)
Experiments measuring the energy deposited by electron beams were performed to provide benchmarks against which to evaluate our HANDYL76 electron beam computer code. The experiments, done at Stanford's Mk III accelerator, measured dose-vs-depth and dose-vs-radius profiles induced in layered aluminum targets by 500-MeV electrons. The dose was measured by passive thermoluminescence and photographic film placed between aluminum plates. The calculations predict a dose-vs-radius profile that is forward-peaked on axis after the beam passes through a 200-cm air gap; the experimental measurements do not show this peak. This discrepancy indicates there may be a problem in using HANDYL76 to calculate deep penetration of a target with a large gap
Full CI benchmark calculations for several states of the same symmetry
Bauschlicher, Charles W., Jr.; Taylor, Peter R.
1987-01-01
Full CI (FCI) wave functions are used to compute energies for several electronic states of the same symmetry for SiH2, CH2, and CH2(+). It is found that CASSCF/multireference CI wave functions yield results very similar to FCI, irrespective of whether the CASSCF MOs are optimized independently for each state or using an average of the CASSCF energies for all desired states. The ionization potentials and excitation energies obtained from the FCI calculations should help calibrate methods (such as Green's function approaches, equations of motion and propagator methods, and cluster expansions) in which energy differences are computed directly.
DEFF Research Database (Denmark)
Cai, Xiao-Xiao; Llamas-Jansa, Isabel; Mullet, Steven;
2013-01-01
Geant4 is an open source general purpose simulation toolkit for particle transportation in matter. Since the extension of the thermal scattering model in Geant4.9.5 and the availability of the IAEA HP model cross section libraries, it is now possible to extend the application area of Geant4 to re...
Sihver, L.; Mancusi, D.; Niita, K.; Sato, T.; Townsend, L.; Farmer, C.; Pinsky, L.; Ferrari, A.; Cerutti, F.; Gomes, I.
Particles and heavy ions are used in various fields of nuclear physics, medical physics, and material science, and their interactions with different media, including human tissue and critical organs, have therefore been carefully investigated both experimentally and theoretically since the 1930s. However, heavy-ion transport involves many complex processes, and measurements for all possible systems, including critical organs, would be impractical or too expensive; e.g., direct measurements of dose equivalents to critical organs in humans cannot be performed. A reliable and accurate particle and heavy-ion transport code is therefore an essential tool in the design study of accelerator facilities as well as for various other applications. Recently, new applications have also arisen within transmutation and reactor science, space and medicine, especially radiotherapy, and several accelerator facilities are operating or planned for construction. Accurate knowledge of the physics of interaction of particles and heavy ions is also necessary for estimating radiation damage to equipment used on space vehicles, for calculating the transport of heavy ions in the galactic cosmic ray (GCR) flux through the interstellar medium, and for the evolution of the heavier elements after the Big Bang. Concerns about the biological effect of space radiation and space dosimetry are increasing rapidly due to the prospect of long-duration astronaut missions, both in relation to the International Space Station and to manned interplanetary missions in the near future. Radiation protection studies for crews of international flights at high altitude have also received considerable attention in recent years. There is therefore a need to develop accurate and reliable particle and heavy-ion transport codes. To be able to handle complex geometries, including the production and transport of protons, neutrons, and alpha particles, 3-dimensional transport using the Monte Carlo (MC) technique must be used. Today
Moraitis, K; Georgoulis, M K; Archontis, V
2014-01-01
In earlier works we introduced and tested a nonlinear force-free (NLFF) method designed to self-consistently calculate the free magnetic energy and the relative magnetic helicity budgets of the corona of observed solar magnetic structures. The method requires, in principle, only a single, photospheric or low-chromospheric, vector magnetogram of a quiet-Sun patch or an active region and performs calculations in the absence of three-dimensional magnetic and velocity-field information. In this work we strictly validate this method using three-dimensional coronal magnetic fields. Benchmarking employs both synthetic, three-dimensional magnetohydrodynamic simulations and nonlinear force-free field extrapolations of the active-region solar corona. We find that our time-efficient NLFF method provides budgets that differ from those of more demanding semi-analytical methods by a factor of ~3, at most. This difference is expected from the physical concept and the construction of the method. Temporal correlations show mo...
A proposal for a new U-D2O criticality benchmark: RB reactor core 39/1978
Directory of Open Access Journals (Sweden)
Pešić Milan P.
2012-01-01
In 1958, the experimental RB reactor was designed as a heavy water critical assembly with natural uranium metal rods. It was the first nuclear fission critical facility at the Boris Kidrič Institute (now the Vinča Institute of Nuclear Sciences) in the former Yugoslavia. The first non-reflected, unshielded core was assembled in an aluminium tank, at a distance of around 4 m from all adjacent surfaces, so as to achieve as low as possible neutron back reflection to the core. The 2% enriched uranium metal and 80% enriched uranium dioxide (dispersed in aluminium) fuel elements (known as slugs) were obtained from the USSR in 1960 and 1976, respectively. The so-called “clean” cores of the RB reactor were assembled from a single type of fuel element. The “mixed” cores of the RB reactor, assembled from two or three types of different fuel elements, were also positioned in heavy water. Both types of cores can be composed as square lattices with different pitches, covering a range of 7 cm to 24 cm. A radial heavy water reflector of various thicknesses usually surrounds the cores. Up to 2006, four sets of clean cores (44 core configurations) have been accepted as criticality benchmarks and included in the OECD ICSBEP Handbook. The RB mixed core 39/1978 was made of 31 natural uranium metal rods positioned in heavy water, in a lattice with a pitch of 8√2 cm and 78
International Nuclear Information System (INIS)
In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the previous ten years. Atucha-II is a 745 MWe nuclear station, moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. For the reactor physics area, a revision and update of reactor physics calculation methods and models was recently carried out, covering cell, supercell (control rod) and core calculations. This paper presents benchmark comparisons of core parameters of a slightly idealized model of the Atucha-I core obtained with the PUMA reactor code against MCNP5. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, more symmetric than Atucha-II, and has some experimental data available. To validate the new models, benchmark comparisons of k-effective, channel power and axial power distributions obtained with PUMA and MCNP5 have been performed. In addition, a simple cell-heterogeneity correction recently introduced in PUMA is presented, which significantly improves the agreement of calculated channel powers with MCNP5. To complete the validation, the calculation of some of the critical configurations of the Atucha-I reactor measured during the experiments performed at first criticality is also presented. (authors)
Energy Technology Data Exchange (ETDEWEB)
Bessette, Gregory Carl
2004-09-01
Modeling the response of buried reinforced concrete structures subjected to close-in detonations of conventional high explosives poses a challenge for a number of reasons. Foremost, there is the potential for coupled interaction between the blast and structure. Coupling enters the problem whenever the structure deformation affects the stress state in the neighboring soil, which in turn, affects the loading on the structure. Additional challenges for numerical modeling include handling disparate degrees of material deformation encountered in the structure and surrounding soil, modeling the structure details (e.g., modeling the concrete with embedded reinforcement, jointed connections, etc.), providing adequate mesh resolution, and characterizing the soil response under blast loading. There are numerous numerical approaches for modeling this class of problem (e.g., coupled finite element/smooth particle hydrodynamics, arbitrary Lagrange-Eulerian methods, etc.). The focus of this work will be the use of a coupled Euler-Lagrange (CEL) solution approach. In particular, the development and application of a CEL capability within the Zapotec code is described. Zapotec links two production codes, CTH and Pronto3D. CTH, an Eulerian shock physics code, performs the Eulerian portion of the calculation, while Pronto3D, an explicit finite element code, performs the Lagrangian portion. The two codes are run concurrently with the appropriate portions of a problem solved on their respective computational domains. Zapotec handles the coupling between the two domains. The application of the CEL methodology within Zapotec for modeling coupled blast/structure interaction will be investigated by a series of benchmark calculations. These benchmarks rely on data from the Conventional Weapons Effects Backfill (CONWEB) test series. In these tests, a 15.4-lb pipe-encased C-4 charge was detonated in soil at a 5-foot standoff from a buried test structure. The test structure was composed of a
International Nuclear Information System (INIS)
Monte Carlo N-Particle Transport Code System (MCNP) criticality calculations were performed on a library of critical benchmark experiments to obtain preliminary bias values and subcritical margins to be utilized in licensing calculations for high-level radioactive waste disposal. The critical experiments library includes a broad range of system physical and neutronic characteristics that are representative of a range of potential criticality configurations relevant to long-term deep geological disposal. Two hundred and eighty-nine critical benchmark experiments were selected and grouped into 20 critical experiment classifications. From the results of this study, an applicable subcritical margin or maximum allowable keff can be selected for preliminary repository criticality analysis based on the similarity between the physical and neutronic characteristics of the system being analyzed and the relevant library classification. The results of this study provide quantification of both the confidence associated with the MCNP code and the presented conservative method for performing criticality evaluations relevant to repository emplacement of high-level radioactive waste
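The bias-and-margin logic described above can be sketched as follows. The margin and keff values here are hypothetical placeholders, not the study's data, and real validation methodology also folds benchmark and computational uncertainties into the limit; this is only the skeleton of the idea.

```python
from statistics import mean

def max_allowable_keff(calc_keff, exp_keff, admin_margin=0.05):
    """Illustrative maximum-allowable-keff calculation: the bias is the
    average (calculated - experimental) keff over a benchmark class;
    only a negative (non-conservative) bias lowers the limit, and an
    administrative subcritical margin is subtracted on top."""
    bias = mean(c - e for c, e in zip(calc_keff, exp_keff))
    penalty = min(bias, 0.0)  # never take credit for a positive bias
    return 1.0 + penalty - admin_margin

# Hypothetical results for one critical-experiment classification
limit = max_allowable_keff([0.998, 1.002, 0.995], [1.000, 1.000, 1.000])
```

A system in this classification would then be judged acceptable only if its calculated keff (plus uncertainties) stays below `limit`.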
Five radionuclide vadose zone models with different degrees of complexity (CHAIN, MULTIMED_DP, FECTUZ, HYDRUS, and CHAIN 2D) were selected for use in soil screening level (SSL) calculations. A benchmarking analysis between the models was conducted for a radionuclide (99Tc) rele...
International Nuclear Information System (INIS)
The purpose of this work is to validate MCNP5 libraries by simulating 4 detailed benchmark experiments and comparing the MCNP5 results (for each library) with the experimental results and with previously validated codes for the same experiments: MORET 4.A coupled with APOLLO2 (France), and MONK8 (UK). The reasons for the differences between libraries are also investigated, by specifying a different library for a specific part (clad, fuel, light water) and checking the deviation of the result from the previously calculated one (with all parts using the same library). The investigated benchmark experiments are single fuel-rod arrays that are water-moderated and water-reflected. Rods containing low-enriched (4.738 wt.% 235U) uranium dioxide (UO2) fuel were clad with aluminum alloy AGS. These experiments were subcritical approaches extrapolated to critical, with the multiplication factor reached being very close to 1.000 (within 0.1%); the subcritical approach parameter was the water level. The four cases studied differ from each other in pitch, number of fuel rods and, of course, critical height of water. The results show that although library ENDF/B-IV lacks a light-water treatment card, its results can still be considered reliable, since the light-water treatment does not differ significantly from one library to another, so it is not necessary to specify a light-water treatment card. The main reason for the differences between ENDF/B-V and ENDF/B-VI is the light-water material, especially the hydrogen element. Specifying the library for uranium is necessary when using library ENDF/B-IV. On the other hand, it is not necessary to specify the library for the cladding material, whatever the library used. The validated libraries are ENDF/B-IV, ENDF/B-V and ENDF/B-VI, identified in MCNP by the cross-section suffixes 42C, 50C and 60C, respectively. The presentation slides have been added to the article
Energy Technology Data Exchange (ETDEWEB)
Kozier, K. S.; Roubtsov, D. [AECL, Chalk River Laboratories, Chalk River, ON (Canada); Plompen, A. J. M.; Kopecky, S. [EC-JRC, Inst. for Reference Materials and Measurements, Retieseweg 111, 2440 Geel (Belgium)
2012-07-01
The thermal neutron elastic-scattering cross-section data for 16O used in various modern evaluated-nuclear-data libraries were reviewed and found to be generally too high compared with the best available experimental measurements. Some of the proposed revisions to the ENDF/B-VII.0 16O data library and recent results from the TENDL system increase this discrepancy further. The reactivity impact of revising the 16O data downward to be consistent with the best measurements was tested using the JENDL-3.3 16O cross-section values and was found to be very small in MCNP5 simulations of the UO2 and reactor-recycle MOX-fuel cases of the ANS Doppler-defect numerical benchmark. However, large reactivity differences of up to about 14 mk (1400 pcm) were observed using 16O data files from several evaluated-nuclear-data libraries in MCNP5 simulations of the Los Alamos National Laboratory HEU heavy-water solution thermal critical experiments, which were performed in the 1950s. The latter result suggests that new measurements using HEU in a heavy-water-moderated critical facility, such as the ZED-2 zero-power reactor at the Chalk River Laboratories, might help to resolve the discrepancy between the 16O thermal elastic-scattering cross-section values and thereby reduce or better define its uncertainty, although additional assessment work would be needed to confirm this. (authors)
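The reactivity units quoted above (mk and pcm) relate to keff through the static reactivity ρ = (k − 1)/k. A minimal conversion sketch, with illustrative keff values rather than the paper's results:

```python
def reactivity(k):
    """Static reactivity rho = (k - 1) / k (dimensionless)."""
    return (k - 1.0) / k

def delta_rho_mk(k1, k2):
    """Reactivity difference in mk; 1 mk = 0.001 dk/k = 100 pcm."""
    return (reactivity(k1) - reactivity(k2)) * 1000.0

# e.g. two keff results obtained with different 16O data files
d_mk = delta_rho_mk(1.0140, 1.0000)   # about 13.8 mk, i.e. ~1380 pcm
```

A difference of 14 mk thus corresponds to 1400 pcm, as stated in the abstract.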
Fang, Zongtang; Both, Johan; Li, Shenggang; Yue, Shuwen; Aprà, Edoardo; Keçeli, Murat; Wagner, Albert F; Dixon, David A
2016-08-01
The heats of formation and the normalized clustering energies (NCEs) for the group 4 and group 6 transition metal oxide (TMO) trimers and tetramers have been calculated by the Feller-Peterson-Dixon (FPD) method. The heats of formation predicted by the FPD method do not differ much from those previously derived from the NCEs at the CCSD(T)/aT level except for the CrO3 nanoclusters. New and improved heats of formation for Cr3O9 and Cr4O12 were obtained using PW91 orbitals instead of Hartree-Fock (HF) orbitals. Diffuse functions are necessary to predict accurate heats of formation. The fluoride affinities (FAs) are calculated with the CCSD(T) method. The relative energies (REs) of different isomers, NCEs, electron affinities (EAs), and FAs of (MO2)n (M = Ti, Zr, Hf, n = 1-4) and (MO3)n (M = Cr, Mo, W, n = 1-3) clusters have been benchmarked with 55 exchange-correlation density functional theory (DFT) functionals including both pure and hybrid types. The absolute errors of the DFT results are mostly less than ±10 kcal/mol for the NCEs and the EAs and less than ±15 kcal/mol for the FAs. Hybrid functionals usually perform better than the pure functionals for the REs and NCEs. The performance of the two types of functionals in predicting EAs and FAs is comparable. The B1B95 and PBE1PBE functionals provide reliable energetic properties for most isomers. Long range corrected pure functionals usually give poor FAs. The standard deviation of the absolute error is always close to the mean errors, and the probability distributions of the DFT errors are often not Gaussian (normal). The breadth of the distribution of errors and the maximum probability are dependent on the energy property and the isomer. PMID:27384926
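The error statistics discussed above (mean errors, standard deviation, fraction of absolute errors within ±10 kcal/mol) can be sketched as follows; the DFT and benchmark values used here are hypothetical, not the paper's data:

```python
import statistics

def summarize_errors(dft, ref):
    """Signed errors (DFT - benchmark) in kcal/mol and the summary
    statistics used to compare density functionals."""
    err = [d - r for d, r in zip(dft, ref)]
    abs_err = [abs(e) for e in err]
    return {
        "mean_signed": statistics.mean(err),
        "mean_abs": statistics.mean(abs_err),
        "stdev": statistics.stdev(err),
        "frac_within_10": sum(a <= 10.0 for a in abs_err) / len(err),
    }

# Hypothetical clustering energies for one isomer set (kcal/mol)
stats = summarize_errors([101.0, 95.0, 112.0, 99.0],
                         [100.0, 100.0, 100.0, 100.0])
```

Comparing the histogram of `err` against a fitted normal distribution is the natural next step for the non-Gaussianity observation made in the abstract.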
International Nuclear Information System (INIS)
MPI parallelism is implemented on a SUN workstation for running MCNPX and on the High Performance Computing Facility (HPC) for running MCNP5. 23 input files obtained from the MCNP Criticality Validation Suite are utilized for the purpose of evaluating the amount of speed-up achievable by using the parallel capabilities of MPI. More importantly, we study the economics of using more processors and the type of problem for which the performance gains are obvious. This is important to enable better practices of resource sharing, especially of the HPC facility's processing time. Future endeavours in this direction might even reveal clues for best MCNP5/MCNPX coding practices for optimum performance of MPI parallelism. (author)
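The "economics of using more processors" mentioned above is commonly framed with Amdahl's law. The sketch below assumes a hypothetical parallel fraction rather than measured MCNP timing data:

```python
def speedup(parallel_fraction, n_procs):
    """Amdahl's law: ideal speedup when a fraction of the work
    parallelizes perfectly and the rest stays serial."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_procs)

def efficiency(parallel_fraction, n_procs):
    """Speedup per processor -- the 'economics' of adding CPUs."""
    return speedup(parallel_fraction, n_procs) / n_procs

# A 95%-parallel job gains little beyond a few tens of processors:
s8, s64 = speedup(0.95, 8), speedup(0.95, 64)
```

Efficiency falls as processors are added, which is exactly the resource-sharing trade-off the abstract raises for a shared HPC facility.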
Swedish analysis of NEA/CSNI benchmark problems for criticality codes
International Nuclear Information System (INIS)
The Monte Carlo methods used by the members of the working group are adequate for calculations on large arrays. The differences in the results from different codes are probably caused by the differences in cross sections. The previous difficulties in obtaining good results for bare arrays of UNH-solution are, at least to some extent, explained by incomplete information. The neutron reflection from walls, ceiling and floor has earlier been neglected. The inclusion of these in the input to the Monte Carlo codes appears to lead to adequate results. The basis for the IAEA rules of calculating allowable numbers of fissile packages mixed with other packages (fissile or not) during transport, does not seem justified. This has been demonstrated for theoretical package designs. It has not been confirmed by the other members of the working group and no conclusion was drawn by the group. It is very likely that a mix of real packages can be found that supports the mentioned theoretical demonstration. (author)
Energy Technology Data Exchange (ETDEWEB)
J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori
2007-05-01
Since ICNC 2003, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) has continued to expand its efforts and broaden its scope. Criticality-alarm/shielding type benchmarks and fundamental physics measurements that are relevant to criticality safety applications are not only included in the scope of the project, but benchmark data are also included in the latest version of the handbook. A considerable number of improvements have been made to the searchable database, DICE, and the criticality-alarm/shielding benchmarks and fundamental physics measurements have been included in the database. There were 12 countries participating in the ICSBEP in 2003. That number has increased to 18 with recent contributions of data and/or resources from Brazil, the Czech Republic, Poland, India, Canada, and China. South Africa, Germany, Argentina, and Australia have been invited to participate. Since ICNC 2003, the contents of the “International Handbook of Evaluated Criticality Safety Benchmark Experiments” have increased from 350 evaluations (28,000 pages) containing benchmark specifications for 3070 critical or subcritical configurations to 442 evaluations (over 38,000 pages) containing benchmark specifications for 3957 critical or subcritical configurations, 23 criticality-alarm-placement/shielding configurations with multiple dose points for each, and 20 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications in the 2006 Edition of the ICSBEP Handbook. Approximately 30 new evaluations and 250 additional configurations are expected to be added to the 2007 Edition of the Handbook. Since ICNC 2003, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. Beginning in 1999, the IRPhEP was conducted as a pilot activity by the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy
International Nuclear Information System (INIS)
The purpose of this calculation note is to provide the basis for criticality consequences for the Tank Farm Safety Analysis Report (FSAR). A criticality scenario is developed, and details and a description of the analysis methods are provided
International Nuclear Information System (INIS)
In this paper, the results of the investigations of nodalization effects for the ATHLET code are presented and discussed in detail on the basis of experimental data for the VVER-1000 Coolant Transient Benchmark with different operating modes of the four main coolant pumps. ATHLET calculations with different nodalizations were performed and their impact analyzed. The work studied the influence of the annular-outlet nodalization on the calculated coolant temperature. Comparison of the test data with the ATHLET calculations shows good agreement between the experimental data and the simulation results for the analyzed parameters.
DEFF Research Database (Denmark)
Cismondi, Martin; Michelsen, Michael Locht
2007-01-01
A general strategy for global phase equilibrium calculations (GPEC) in binary mixtures is presented in this work along with specific methods for calculation of the different parts involved. A Newton procedure using composition, temperature and volume as independent variables is used for calculation of critical lines. Each calculated point is analysed for stability by means of the tangent plane distance, and the occurrence of an unstable point is used to determine a critical endpoint (CEP). The critical endpoint, in turn, is used as the starting point for constructing the three-phase line. The equations ... critical endpoints and three-phase lines for binary mixtures with phase diagrams of types from I to V without advance knowledge of the type of phase diagram. The procedure requires a thermodynamic model in the form of a pressure-explicit EOS but is not specific to a particular equation of state. (C) 2006
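The stability test mentioned above uses the tangent plane distance (TPD). A minimal sketch for the ideal-solution limit, where the fugacity coefficients cancel, is shown below; a real GPEC implementation would instead evaluate fugacity coefficients from the pressure-explicit EOS, and the compositions here are illustrative:

```python
import math

def tpd_ideal(w, z):
    """Tangent plane distance for an ideal solution:
        tpd(w) = sum_i w_i * (ln w_i - ln z_i).
    The feed z is stable iff tpd(w) >= 0 for every trial
    composition w; for ideal mixtures this always holds."""
    return sum(wi * (math.log(wi) - math.log(zi))
               for wi, zi in zip(w, z))

z = [0.4, 0.6]                       # feed composition
d_same = tpd_ideal(z, z)             # zero at the feed itself
d_trial = tpd_ideal([0.2, 0.8], z)   # positive: no phase split
```

A negative TPD at any trial composition would flag instability and, along a critical line, locate a critical endpoint as described in the abstract.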
Quantum mechanical cluster calculations of critical scintillation processes
Derenzo, Stephen E.; Klintenberg, Mattias K.; Weber, Marvin J.
2000-01-01
This paper describes the use of commercial quantum chemistry codes to simulate several critical scintillation processes. The crystal is modeled as a cluster of typically 50 atoms embedded in an array of typically 5,000 point charges designed to reproduce the electrostatic field of the infinite crystal. The Schrodinger equation is solved for the ground, ionized, and excited states of the system to determine the energy and electron wavefunction. Computational methods for the following cri...
Quantum mechanical cluster calculations of critical scintillation processes
International Nuclear Information System (INIS)
This paper describes the use of commercial quantum chemistry codes to simulate several critical scintillation processes. The crystal is modeled as a cluster of typically 50 atoms embedded in an array of typically 5,000 point charges designed to reproduce the electrostatic field of the infinite crystal. The Schrodinger equation is solved for the ground, ionized, and excited states of the system to determine the energy and electron wave function. Computational methods for the following critical processes are described: (1) the formation and diffusion of relaxed holes, (2) the formation of excitons, (3) the trapping of electrons and holes by activator atoms, (4) the excitation of activator atoms, and (5) thermal quenching. Examples include hole diffusion in CsI, the exciton in CsI, the excited state of CsI:Tl, the energy barrier for the diffusion of relaxed holes in CaF2 and PbF2, and prompt hole trapping by activator atoms in CaF2:Eu and CdS:Te leading to an ultra-fast (<50ps) scintillation rise time.
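The point-charge embedding idea can be illustrated by summing the electrostatic potential at a lattice site over a finite cube of alternating charges. Note this is a generic Madelung-type toy, not the paper's procedure: production embeddings fit the charge array so that the finite set reproduces the infinite-crystal field, which the plain truncation below only approximates slowly.

```python
import itertools
import math

def site_potential(n):
    """Electrostatic potential (units of q/a) at the origin of a
    rock-salt-like lattice, summed over all other sites in a cube of
    side 2n+1, with charge alternating as (-1)^(i+j+k)."""
    v = 0.0
    for i, j, k in itertools.product(range(-n, n + 1), repeat=3):
        if (i, j, k) == (0, 0, 0):
            continue  # exclude the site whose potential we probe
        v += (-1) ** ((i + j + k) % 2) / math.sqrt(i * i + j * j + k * k)
    return v

v5 = site_potential(5)  # slowly approaches -(Madelung constant) ~ -1.7476
```

The slow, oscillating convergence of this direct sum is precisely why fitted point-charge arrays are used in embedded-cluster calculations.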
International Nuclear Information System (INIS)
Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available, as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and the relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within-dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed, because of its appropriateness in describing the relationship between the mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using the hybrid method are more
Shao, Kan; Gift, Jeffrey S; Setzer, R Woodrow
2013-11-01
Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose-response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available, as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the "hybrid" method and the relative deviation approach, we first evaluate six representative continuous dose-response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within-dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed, because of its appropriateness in describing the relationship between the mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates.
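For a linear mean response, the relative-deviation benchmark response used above reduces to a closed form: the BMD is the dose at which the mean shifts by a chosen fraction (e.g. 10%) of the control mean. The sketch below uses hypothetical parameters; the study's actual models and the hybrid method are richer than this.

```python
def bmd_relative_deviation(intercept, slope, rel_change=0.1):
    """BMD for a linear mean response m(d) = intercept + slope * d,
    defined as the dose where |m(BMD) - m(0)| = rel_change * m(0)
    (the 'relative deviation' benchmark response)."""
    if slope == 0:
        raise ValueError("flat dose-response has no finite BMD")
    return abs(rel_change * intercept / slope)

# Hypothetical body-weight data: control mean 100 g, slope -0.5 g per
# unit dose; the BMD is the dose producing a 10 g (10%) change.
bmd = bmd_relative_deviation(100.0, -0.5)
```

The BMDL would then come from the lower confidence bound on the fitted parameters, which is where the distribution assumption discussed in the abstract enters.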
Critical evaluation of German regulatory specifications for calculating radiological exposure
Energy Technology Data Exchange (ETDEWEB)
Koenig, Claudia; Walther, Clemens [Hannover Univ. (Germany). Inst. of Radioecology; Smeddinck, Ulrich [Technische Univ. Braunschweig (Germany). Inst. of Law
2015-07-01
The assessment of radiological exposure of the public is an issue at the interface between scientific findings, juridical standard setting and political decision. The present work revisits the German regulatory specifications for calculating radiological exposure, like the already existing calculation model General Administrative Provision (AVV) for planning and monitoring nuclear facilities. We address the calculation models for the recent risk assessment regarding the final disposal of radioactive waste in Germany. To do so, a two-pronged approach is pursued. One part deals with radiological examinations of the groundwater-soil-transfer path of radionuclides into the biosphere. Processes at the so-called geosphere-biosphere-interface are examined, especially migration of I-129 in the unsaturated zone. This is necessary, since the German General Administrative Provision does not consider radionuclide transport via groundwater from an underground disposal facility yet. Especially data with regard to processes in the vadose zone are scarce. Therefore, using I-125 as a tracer, immobilization and mobilization of iodine is investigated in two reference soils from the German Federal Environment Agency. The second part of this study examines how scientific findings but also measures and activities of stakeholders and concerned parties influence juridical standard setting, which is necessary for risk management. Risk assessment, which is a scientific task, includes identification and investigation of relevant sources of radiation, possible pathways to humans, and maximum extent and duration of exposure based on dose-response functions. Risk characterization identifies probability and severity of health effects. These findings have to be communicated to authorities, who have to deal with the risk management. Risk management includes, for instance, taking into account acceptability of the risk, actions to reduce, mitigate, substitute or monitor the hazard, the setting of
Directory of Open Access Journals (Sweden)
Maria Avramova
2013-01-01
Over the last few years, the Pennsylvania State University (PSU), under the sponsorship of the US Nuclear Regulatory Commission (NRC), has prepared, organized, conducted, and summarized two international benchmarks based on the NUPEC data—the OECD/NRC Full-Size Fine-Mesh Bundle Test (BFBT) Benchmark and the OECD/NRC PWR Sub-Channel and Bundle Test (PSBT) Benchmark. The benchmarks’ activities have been conducted in cooperation with the Nuclear Energy Agency/Organization for Economic Co-operation and Development (NEA/OECD) and the Japan Nuclear Energy Safety (JNES) Organization. This paper presents an application of the joint Penn State University/Technical University of Madrid (UPM) version of the well-known sub-channel code COBRA-TF (Coolant Boiling in Rod Array—Two Fluid), namely CTF, to the steady-state critical power and departure from nucleate boiling (DNB) exercises of the OECD/NRC BFBT and PSBT benchmarks. The goal is two-fold: firstly, to assess these models and to examine their strengths and weaknesses; and secondly, to identify the areas for improvement.
MARS-GCR/CAPP Coupled Multi-Physics Calculation for the OECD/NEA PBMR-400 Benchmark Problem
Energy Technology Data Exchange (ETDEWEB)
Lee, Hyuh Chul; Lee, Seung Wook; Noh, Jae Man; Lee, Won Jae [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2008-05-15
The OECD/NEA PBMR-400 neutronics/thermal hydraulics coupled benchmark problem was proposed to test the existing analysis methods for high temperature gas-cooled reactors (HTGRs) and to develop more accurate and efficient tools to analyze the neutronics and thermal-hydraulics (TH) behavior for the design and safety evaluations of the PBMR. Three cases are defined for the steady state phase (Phase I) of the benchmark. The first case of the steady state phase (SS-1) is a neutronics stand-alone case with fixed cross-sections, while the second case of the steady state phase (SS-2) is a TH stand-alone case with a fixed heat source. The third case of the steady state phase (SS-3) is a TH/neutronics coupled case, which is the initial state of the TH/neutronics coupled cases defined in the transient state phase (Phase II). Six cases are defined for Phase II of the benchmark. They are depressurized loss of forced cooling (DLOFC) without SCRAM (TR-1), DLOFC with SCRAM (TR-2), pressurized loss of forced cooling (PLOFC) with SCRAM (TR-3), load follow (TR-4), reactivity insertion by control rod withdrawal (CRW) and control rod ejection (CRE) (TR-5), and cold helium inlet (TR-6). The final results for SS-1 and SS-2 have been reported, and the preliminary results for SS-3, TR-5a, TR-5b and TR-6 have also been reported in our previous work. In this paper, we present our final results for SS-3, TR-3, TR-5a, TR-5b, and TR-6 of the benchmark problem, and they are compared with those of other participants.
International Nuclear Information System (INIS)
It was determined that the criticality hazard associated with the Slagging Pyrolysis Incinerator (SPI) Facility would be minimal if a three-level criticality-hazard prevention program were implemented. The first strategy consists of screening all incoming wastes for fissile content. The second prevention level is provided by introducing a small concentration of a neutron-absorbing compound, such as B2O3, into the input waste stream. The third prevention level is provided by direct criticality-hazard monitoring using sensitive neutron detectors in all regions of the facility where a significant hazard has been identified - principally the drying, pyrolysis, and slag regions. The facility could be shut down rapidly for cleanout if the measurements indicate an unsafe condition is developing. The criticality safety provided by the product of these three independent measures should reduce the hazard to a negligible level
Multi-Loop Calculations of Anomalous Exponents in the Models of Critical Dynamics
Directory of Open Access Journals (Sweden)
Adzhemyan L. Ts.
2016-01-01
The renormalization group (RG) method is applied to the investigation of the E model of critical dynamics, which describes the transition from the normal to the superfluid phase in He-4. The "sector decomposition" technique with the R' operation is used for the calculation of the Feynman diagrams. The RG functions, critical exponents, and the critical dynamical exponent z, which determines the growth of the relaxation time near the critical point, have been calculated in the two-loop approximation in the framework of the ε-expansion. The relevance of a fixed point for helium, where dynamic scaling is weakly violated, is briefly discussed.
The Establishment, Calculation and Application of a Benchmark Housing Price System
Institute of Scientific and Technical Information of China (English)
李妍; 汪友结
2013-01-01
To improve the authority of benchmark housing prices in the housing market, we extend the concept of the benchmark housing price from the district level down to the individual unit ("one price per dwelling") and define its connotation precisely. On this basis, we establish a multi-level benchmark housing price system running from unit to building to community to district. We then build a practical benchmark price calculation model by combining a mass-appraisal (holistic valuation) model with a full-sample statistical model, and we introduce GIS technology to implement the price calculation and application platform. The work thus provides a theoretical framework and practical reference for other Chinese cities constructing benchmark housing price systems.
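The unit-to-building-to-community-to-district rollup described above can be sketched with a toy aggregation. The listing data are hypothetical, and a simple median stands in for the paper's mass-appraisal and full-sample statistical models:

```python
from statistics import median
from collections import defaultdict

# Hypothetical listing records: (district, community, building, unit price)
listings = [
    ("D1", "C1", "B1", 31000), ("D1", "C1", "B1", 32500),
    ("D1", "C1", "B2", 29800), ("D1", "C2", "B3", 27400),
    ("D1", "C2", "B3", 26900), ("D2", "C3", "B4", 18200),
]

def benchmark_prices(records):
    """Roll sampled unit prices up the unit -> building -> community ->
    district hierarchy, taking the median at each level (a simplified
    stand-in for the paper's appraisal models)."""
    buildings = defaultdict(list)
    for d, c, b, p in records:
        buildings[(d, c, b)].append(p)
    b_price = {k: median(v) for k, v in buildings.items()}

    communities = defaultdict(list)
    for (d, c, _b), p in b_price.items():
        communities[(d, c)].append(p)
    c_price = {k: median(v) for k, v in communities.items()}

    districts = defaultdict(list)
    for (d, _c), p in c_price.items():
        districts[d].append(p)
    d_price = {k: median(v) for k, v in districts.items()}
    return b_price, c_price, d_price

b, c, d = benchmark_prices(listings)
print(d)  # district-level benchmark prices
```

In a real system each median would be replaced by a calibrated appraisal model, but the hierarchical structure of the rollup stays the same.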
Energy Technology Data Exchange (ETDEWEB)
Mielke, Steven L.; Schwenke, David; Peterson, Kirk A.
2005-06-08
We present a detailed ab initio study of the effect that the Born-Oppenheimer diagonal correction (BODC) has on the saddle point properties of the H3 system and its isotopomers. Benchmark values are presented that are estimated to be within 0.1 cm⁻¹ of the complete configuration interaction limit. We consider the basis set and correlation treatment requirements for accurate BODC calculations, and both are observed to be more favorable than for the Born-Oppenheimer energies. The BODC raises the H + H2 barrier height by 0.1532 kcal/mol and slightly narrows the barrier, with the imaginary frequency increasing by ~2%.
Influence of Metal Shells around Fuel Assemblies on Criticality Calculations for a Fuel Storage Pool
Babichev, L.; Khmialeuski, A.
2012-01-01
Influence of metal shells and size of cells in a fuel storage pool on the value of the effective neutron multiplication factor was studied. Monte Carlo code MCU-FREE was used in the criticality calculations. A criticality analysis of spent fuel storage pools for different degrees of packing of cells in the rack was performed.
Energy Technology Data Exchange (ETDEWEB)
Uddin, M.N. [Department of Physics, Jahangirnagar University, Dhaka (Bangladesh); Sarker, M.M. [Reactor Physics and Engineering Division, Institute of Nuclear Science and Technology, Atomic Energy Research Establishment, Savar, GPO Box 3787, Dhaka 1000 (Bangladesh); Khan, M.J.H. [Reactor Physics and Engineering Division, Institute of Nuclear Science and Technology, Atomic Energy Research Establishment, Savar, GPO Box 3787, Dhaka 1000 (Bangladesh)], E-mail: jahirulkhan@yahoo.com; Islam, S.M.A. [Department of Physics, Jahangirnagar University, Dhaka (Bangladesh)
2009-10-15
The aim of this paper is to present the validation of evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through the analysis of the integral parameters of TRX and BAPL benchmark lattices of thermal reactors for neutronics analysis of TRIGA Mark-II Research Reactor at AERE, Bangladesh. In this process, the 69-group cross-section library for lattice code WIMS was generated using the basic evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 with the help of nuclear data processing code NJOY99.0. Integral measurements on the thermal reactor lattices TRX-1, TRX-2, BAPL-UO{sub 2}-1, BAPL-UO{sub 2}-2 and BAPL-UO{sub 2}-3 served as standard benchmarks for testing nuclear data files and have also been selected for this analysis. The integral parameters of the said lattices were calculated using the lattice transport code WIMSD-5B based on the generated 69-group cross-section library. The calculated integral parameters were compared to the measured values as well as the results of Monte Carlo Code MCNP. It was found that in most cases, the values of integral parameters show a good agreement with the experiment and MCNP results. Besides, the group constants in WIMS format for the isotopes U-235 and U-238 between two data files have been compared using WIMS library utility code WILLIE and it was found that the group constants are identical with very insignificant difference. Therefore, this analysis reflects the validation of evaluated nuclear data files CENDL-2.2 and JEFF-3.1.1 through benchmarking the integral parameters of TRX and BAPL lattices and can also be essential to implement further neutronic analysis of TRIGA Mark-II research reactor at AERE, Dhaka, Bangladesh.
Evaluation of approaches to calculate critical metal loads for forest soils
Vries, de W.; Groenenberg, J.E.
2009-01-01
This paper evaluates approaches to calculate acceptable loads for metal deposition to forest ecosystems, distinguishing between critical loads, stand-still loads and target loads. We also evaluated the influence of including the biochemical metal cycle on the calculated loads. Differences are illustrated…
International Nuclear Information System (INIS)
Highlights: ► To validate the SRAC2006 code system for TRIGA neutronics calculations. ► TRX and BAPL lattices are treated as standard benchmarks for this purpose. ► To compare the calculated results with experimental as well as MCNP values. ► The study demonstrates a good agreement with the experiment and the MCNP results. ► Thus, this analysis reflects the validation study of the SRAC2006 code system. - Abstract: The goal of this study is to present the validation of the SRAC2006 code system based on the evaluated nuclear data libraries ENDF/B-VII.0 and JENDL-3.3 for neutronics analysis of the TRIGA Mark-II Research Reactor at AERE, Bangladesh. This is achieved through the analysis of the integral parameters of the TRX and BAPL benchmark lattices of thermal reactors. In integral measurements, the thermal reactor lattices TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3 are treated as standard benchmarks for validating the SRAC2006 code system as well as the nuclear data libraries. The integral parameters of these lattices are calculated using the collision probability transport code PIJ of the SRAC2006 code system at a room temperature of 20 °C based on the above libraries. The calculated integral parameters are compared to the measured values as well as the MCNP values based on the Chinese evaluated nuclear data library CENDL-3.0. It was found that in most cases the integral parameters show good agreement with the experiment and the MCNP results. In addition, the group constants in SRAC format for the TRX and BAPL lattices in the fast and thermal energy ranges, respectively, are compared between the above libraries, and the group constants were found to be identical with only insignificant differences. Therefore, this analysis reflects the validation of the SRAC2006 code system based on the evaluated nuclear data libraries JENDL-3.3 and ENDF/B-VII.0 and can also serve as a basis for further neutronics calculations of the TRIGA Mark-II research reactor.
Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program
International Nuclear Information System (INIS)
Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently, a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the 235U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values but within 1% (~3σ), except for Cores 5 and 9, which are lower than the benchmark eigenvalues but within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments
Simple calculation of the critical mass for highly enriched uranium and plutonium-239
Chyba, Christopher F.; Milne, Caroline R.
2014-10-01
The correct calculation of, and values for, the critical mass of uranium or plutonium necessary for a nuclear fission weapon have long been understood and publicly available. The calculation requires solving the radial component in spherical coordinates of a diffusion equation with a source term, and so is beyond the reach of most public policy students and many first-year college physics students. Yet it is important for the basic physical ideas behind the calculation to be understood by those without calculus who are nonetheless interested in international security, arms control, or nuclear non-proliferation. This article estimates the critical mass in an intuitive way that requires only algebra.
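The kind of estimate the article describes can be reproduced with one-group diffusion theory. The constants below are illustrative textbook values, not taken from the article, and simple diffusion theory without a transport correction overestimates the true bare-sphere critical mass (about 50 kg for highly enriched uranium):

```python
import math

# Illustrative one-group constants for highly enriched uranium metal
# (assumed representative values, not data from the article).
rho = 18.7                    # density, g/cm^3
N = rho * 6.022e23 / 235.0    # atom density, atoms/cm^3
sigma_f = 1.235e-24           # fission cross section, cm^2
sigma_c = 0.089e-24           # capture cross section, cm^2
sigma_tr = 6.8e-24            # transport cross section, cm^2
nu = 2.6                      # neutrons released per fission

Sigma_f = N * sigma_f
Sigma_a = N * (sigma_f + sigma_c)
D = 1.0 / (3.0 * N * sigma_tr)          # diffusion coefficient, cm

# A bare sphere is critical when the geometric buckling (pi/R)^2
# equals the material buckling (nu*Sigma_f - Sigma_a) / D.
B2 = (nu * Sigma_f - Sigma_a) / D
R_c = math.pi / math.sqrt(B2)                         # critical radius, cm
M_c = rho * (4.0 / 3.0) * math.pi * R_c**3 / 1000.0   # critical mass, kg

print(f"critical radius ~ {R_c:.1f} cm, critical mass ~ {M_c:.0f} kg")
```

With these numbers the estimate comes out near 90 kg, illustrating both the method and why transport-corrected calculations (which give a smaller mass) are needed for accuracy.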
Directory of Open Access Journals (Sweden)
S. Mattedi
2000-12-01
A modified form of the Hicks and Young algorithm was used with the Mattedi-Tavares-Castier lattice equation of state (MTC lattice EOS) to calculate critical points of binary mixtures that exhibit several types of critical behavior. Several qualitative aspects of the critical curves, such as maxima and minima in critical pressure and minima in critical temperature, could be predicted using the MTC lattice EOS. These results were in agreement with experimental information available in the literature, illustrating the flexibility of the functional form of the MTC lattice EOS. We observed, however, that the MTC lattice EOS failed to predict maxima in pressure for two of the studied systems: ethane + ethanol and methane + n-hexane. We also observed that the agreement between the calculated and experimental critical properties was at most semi-quantitative in some examples. Despite these limitations, in many ways similar to those of other EOS in common use when applied to critical point calculations, we can conclude that the MTC lattice EOS is able to predict several types of critical curves of complex shape.
Mohammadi, A; Hassanzadeh, M; Gharib, M
2016-02-01
In this study, shielding calculations and a criticality safety analysis were carried out for a general material testing reactor (MTR) research reactor interim storage and the associated transportation cask. Three major tasks were considered in the process: source term, shielding, and criticality calculations. The Monte Carlo transport code MCNP5 was used for the shielding calculation and criticality safety analysis, and the ORIGEN2.1 code for the source term calculation. According to the results obtained, a cylindrical cask with body, top, and bottom thicknesses of 18, 13, and 13 cm, respectively, was accepted as the dual-purpose cask. Furthermore, it is shown that the total dose rates are below the normal transport criteria, meeting the specified standards. PMID:26720262
International Nuclear Information System (INIS)
All three exercises of the Organization for Economic Cooperation and Development/Nuclear Regulatory Commission pressurized water reactor main steam line break (PWR MSLB) benchmark were calculated at VTT, the Technical Research Centre of Finland. For the first exercise, the plant simulation with point-kinetic neutronics, the thermal-hydraulics code SMABRE was used. The second exercise was calculated with the three-dimensional reactor dynamics code TRAB-3D, and the third exercise with the combination TRAB-3D/SMABRE. VTT has over ten years' experience of coupling neutronic and thermal-hydraulic codes, but this benchmark was the first time these two codes, both developed at VTT, were coupled together. The coupled code system is fast and efficient; the total computation time of the 100-s transient in the third exercise was 16 min on a modern UNIX workstation. The results of all the exercises are similar to those of the other participants. In order to demonstrate the effect of secondary circuit modeling on the results, three different cases were calculated. In case 1 there is no phase separation in the steam lines and no flow reversal in the aspirator. In case 2 flow reversal in the aspirator is allowed, but there is no phase separation in the steam lines. Finally, in case 3 the drift-flux model is used for the phase separation in the steam lines, but aspirator flow reversal is not allowed. With these two modeling variations, it is possible to cover a remarkably broad range of results. The maximum power level reached after the reactor trip varies from 534 to 904 MW, and the time of the power maximum varies over a range of close to 30 s. Compared to the total calculated transient time of 100 s, the effect of the secondary-side modeling is extremely important
Directory of Open Access Journals (Sweden)
Wonkyeong Kim
2015-01-01
A high-leakage core has been known to be a challenging problem not only for the two-step homogenization approach but also for direct heterogeneous approaches. In this paper the DIMPLE S06 core, a small high-leakage core, is analyzed by a direct heterogeneous modeling approach and by a two-step homogenization modeling approach, using contemporary code systems developed for reactor core analysis. The focus of this work is a comprehensive comparative analysis of the conventional approaches and codes against a small core design, the DIMPLE S06 critical experiment. The calculation procedure for the two approaches is presented explicitly. The comparative analysis covers the key neutronics parameters: the multiplication factor and the assembly power distribution. Comparison of the two-group homogenized cross sections from the lattice physics codes shows that the generated transport cross section differs significantly depending on the transport approximation used to treat the anisotropic scattering effect. The necessity of assembly discontinuity factors (ADFs) to correct the discontinuity at assembly interfaces is clearly demonstrated by the flux distributions and the results of the two-step approach. Finally, the two approaches give consistent results for all codes, while the comparison with the reference generated by MCNP shows significant error for all codes except the other Monte Carlo code, SERPENT2.
Energy Technology Data Exchange (ETDEWEB)
Ohta, Masayuki, E-mail: ohta.masayuki@jaea.go.jp [Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195 (Japan); Takakura, Kosuke; Ochiai, Kentaro; Sato, Satoshi; Konno, Chikara [Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195 (Japan)
2013-10-15
In order to examine the basic performance of the TRIPOLI code, two types of analyses were carried out with TRIPOLI-4.4 and MCNP5-1.40: a simple model calculation and an analysis of iron fusion neutronics experiments with DT neutrons at the Fusion Neutronics Source (FNS) facility of the Japan Atomic Energy Agency (JAEA). In the simple model calculation, we adopted a sphere of 0.5 m radius with a 20 MeV neutron source at the center and calculated the leakage neutron spectra from the sphere. We also analyzed in situ and time-of-flight (TOF) experiments for iron at JAEA/FNS. For the in situ experiment, neutron spectra and reaction rates for dosimetry reactions were calculated at several points inside the assembly. For the TOF experiment, angular neutron leakage spectra from the assembly were calculated. Results with TRIPOLI were comparable to those with MCNP in most calculations, but a difference between the TRIPOLI and MCNP results, probably caused by inadequate treatment of inelastic scattering data in TRIPOLI, appears in some calculations.
International Nuclear Information System (INIS)
The calculations performed for the Almaraz Unit 2 PWR using the code packages of the Atomic Energy Corporation of South Africa Ltd. are summarized. These calculations were done as part of the IAEA Coordinated Research Programme on In-Core Fuel Management Code Package Validation for LWRs. A brief description of the one-dimensional cross section generation package as well as of the Level II (scoping type) global core calculational package which was used is given. Detailed results are presented in several appendices. 29 figs., 20 tabs., 10 refs
Development of common user data model for APOLLO3 and MARBLE and application to benchmark problems
International Nuclear Information System (INIS)
A Common User Data Model, CUDM, has been developed for the purpose of benchmark calculations between the APOLLO3 and MARBLE code systems. The current version of CUDM was designed for core calculation benchmark problems with 3-dimensional Cartesian (3-D XYZ) geometry. CUDM is able to manage all input/output data such as 3-D XYZ geometry, effective macroscopic cross sections, effective multiplication factor and neutron flux. In addition, visualization tools for geometry and neutron flux were included. CUDM was designed by the object-oriented technique and implemented using the Python programming language. Based on CUDM, a prototype system for benchmark calculations, CUDM-benchmark, was also developed. The CUDM-benchmark supports input/output data conversion for the IDT solver in APOLLO3, and the TRITAC and SNT solvers in MARBLE. In order to evaluate the pertinence of CUDM, the CUDM-benchmark was applied to benchmark problems proposed by T. Takeda, G. Chiba and I. Zmijarevic. It was verified that the CUDM-benchmark successfully reproduced the results calculated with reference input data files, and provided consistent results among all the solvers by using one common input data set defined by CUDM. In addition, a detailed benchmark calculation for the Chiba benchmark was performed using the CUDM-benchmark. The Chiba benchmark is a neutron transport benchmark problem for a fast critical assembly without homogenization. This benchmark problem consists of 4 core configurations which have different sodium void regions, and each core configuration is defined by more than 5,000 fuel/material cells. In this application, it was found that the results of the IDT and SNT solvers agreed well with the reference results of a Monte Carlo code. In addition, model effects such as the quadrature set effect, Sn order effect and mesh size effect were systematically evaluated and summarized in this report. (author)
VVER-related burnup credit calculations
International Nuclear Information System (INIS)
The calculations related to a VVER burnup credit calculational benchmark proposed to the Eastern and Central European research community in collaboration with the OECD/NEA/NSC Burnup Credit Criticality Benchmark Working Group (working under WPNCS - Working Party on Nuclear Criticality Safety) are described. The results of a three-year effort by analysts from the Czech Republic, Finland, Germany, Hungary, Russia, Slovakia and the United Kingdom are summarized and commented on. (author)
Turkington, M. D.; Ballance, C. P.; Hibbert, A.; Ramsbottom, C. A.
2016-08-01
In this work we explore the validity of employing a modified version of the nonrelativistic structure code civ3 for heavy, highly charged systems, using Na-like tungsten as a simple benchmark. Consequently, we present radiative and subsequent collisional atomic data compared with corresponding results from a fully relativistic structure and collisional model. Our motivation for this line of study is to benchmark civ3 against the relativistic grasp0 structure code. This is an important study as civ3 wave functions in nonrelativistic R -matrix calculations are computationally less expensive than their Dirac counterparts. There are very few existing data for the W LXIV ion in the literature with which we can compare except for an incomplete set of energy levels available from the NIST database. The overall accuracy of the present results is thus determined by the comparison between the civ3 and grasp0 structure codes alongside collisional atomic data computed by the R -matrix Breit-Pauli and Dirac codes. It is found that the electron-impact collision strengths and effective collision strengths computed by these differing methods are in good general agreement for the majority of the transitions considered, across a broad range of electron temperatures.
Energy Technology Data Exchange (ETDEWEB)
Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Celik, Cihangir [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dunn, Michael E [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wagner, John C [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); McMahan, Kimberly L [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Authier, Nicolas [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille (France); Jacquet, Xavier [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille (France); Rousseau, Guillaume [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille (France); Wolff, Herve [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille (France); Savanier, Laurence [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille (France); Baclet, Nathalie [French Atomic Energy Commission (CEA), Centre de Valduc, Is-sur-Tille (France); Lee, Yi-kang [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette (France); Trama, Jean-Christophe [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette (France); Masse, Veronique [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette (France); Gagnier, Emmanuel [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette (France); Naury, Sylvie [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette (France); Blanc-Tranchant, Patrick [French Atomic Energy Commission (CEA), Centre de Saclay, Gif sur Yvette (France); Hunter, Richard [Babcock International Group (United Kingdom); Kim, Soon [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dulik, George Michael [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, Kevin H. [Y-12 National Security Complex, Oak Ridge, TN (United States)
2015-01-01
In October 2010, a series of benchmark experiments were conducted at the French Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA) Valduc SILENE facility. These experiments were a joint effort between the United States Department of Energy Nuclear Criticality Safety Program and the CEA. The purpose of these experiments was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems. This series of experiments consisted of three single-pulsed experiments with the SILENE reactor. For the first experiment, the reactor was bare (unshielded), whereas in the second and third experiments, it was shielded by lead and polyethylene, respectively. The polyethylene shield of the third experiment had a cadmium liner on its internal and external surfaces, which vertically was located near the fuel region of SILENE. During each experiment, several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor. Nearly half of the foils and TLDs had additional high-density magnetite concrete, high-density barite concrete, standard concrete, and/or BoroBond shields. CEA Saclay provided all the concrete, and the US Y-12 National Security Complex provided the BoroBond. Measurement data from the experiments were published at the 2011 International Conference on Nuclear Criticality (ICNC 2011) and the 2013 Nuclear Criticality Safety Division (NCSD 2013) topical meeting. Preliminary computational results for the first experiment were presented in the ICNC 2011 paper, which showed poor agreement between the computational results and the measured values of the foils shielded by concrete. Recently the hydrogen content, boron content, and density of these concrete shields were further investigated within the constraints of the previously available data. New computational results for the first experiment are now available
TRIGA Mark II benchmark experiment
International Nuclear Information System (INIS)
The experimental results of startup tests after reconstruction and modification of the TRIGA Mark II reactor in Ljubljana are presented. The experiments were performed with a completely fresh, compact, and uniform core. The operating conditions were well defined and controlled, so that the results can be used as a benchmark test case for TRIGA reactor calculations. Both steady-state and pulse mode operation were tested. In this paper, the following steady-state experiments are treated: critical core and excess reactivity, control rod worths, fuel element reactivity worth distribution, fuel temperature distribution, and fuel temperature reactivity coefficient
Larsson, Cecilia
2010-01-01
A few years ago Westinghouse started the development of a new method for criticality calculations for spent nuclear fuel storage pools called "PHOENIX-to-MCNP" (PHX2MCNP). PHX2MCNP transfers burn-up data from the code PHOENIX for use in MCNP in order to calculate the criticality. This thesis describes work whose purpose is to further validate the new method, first by validating the software MCNP5 at water temperatures higher than room temperature and, in a second step, continuing the development...
Mielke, Steven L.; Schwenke, David W.; Peterson, Kirk A.
2005-06-01
We present a detailed ab initio study of the effect that the Born-Oppenheimer diagonal correction (BODC) has on the saddle-point properties of the H3 system and its isotopomers. Benchmark values are presented that are estimated to be within 0.1 cm⁻¹ of the complete configuration-interaction limit. We consider the basis set and correlation treatment requirements for accurate BODC calculations, and both are observed to be more favorable than for the Born-Oppenheimer energies. The BODC raises the H + H2 barrier height by 0.1532 kcal/mol and slightly narrows the barrier, with the imaginary frequency increasing by ~2%.
Jansky, B; Turzik, Z; Kyncl, J; Cvachovec, F; Trykov, L A; Volkov, V S
2002-01-01
Neutron and gamma spectra measurements have been made for benchmark iron spherical assemblies with diameters of 30, 50 and 100 cm. ²⁵²Cf neutron sources with different emission rates were placed at the centre of the iron spheres. In the first stage of the project, independent laboratories took part in the leakage spectra measurements. The proton recoil method was used with stilbene crystals and hydrogen proportional counters. The working range of the spectrometers is 0.01 to 16 MeV for neutrons and 0.40 to 12 MeV for gamma rays. Adequate calculations have been carried out. It is proposed to analyse carefully the mixed neutron and gamma leakage spectrum from the iron sphere of 50 cm diameter and then to adopt that field as a standard.
Calculation of the critical exponents by a renormalization of the Ornstein-Zernike equation
Zhang, Q.; Badiali, J. P.
1991-09-01
We calculate the critical exponents at the liquid-vapor critical point by using the classical ingredients of the liquid-state theory. Two coupling constants are defined at a microscopic level. The closure of the Ornstein-Zernike equation is given by the Callan-Symanzik equation from which we determine the position of the fixed point. The role of the three-body direct-correlation function is emphasized. A comparison between this work and the standard theory of critical phenomena based on the Landau-Ginzburg-Wilson Hamiltonian is presented.
Criticality calculations of the high-density spent fuel storage of the water-pool type
International Nuclear Information System (INIS)
High-density spent fuel racks increase the capacity of a spent fuel pit severalfold. The minimum cell pitch is limited mainly by the allowed multiplication factor of the system. Detailed criticality calculations have to be performed in order to determine the minimum allowable cell pitch. This report gives the reactivity calculations for the spent fuel pit as functions of cell pitch, cooling-water density, boration, and temperature. (author)
Assessment of formulas for calculating critical concentration by the agar diffusion method.
Drugeon, H. B.; Juvin, M. E.; Caillon, J.; Courtieu, A. L.
1987-01-01
The critical concentration of antibiotic was calculated by using the agar diffusion method with disks containing different antibiotic loads. Several calculation formulas (based on Fick's law) are currently in use, devised by Cooper and Woodman (the best known) and by Vesterdal. The results obtained with these formulas were compared with the MICs obtained by the agar dilution method. A total of 91 strains and two cephalosporins (cefotaxime and ceftriaxone) were studied.
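The Fick's-law reasoning behind such formulas can be illustrated schematically: the disk load m and the inhibition zone radius r are related approximately by ln m = a + b·r², so a linear fit of ln m against r² lets one extrapolate a critical concentration parameter. The sketch below uses synthetic data and this deliberately simplified relation, not the exact Cooper-Woodman or Vesterdal formulas:

```python
import math

def fit_log_load_vs_r2(loads_ug, radii_mm):
    """Least-squares fit of ln(load) = a + b*r^2 (Fick's-law-type relation).

    Returns (a, b); exp(a) is then a rough estimate of the critical
    concentration parameter extrapolated to zero zone radius.
    """
    xs = [r * r for r in radii_mm]
    ys = [math.log(m) for m in loads_ug]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Synthetic disk-diffusion data consistent with ln(m) = a + b*r^2
# (a_true and b_true are hypothetical values chosen for illustration).
a_true, b_true = math.log(0.25), 0.035
loads = [5.0, 10.0, 30.0, 100.0]   # disk loads, micrograms
radii = [math.sqrt((math.log(m) - a_true) / b_true) for m in loads]

a_fit, b_fit = fit_log_load_vs_r2(loads, radii)
print(f"estimated critical parameter ~ {math.exp(a_fit):.3f}")
```

With noise-free synthetic data the fit recovers the generating parameters exactly; with real zone diameters the scatter of the fit is part of what the paper's comparison against agar-dilution MICs probes.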
Calculation of Henry constant on the base of critical parameters of adsorbable gas
International Nuclear Information System (INIS)
Henry constants were calculated using the correlation between the critical parameters Pc and Tc and the adsorption energy, determined by the value of the internal pressure in the molecular field of the adsorbent. The calculated Henry constants for Ar, Kr and Xe adsorbed by MoS2 and zeolite NaX are compared with the experimental ones. The state of the adsorbed molecules is evaluated
Mládek, Arnošt; Krepl, Miroslav; Svozil, Daniel; Cech, Petr; Otyepka, Michal; Banáš, Pavel; Zgarbová, Marie; Jurečka, Petr; Sponer, Jiří
2013-05-21
The DNA sugar-phosphate backbone has a substantial influence on the DNA structural dynamics. Structural biology and bioinformatics studies revealed that the DNA backbone in experimental structures samples a wide range of distinct conformational substates, known as rotameric DNA backbone conformational families. Their correct description is essential for methods used to model nucleic acids and is known to be the Achilles heel of force field computations. In this study we report the benchmark database of MP2 calculations extrapolated to the complete basis set of atomic orbitals with aug-cc-pVTZ and aug-cc-pVQZ basis sets, MP2(T,Q), augmented by ΔCCSD(T)/aug-cc-pVDZ corrections. The calculations are performed in the gas phase as well as using a COSMO solvent model. This study includes a complete set of 18 established and biochemically most important families of DNA backbone conformations and several other salient conformations that we identified in experimental structures. We utilize an electronically sufficiently complete DNA sugar-phosphate-sugar (SPS) backbone model system truncated to prevent undesired intramolecular interactions. The calculations are then compared with other QM methods. The BLYP and TPSS functionals supplemented with Grimme's D3(BJ) dispersion term provide the best tradeoff between computational demands and accuracy and can be recommended for preliminary conformational searches as well as calculations on large model systems. Among the tested methods, the best agreement with the benchmark database has been obtained for the double-hybrid DSD-BLYP functional in combination with a quadruple-ζ basis set, which is, however, computationally very demanding. The new hybrid density functionals PW6B95-D3 and MPW1B95-D3 yield outstanding results and even slightly outperform the computationally more demanding PWPB95 double-hybrid functional. B3LYP-D3 is somewhat less accurate compared to the other hybrids. Extrapolated MP2(D,T) calculations are not as
Verification and validation benchmarks.
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, William Louis; Trucano, Timothy Guy
2007-02-01
Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
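The code-verification idea described above can be illustrated with a minimal sketch: using a manufactured (here, simply analytical) solution, compute a scheme's observed order of accuracy from errors on two grids and compare it with the formal order. The scheme and solution below are invented for illustration and are not tied to any code named in the abstract.

```python
import math

def d2_central(u, h):
    # second-order central difference for u'' at interior points
    return [(u[i-1] - 2*u[i] + u[i+1]) / h**2 for i in range(1, len(u)-1)]

def max_error(n):
    # manufactured solution u = sin(x) on [0, pi]; exact u'' = -sin(x)
    h = math.pi / n
    x = [i * h for i in range(n + 1)]
    u = [math.sin(xi) for xi in x]
    approx = d2_central(u, h)
    exact = [-math.sin(x[i]) for i in range(1, n)]
    return max(abs(a - e) for a, e in zip(approx, exact))

# observed order of accuracy from two grid refinements
e1, e2 = max_error(32), max_error(64)
p = math.log(e1 / e2) / math.log(2.0)
print(round(p, 2))  # ≈ 2.0 for a correctly coded second-order scheme
```

A coding bug in the stencil would typically show up as an observed order well below the formal order, which is exactly the signal such verification benchmarks are designed to expose.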
CB2 result evaluation (VVER-440 burnup credit benchmark)
International Nuclear Information System (INIS)
The second part of the four-part international calculational benchmark on VVER burnup credit (CB2), prepared in collaboration with the OECD/NEA/NSC Burnup Credit Criticality Benchmarks Working Group and proposed to the AER research community, has been evaluated. The evaluated results of calculations performed by analysts from Cuba, the Czech Republic, Finland, Germany, Russia, Slovakia and the United Kingdom are presented. The goal of this study is to compare isotopic concentrations calculated by the participants using various codes and libraries for depletion of the VVER-440 fuel pin cell. No measured values were available for comparison. (author)
Weterings, Peter J J M; Loftus, Christine; Lewandowski, Thomas A
2016-08-22
Potential adverse effects of chemical substances on thyroid function are usually examined by measuring serum levels of thyroid-related hormones. Instead, recent risk assessments for thyroid-active chemicals have focussed on iodine uptake inhibition, an upstream event that by itself is not necessarily adverse. Establishing the extent of uptake inhibition that can be considered de minimis, the chosen benchmark response (BMR), is therefore critical. The BMR values selected by two international advisory bodies were 5% and 50%, a difference that had correspondingly large impacts on the estimated risks and health-based guidance values that were established. Potential treatment-related inhibition of thyroidal iodine uptake is usually determined by comparing thyroidal uptake of radioactive iodine (RAIU) during treatment with a single pre-treatment RAIU value. In the present study it is demonstrated that the physiological intra-individual variation in iodine uptake is much larger than 5%. Consequently, in-treatment RAIU values, expressed as a percentage of the pre-treatment value, have an inherent variation that needs to be considered when conducting dose-response analyses. Based on statistical and biological considerations, a BMR of 20% is proposed for benchmark dose analysis of human thyroidal iodine uptake data, to take the inherent variation in relative RAIU data into account. Implications for the tolerated daily intakes for perchlorate and chlorate, recently established by the European Food Safety Authority (EFSA), are discussed. PMID:27268963
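To illustrate how strongly the chosen BMR drives the resulting benchmark dose, here is a minimal sketch using a purely hypothetical Hill-type inhibition curve; the `ic50` and `hill` parameters are illustrative placeholders, not values fitted to any RAIU data set.

```python
def relative_raiu(dose, ic50=10.0, hill=1.0):
    # hypothetical Hill-type inhibition: in-treatment uptake as a
    # fraction of the pre-treatment (baseline) uptake
    return 1.0 / (1.0 + (dose / ic50) ** hill)

def benchmark_dose(bmr, lo=0.0, hi=1000.0, tol=1e-9):
    # bisection for the dose at which uptake falls to (1 - BMR) of baseline
    target = 1.0 - bmr
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if relative_raiu(mid) > target:
            lo = mid          # dose too small, uptake still above target
        else:
            hi = mid
    return 0.5 * (lo + hi)

bmd20 = benchmark_dose(0.20)  # BMR = 20% as proposed in the abstract
bmd05 = benchmark_dose(0.05)  # BMR = 5% for comparison
print(bmd20, bmd05)
```

Even on this toy curve the 5% BMR yields a benchmark dose several times lower than the 20% BMR, which is the mechanism behind the "correspondingly large impacts" on guidance values noted above.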
Energy Technology Data Exchange (ETDEWEB)
M. Bitter; M.F. Gu; L.A. Vainshtein; P. Beiersdorfer; G. Bertschinger; O. Marchuk; R. Bell; B. LeBlanc; K.W. Hill; D. Johnson; L. Roquemore
2003-08-29
Dielectronic satellite spectra of helium-like argon, recorded with a high-resolution X-ray crystal spectrometer at the National Spherical Torus Experiment, were found to be inconsistent with existing predictions resulting in unacceptable values for the power balance and suggesting the unlikely existence of non-Maxwellian electron energy distributions. These problems were resolved with calculations from a new atomic code. It is now possible to perform reliable electron temperature measurements and to eliminate the uncertainties associated with determinations of non-Maxwellian distributions.
International Nuclear Information System (INIS)
A series of criticality benchmark experiments with a small LWR-type core, reflected by 30 cm of lead, was defined jointly by SEC (Service d'Etude de Criticite), Fontenay-aux-Roses, and SRD (Safety and Reliability Directorate). These experiments are very representative of the reflecting effect of lead, since the contribution of the lead to the reactivity was assessed at about 30% in Δk. The experiments were carried out by SRSC (Service de Recherche en Surete et Criticite), Valduc, in December 1983 in the sub-critical facility called APPARATUS B. In addition, they confirmed and quantified the effect on reactivity of a water gap between the core and the lead reflector: with a water gap of less than 1 cm, the reactivity can be greater than that of the core reflected directly by the lead or by over 20 cm of water. The experimental results were analysed extensively by SRD with the aid of the MONK Monte Carlo code and to some extent by SEC with the aid of the MORET Monte Carlo code. All the results obtained are presented in the summary tables. These experiments allowed the different available cross-section libraries to be compared
Sakamoto, Y
2002-01-01
For the prevention of nuclear disasters, information is needed on the dose equivalent rate distribution inside and outside the site, and on energy spectra. A three-dimensional radiation transport calculation code is a useful tool for site-specific detailed analysis that takes facility structures into account. For the prediction of individual doses in future countermeasures, it is important to confirm the reliability of the methods for evaluating dose equivalent rate distributions and energy spectra with a Monte Carlo radiation transport calculation code, and to identify the factors that influence the dose equivalent rate distribution outside the site. The reliability of the radiation transport calculation code and the factors influencing the dose equivalent rate distribution were examined through analyses of the criticality accident at the JCO uranium processing plant on September 30, 1999. The radiation transport calculations, including the burn-up calculations, were done by using the structural info...
International Nuclear Information System (INIS)
Validation of criticality calculations using SCALA was performed using data presented in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. This paper contains the results of statistical analysis of discrepancies between calculated and benchmark-model keff and conclusions about uncertainties of criticality prediction for different types of multiplying systems following from this analysis. (authors)
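A minimal sketch of the kind of statistical analysis of calculated-versus-benchmark keff discrepancies described above. The C/E values below are invented for illustration; they are not results from the handbook or from any code validation.

```python
import statistics

# hypothetical calculated-to-benchmark keff ratios (C/E) for a set of
# benchmark cases of one system type
ce = [1.0012, 0.9981, 1.0043, 0.9995, 1.0027, 0.9968, 1.0019]

bias = statistics.mean(ce) - 1.0   # mean deviation from the benchmark value
spread = statistics.stdev(ce)      # sample standard deviation of C/E
# uncertainty of criticality prediction is often quoted as a band of the
# form bias +/- k * spread, with k taken from tolerance-factor tables
print(f"bias = {bias*1e5:.0f} pcm, spread = {spread*1e5:.0f} pcm")
```

Grouping the benchmarks by system type before computing such statistics, as the abstract indicates, lets the bias and spread be stated separately for each class of multiplying system.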
Enrico Fermi Fast Reactor Spent Nuclear Fuel Criticality Calculations: Degraded Mode
International Nuclear Information System (INIS)
The objective of this calculation is to characterize the nuclear criticality safety concerns associated with the codisposal of the Department of Energy's (DOE) Enrico Fermi (EF) Spent Nuclear Fuel (SNF) in a 5-Defense High-Level Waste (5-DHLW) Waste Package (WP) emplaced in a Monitored Geologic Repository (MGR). The scope of this calculation is limited to the determination of the effective neutron multiplication factor (keff) for the degraded-mode internal configurations of the codisposal WP. The results of this calculation and those of Ref. 8 will be used to evaluate criticality issues and to support the analysis that will be performed to demonstrate the viability of the codisposal concept for the Monitored Geologic Repository
Energy Technology Data Exchange (ETDEWEB)
Koponen, B.L.; Hampel, V.E.
1982-10-21
This compilation contains 688 complete summaries of papers on nuclear criticality safety as presented at meetings of the American Nuclear Society (ANS). The selected papers contain criticality parameters for fissile materials derived from experiments and calculations, as well as criticality safety analyses for fissile material processing, transport, and storage. The compilation was developed as a component of the Nuclear Criticality Information System (NCIS) now under development at the Lawrence Livermore National Laboratory. The compilation is presented in two volumes: Volume 1 contains a directory to the ANS Transactions volume and page number where each summary was originally published, the author concordance, and the subject concordance derived from the keyphrases in titles. Volume 2 contains - in chronological order - the full-text summaries, reproduced here by permission of the American Nuclear Society from their Transactions, volumes 1-41.
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.
1987-01-01
Full configuration interaction (CI) calculations on the ground states of N2, NO, and O2 using a DZP Gaussian basis are compared with single-reference SDCI and coupled pair functional (CPF) approaches, as well as with CASSCF multireference CI approaches. The CASSCF/MRCI technique is found to describe multiple bonds as well as single bonds. Although the coupled pair functional approach gave chemical accuracy (1 kcal/mol) for bonds involving hydrogen, larger errors occur in the CPF approach for the multiply bonded systems considered here. CI studies on the ¹Σg⁺ state of N2, including all single, double, triple, and quadruple excitations, show that triple excitations are very important for the multiple-bond case and account for most of the deficiency in the coupled pair functional methods.
Energy Technology Data Exchange (ETDEWEB)
Parish, T.A. [Texas A and M Univ., College Station, TX (United States). Nuclear Engineering Dept.; Mosteller, R.D. [Los Alamos National Lab., NM (United States); Diamond, D.J. [Brookhaven National Lab., Upton, NY (United States); Gehin, J.C. [Oak Ridge National Lab., TN (United States)
1998-12-31
This paper presents a summary of the results obtained by all of the contributors to the Uranium Benchmark Problem of the ANS Ad hoc Committee on Reactor Physics Benchmarks. The benchmark problem was based on critical experiments which mocked-up lattices typical of PWRs. Three separate cases constituted the benchmark problem. These included a uniform lattice, an assembly-type lattice with water holes and an assembly-type lattice with pyrex rods. Calculated results were obtained from eighteen separate organizations from all over the world. Some organizations submitted more than one set of results based on different calculational methods and cross section data. Many of the most widely used assembly physics and core analysis computer codes and neutron cross section data libraries were applied by the contributors.
DEFF Research Database (Denmark)
Hougaard, Jens Leth; Tvede, Mich
2002-01-01
Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...
DEFF Research Database (Denmark)
Lawson, Lartey; Nielsen, Kurt
2005-01-01
We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency....
A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation
International Nuclear Information System (INIS)
Full core calculations are very useful and important in reactor physics analysis, especially for computing full-core power distributions, optimizing refueling strategies and analyzing fuel depletion. To reduce the computing time and accelerate convergence, a method named the Response Matrix Monte Carlo (RMMC) method, based on analog Monte Carlo simulation, was used to calculate fixed-source neutron transport problems in repeated structures. To make the calculations more accurate, we extend the RMMC method to non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate more complex geometries than repeated structures alone. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and the variance of the calculations. Finally, future research directions are discussed at the end of this paper to make the RMMC and RMMC+MC methods more powerful. (authors)
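The generational keff estimate at the heart of such criticality calculations can be sketched as a power iteration on a response (fission) matrix, in which element F[i][j] is the expected number of fission neutrons born in region i per fission neutron born in region j. The 3x3 matrix below is invented for illustration and does not represent any model from the paper.

```python
# hypothetical fission (response) matrix for three coupled regions;
# keff is its dominant eigenvalue
F = [[0.60, 0.25, 0.05],
     [0.25, 0.55, 0.25],
     [0.05, 0.25, 0.60]]

def keff_power_iteration(F, iters=200):
    n = len(F)
    s = [1.0 / n] * n                     # initial fission source shape
    k = 1.0
    for _ in range(iters):
        new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(new) / sum(s)             # generation-wise keff estimate
        total = sum(new)
        s = [x / total for x in new]      # renormalize the source shape
    return k, s

k, source = keff_power_iteration(F)
print(round(k, 4))  # ≈ 0.9571 for this particular matrix
```

In Monte Carlo practice the matrix elements would be tallied stochastically rather than given exactly, but the eigenvalue iteration on the source shape is the same.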
Dujko, S; White, R D; Petrović, Z Lj; Robson, R E
2010-04-01
A multiterm solution of the Boltzmann equation has been developed and used to calculate transport coefficients of charged-particle swarms in gases under the influence of electric and magnetic fields crossed at arbitrary angles when nonconservative collisions are present. The hierarchy resulting from a spherical-harmonic decomposition of the Boltzmann equation in the hydrodynamic regime is solved numerically by representing the speed dependence of the phase-space distribution function in terms of an expansion in Sonine polynomials about a Maxwellian velocity distribution at an internally determined temperature. Results are given for electron swarms in certain collisional models for ionization and attachment over a range of angles between the fields and field strengths. The implicit and explicit effects of ionization and attachment on the electron-transport coefficients are considered using physical arguments. It is found that the difference between the two sets of transport coefficients, bulk and flux, resulting from the explicit effects of nonconservative collisions, can be controlled either by the variation in the magnetic field strengths or by the angles between the fields. In addition, it is shown that the phenomena of ionization cooling and/or attachment cooling/heating previously reported for dc electric fields carry over directly to the crossed electric and magnetic fields. The results of the Boltzmann equation analysis are compared with those obtained by a Monte Carlo simulation technique. The comparison confirms the theoretical basis and numerical integrity of the moment method for solving the Boltzmann equation and gives a set of well-established data that can be used to test future codes and plasma models. PMID:20481843
An Analytical Solution for Lateral Buckling Critical Load Calculation of Leaning-Type Arch Bridge
Directory of Open Access Journals (Sweden)
Ai-rong Liu
2014-01-01
An analytical solution for the lateral buckling critical load of leaning-type arch bridges is presented in this paper. New tangential and radial buckling models of the transverse brace between the main and stable arch ribs are established. Based on the Ritz method, the analytical solution for the lateral buckling critical load of the leaning-type arch bridge with different central angles of main arch ribs and leaning arch ribs under different boundary conditions is derived for the first time. Comparison between the analytical results and the FEM calculated results shows that the analytical solution presented in this paper is sufficiently accurate. The parametric analysis results show that the lateral buckling critical load of the arch bridge with fixed boundary conditions is about 1.14 to 1.16 times as large as that of the arch bridge with hinged boundary conditions. The lateral buckling critical load increases by approximately 31.5% to 41.2% when stable arch ribs are added, and the critical load increases as the inclined angle of the stable arch rib increases. The differences in the central angles of the main arch rib and the stable arch rib have little effect on the lateral buckling critical load.
Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements
Energy Technology Data Exchange (ETDEWEB)
J. D. Bess; T. L. Maddock; M. A. Marshall
2011-09-01
The Neutron Radiography (NRAD) reactor is a 250 kW TRIGA (Training, Research, Isotope Production, General Atomics) conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.
International Nuclear Information System (INIS)
The present paper summarizes calculation results for an international benchmark proposed by the Sodium-cooled Fast Reactor core Feed-back and Transient response (SFR-FT) group under the framework of the Working Party on scientific issues of Reactor Systems (WPRS) of the Nuclear Energy Agency of the OECD. It focuses on the large-size oxide-fueled SFR. The library effect on core performance characteristics and reactivity feedback coefficients is analyzed using sensitivity analysis. The effect of ultra-fine energy group calculation in effective cross-section generation is also analyzed. The discrepancy in the neutron multiplication factor is about 0.4% when changing from JENDL-4.0 to JEFF-3.1, and about -0.1% when changing from JENDL-4.0 to ENDF/B-VII.1. The main contributions to the discrepancy between JENDL-4.0 and ENDF/B-VII.1 are 240Pu capture, 238U inelastic scattering and 239Pu fission. Those to the discrepancy between JENDL-4.0 and JEFF-3.1 are 23Na inelastic scattering, 56Fe inelastic scattering, 238Pu fission, 240Pu capture, 240Pu fission, 238U inelastic scattering, 239Pu fission and the 239Pu nu-value. As for the sodium void reactivity, JEFF-3.1 and ENDF/B-VII.1 underestimate it by about 8% compared with JENDL-4.0. The main contributions to the discrepancy between JENDL-4.0 and ENDF/B-VII.1 are 23Na elastic scattering, 23Na inelastic scattering and 239Pu fission. That to the discrepancy between JENDL-4.0 and JEFF-3.1 is 23Na inelastic scattering. The ultra-fine energy group calculation increases the sodium void reactivity by 2%. (author)
Energy Technology Data Exchange (ETDEWEB)
Akasaka, Ryo [Faculty of Humanities, Kyushu Lutheran College, 3-12-16 Kurokami, Kumamoto 860-8520 (Japan)
2009-01-15
The critical point of the water + ammonia mixture was calculated directly from the Helmholtz free energy formulation. The calculation was performed according to the critical-point criteria expressed in terms of the derivatives of the Helmholtz free energy with respect to mole numbers. A smooth critical locus linking the critical points of pure water and pure ammonia was obtained. The critical locus showed good agreement with the most reliable experimental data. Simple correlations for the critical temperature, pressure, and molar volume as functions of composition were developed. The information obtained in this study is helpful for the design and simulation of cycles using the water + ammonia mixture as the working fluid. (author)
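The idea of locating a critical point from derivative criteria can be illustrated with a much simpler equation of state than the Helmholtz formulation of the paper. The sketch below uses the van der Waals equation with approximate literature constants for pure ammonia (the `a`, `b` values are illustrative, not from the paper), for which the criteria dP/dV = d2P/dV2 = 0 solve analytically.

```python
R = 8.314462618  # J/(mol K)

# approximate van der Waals constants for ammonia
a, b = 0.4225, 3.707e-5  # Pa m^6 mol^-2, m^3 mol^-1

# for a van der Waals fluid the critical-point criteria
#   dP/dV = 0 and d2P/dV2 = 0
# yield Vc = 3b, Tc = 8a/(27 R b), Pc = a/(27 b^2)
Vc = 3 * b
Tc = 8 * a / (27 * R * b)
Pc = a / (27 * b ** 2)

def dPdV(T, V):
    # first volume derivative of P = RT/(V - b) - a/V^2
    return -R * T / (V - b) ** 2 + 2 * a / V ** 3

print(round(Tc, 1), round(Pc / 1e5, 1))  # ≈ 406.2 K, 113.9 bar
# the first criterion is satisfied at (Tc, Vc) to within round-off
print(abs(dPdV(Tc, Vc)) / (2 * a / Vc ** 3) < 1e-6)
```

For the mixture, the analogous conditions involve second and third derivatives of the Helmholtz energy with respect to mole numbers and must be solved numerically along the composition axis, which is what produces the critical locus described above.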
Benchmark for Strategic Performance Improvement.
Gohlke, Annette
1997-01-01
Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)
Intact and Degraded Component Criticality Calculations of N Reactor Spent Nuclear Fuel
International Nuclear Information System (INIS)
The objective of this calculation is to perform intact and degraded mode criticality evaluations of the Department of Energy's (DOE) N Reactor Spent Nuclear Fuel codisposed in a 2-Defense High-Level Waste (2-DHLW)/2-Multi-Canister Overpack (MCO) Waste Package (WP) and emplaced in a monitored geologic repository (MGR) (see Attachment I). The scope of this calculation is limited to the determination of the effective neutron multiplication factor (keff) for both intact and degraded mode internal configurations of the codisposal waste package. This calculation will support the analysis that will be performed to demonstrate the technical viability for disposing of U-metal (N Reactor) spent nuclear fuel in the potential MGR
International Nuclear Information System (INIS)
The principle of corresponding states for the fluctuation structure, i.e. the spatial distribution of the various clusters of molecules caused by density fluctuations, in supercritical states around the critical point has been investigated. In this paper, we performed Molecular Dynamics (MD) simulations to extract the fluctuation structure around the critical points of 2-Center-Lennard-Jones (2CLJ) fluids, whose characteristics change with molecular elongation. First, we identified the critical points of 2CLJ fluids with comparatively short elongations by applying Lotfi's function, which correctly describes the liquid-vapor coexistence line of the Lennard-Jones (LJ) fluid, and successfully defined each critical point. Next, two methods were applied to estimate the fluctuation structure: one is the evaluation of the dispersion of the number of molecules in a given domain, and the other is the calculation of the static structure factor. As a result, for 2CLJ fluids with comparatively short molecular elongations, the principle of corresponding states is satisfied because of the small differences in the fluctuation structures extracted by the two methods. On the other hand, some results imply that the fluctuation may decrease in 2CLJ fluids with longer molecular elongations, although a more accurate evaluation of the critical points of those fluids is necessary for further investigation. (author)
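The first of the two estimation methods, the dispersion of the number of molecules in a subdomain, can be sketched in 1-D with synthetic point sets. The "clustered" configuration below is a crude stand-in for critical density fluctuations, not an MD result; all parameters are invented.

```python
import random
random.seed(1)

def count_fluctuation(points, box=1.0, cells=10):
    # dispersion of the particle count per subcell, var(N)/<N>;
    # ≈ 1 - 1/cells for an uncorrelated (Poisson-like) configuration,
    # and much larger when density fluctuations (clusters) are present
    counts = [0] * cells
    for x in points:
        counts[min(int(x / box * cells), cells - 1)] += 1
    mean = len(points) / cells
    var = sum((c - mean) ** 2 for c in counts) / cells
    return var / mean

# uncorrelated ("ideal gas") configuration
ideal = [random.random() for _ in range(10000)]

# crude clustered configuration: points bunched around a few centers
centers = [random.random() for _ in range(20)]
clustered = [min(max(random.gauss(random.choice(centers), 0.01), 0.0), 1.0)
             for _ in range(10000)]

print(count_fluctuation(ideal), count_fluctuation(clustered))
```

In an actual MD analysis the same dispersion would be tallied over subvolumes of the simulation box at several domain sizes, and its growth near the critical point signals the long-range density correlations the abstract refers to.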
International Nuclear Information System (INIS)
The BFS-62 critical experiments are currently used as benchmarks for verification of IPPE codes and nuclear data, which have been used in the study of loading a significant amount of Pu in fast reactors. The BFS-62 experiments were performed at the BFS-2 critical facility of IPPE (Obninsk). The experimental program was arranged in such a way that the effect of replacing the uranium dioxide blanket by a steel reflector, as well as the effect of replacing UOX by MOX, on the main characteristics of the reactor model was studied. A wide experimental program, including measurements of the criticality (keff), spectral indices, radial and axial fission rate distributions, control rod mock-up worth, sodium void reactivity effect (SVRE) and some other important nuclear physics parameters, was fulfilled in the core. A series of 4 BFS-62 critical assemblies was designed for studying the changes in BN-600 reactor physics from the existing state to a hybrid core. All the assemblies model the reactor state prior to refueling, i.e. with all control rod mock-ups withdrawn from the core. The following items are chosen for the analysis in this report: description of the critical assembly BFS-62-3A as the 3rd assembly in the series of 4 BFS critical assemblies studying the BN-600 reactor with MOX-UOX hybrid zone and steel reflector; development of a 3D homogeneous calculation model for the BFS-62-3A critical experiment as the mock-up of the BN-600 reactor with hybrid zone and steel reflector; evaluation of the measured nuclear physics parameters keff and SVRE (sodium void reactivity effect); preparation of adjusted equivalent measured values for keff and SVRE. The main series of calculations is performed using the 3D HEX-Z diffusion code TRIGEX in 26 groups with the ABBN-93 cross-section set. In addition, precise calculations are made, in 299 groups and Ps approximation in scattering, by the Monte Carlo code MMKKENO and the discrete ordinate code TWODANT. All calculations are based on the common system
The validity of the transport approximation in critical-size and reactivity calculations
International Nuclear Information System (INIS)
Elastically scattered neutrons are, in general, not distributed isotropically in the laboratory system, and a convenient way of taking this into account in neutron-transport calculations is to use the transport approximation. In this, the elastic cross-section is replaced by an elastic transport cross-section with an isotropic angular distribution. This leads to a considerable simplification in the neutron-transport calculation. In the present paper, the theoretical bases of the transport approximation in both one-group and many-group formalisms are given. The accuracy of the approximation is then studied in the multi-group case for a number of typical systems by means of the Sn method, using the isotropic and anisotropic versions of the method, which exist as alternative options of the machine code SAINT written at Aldermaston for use on IBM-709/7090 machines. The dependence of the results of the anisotropic calculations on the number of moments used to represent the angular distributions is also examined. The results of the various calculations are discussed, and an indication is given of the types of system for which the transport approximation is adequate and of those for which it is inadequate. (author)
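The substitution the abstract describes can be sketched in one group: the anisotropic elastic cross section is replaced by a transport cross section, sigma_tr = sigma_t - mu_bar * sigma_el, where mu_bar is the mean cosine of the elastic scattering angle (approximately 2/(3A) for scattering off a nucleus of mass number A). The one-group numbers below are illustrative graphite-like values, not evaluated-library data.

```python
def transport_xs(sigma_t, sigma_el, A):
    # transport approximation: fold the forward-peaked elastic scattering
    # into an effective isotropic cross section
    mu_bar = 2.0 / (3.0 * A)      # mean scattering cosine, ~2/(3A)
    return sigma_t - mu_bar * sigma_el

# illustrative one-group values for a graphite-like moderator, barns
sigma_t, sigma_el, A = 4.8, 4.7, 12
sig_tr = transport_xs(sigma_t, sigma_el, A)
D = 1.0 / (3.0 * sig_tr)          # diffusion coefficient (per unit density)
print(round(sig_tr, 3))           # ≈ 4.539 barns
```

For light nuclei mu_bar is large and the correction matters; for heavy nuclei it is small, which is one reason the adequacy of the approximation depends on the system, as the paper's survey of typical systems examines.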
Energy Technology Data Exchange (ETDEWEB)
Pruet, J; Brown, D A; Descalle, M
2006-05-22
The authors describe tools developed by the Computational Nuclear Physics group for testing the quality of internally developed nuclear data and the fidelity of translations from ENDF formatted data to ENDL formatted data used by Livermore. These tests include Sn calculations for the effective k value characterizing critical assemblies and for replacement coefficients of different materials embedded in the Godiva and Jezebel critical assemblies. For those assemblies and replacement materials for which reliable experimental information is available, these calculations provide an integral check on the quality of data. Because members of the ENDF and reactor communities use calculations for these same assemblies in their validation process, a comparison between their results with ENDF formatted data and their results with data translated into the ENDL format provides a strong check on the accuracy of translations. As a first application of the test suite they present a study comparing ENDL 99 and ENDF/B-V. They also consider the quality of the ENDF/B-V translation previously done by the Computational Nuclear Physics group. No significant errors are found.
Reference calculations on critical assemblies with Apollo2 code working with a fine multigroup mesh
International Nuclear Information System (INIS)
The objective of this thesis is to add to the multigroup transport code APOLLO2 the capability to perform deterministic reference calculations, for any type of reactor, using a very fine energy mesh of several thousand groups. This new reference tool allows us to validate the self-shielding model used in industrial applications, to perform depletion calculations, differential-effect calculations and critical buckling calculations, and to evaluate precisely data required by the self-shielding model. At its origin, APOLLO2 was designed to perform routine calculations with energy meshes of around one hundred groups. That is why, in the current format of cross-section libraries, almost every value of the multigroup energy transfer matrix is stored. As this format is not convenient for a high number of groups (in terms of memory size), we had to devise a new format for removal matrices and consequently to modify the code. In the new format, only some values of the removal matrices are kept (these values depend on a chosen reconstruction precision), the other ones being reconstructed by linear interpolation, which reduces the size of these matrices. We then had to show that APOLLO2 working with a fine multigroup mesh is capable of performing reference calculations on any assembly geometry. For that, we successfully carried out the validation with several calculations for which we compared APOLLO2 results (obtained with the universal mesh of 11276 groups) to results obtained with Monte Carlo codes (MCNP, TRIPOLI4). Physical analyses carried out with this new tool have been very fruitful and show a great potential for such an R and D tool. (author)
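The removal-matrix compression described above, keeping only some values and reconstructing the rest by linear interpolation, can be sketched as follows. The smooth model row and the fixed step size are invented for illustration; APOLLO2's actual format surely uses an adaptive, precision-driven selection of anchors.

```python
def compress(row, step):
    # keep only every `step`-th entry of a removal-matrix row (plus the last)
    kept = row[::step]
    if (len(row) - 1) % step:
        kept = kept + [row[-1]]
    return kept

def reconstruct(kept, n, step):
    # rebuild the full row by linear interpolation between the kept anchors
    idx = list(range(0, n, step))
    if idx[-1] != n - 1:
        idx.append(n - 1)
    out = [0.0] * n
    for (i0, v0), (i1, v1) in zip(zip(idx, kept), zip(idx[1:], kept[1:])):
        for i in range(i0, i1 + 1):
            t = (i - i0) / (i1 - i0)
            out[i] = v0 * (1 - t) + v1 * t
    return out

# smooth hypothetical transfer row over 101 "groups"
row = [1.0 / (1 + 0.1 * g) for g in range(101)]
approx = reconstruct(compress(row, 5), len(row), 5)
err = max(abs(a - b) for a, b in zip(approx, row))
print(err)  # small for smooth rows, with storage reduced roughly 5x
```

Trading reconstruction precision against the anchor spacing is exactly the knob the abstract mentions: a tighter precision keeps more values and saves less memory.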
Benchmarking and Performance Management
Directory of Open Access Journals (Sweden)
Adrian TANTAU
2010-12-01
Full Text Available The relevance of the chosen topic follows from the concept of firm efficiency: firm efficiency means revealed performance (how well the firm performs in its actual market environment), given the basic characteristics of firms and their markets that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance could be due to product innovation, management quality, or work organization; other factors can also contribute even if they are not directly observed by the researcher. The critical need for management to continuously improve their firm's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, consequently determine which performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons; hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe, and then proposes a method to forecast and benchmark, performance.
Criticality calculations for a spent fuel storage pool for a BWR type reactor
International Nuclear Information System (INIS)
In this work, the methodology for calculating the effective multiplication factor for the arrangement of spent fuel assemblies in the pool of a BWR type reactor is shown. Calculations were done for the spent fuel pool specified in the FSAR, assuming for the assemblies a conservative composition of high enrichment and no gadolinium, and giving credit to the stainless steel boxes of the racks that hold the assemblies. To carry out this simulation, the RECORD and MIXQUIC codes were used. With the RECORD code, two-energy-group macroscopic cross sections were obtained for the assumed assembly characteristics. The cross sections, as well as the dimensions of the racks that hold the fuel assemblies, were used as input data for the MIXQUIC code. With this code, criticality calculations in two dimensions were done, assuming no axial neutron leakage. Additional calculations were done assuming changes in temperature, spacing between fuel assemblies, and the thickness of the stainless steel box of the rack. The obtained results, including the effect of tolerances in temperature, weight and thickness, show that the arrangement in the pool, when the racks are fully loaded, is subcritical by less than 5% in Δk. (Author)
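A two-group criticality estimate like the one above reduces, for an infinite medium, to a closed-form expression for the multiplication factor. The sketch below uses the standard two-group formula with purely illustrative cross-section values (they are not the RECORD output for this pool):

```python
def k_inf_two_group(nu_sf1, nu_sf2, sa1, sa2, s12):
    """Two-group infinite-medium multiplication factor.
    Group 1 = fast, group 2 = thermal; s12 is the downscatter cross
    section, so removal from group 1 is absorption plus downscatter.
    All arguments are macroscopic cross sections in 1/cm."""
    sr1 = sa1 + s12                       # fast-group removal
    fast_fissions = nu_sf1 / sr1          # fissions per fast neutron born
    thermal_fissions = (nu_sf2 / sa2) * (s12 / sr1)
    return fast_fissions + thermal_fissions
```

In a real pool analysis the two-group constants come from a lattice code (RECORD here) and leakage is treated by the spatial solver; this closed form only shows how the group constants combine.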
The Criticality Calculation Of Fission Yield Of U-235 Solution And Its Radiation Dose
International Nuclear Information System (INIS)
A calculational assessment of the fission yield of U-235 solution in the extraction and evaporation units has been performed to predict the consequences of a criticality accident during the production of fuel elements for the research reactor. The Grover Tuck and fission distribution probability methods are used in this case. The fission distribution probability method yields 2.7 × 10^18 fissions for a uranium concentration of 200 g/litre and 2.5 × 10^18 fissions for 40 g/litre in the extraction unit. For the evaporation unit, it yields 3.1 × 10^18 fissions for 400 g/litre uranium and 1.77 × 10^18 fissions for 80 g/litre uranium. The Grover Tuck method gives 8.267 × 10^17 and 2.878 × 10^17 fissions, respectively. The radiation dose from the 200 g/litre solution is about 1450.29 rad for neutrons and 4785.96 rad for gamma rays
Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui
2004-01-01
A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, the Intel Math Kernel Library (MKL), the GOTO numerical library, and the AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled with updated versions of Fortran compilers such as the Intel Fortran compiler (ifc/efc) 7.1 and the PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when utilizing the original unmodified optimization options enclosed in the software. Nevertheless, if extensive compiler tuning options are used, the speed can be further increased by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction set (SSE2) is also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler performs better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and efficiency in the CL2 mode is a further 2.6% higher than in the CL2.5 mode. FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resulting performance impact suggests that the IA64 and AMD64 architectures are able to deliver significantly higher throughput than IA32, which is consistent with the SpecFPrate2000 benchmarks.
Podymaka, Valerii I.; Novohretskyi, Serhii M.
2014-01-01
The dynamic stability calculation method for the parallel operation of marine synchronous generators is examined. An improved algorithm for calculating the critical switch-off time of a short circuit is offered, which takes into account the salient-pole rotor of the synchronous generator. Calculation results that demonstrate the efficiency of the proposed algorithm are presented.
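The classical starting point for such critical switch-off (clearing) time calculations is the equal-area criterion for a single machine against an infinite bus. The sketch below is the textbook round-rotor case with electrical output dropping to zero during the fault — deliberately simpler than the paper's salient-pole treatment, which adds a reluctance torque term:

```python
import math

def critical_clearing(Pm, Pmax, H, f0=50.0):
    """Equal-area criterion for a round-rotor SMIB system, assuming the
    electrical power is zero during the fault.
    Pm   : mechanical input power [pu]
    Pmax : peak of the power-angle curve [pu]
    H    : inertia constant [s]
    Returns (critical clearing angle [rad], critical clearing time [s])."""
    d0 = math.asin(Pm / Pmax)              # pre-fault rotor angle
    dmax = math.pi - d0                    # maximum stable swing angle
    # critical clearing angle from equal accelerating/decelerating areas
    dcr = math.acos((Pm / Pmax) * (dmax - d0) + math.cos(dmax))
    ws = 2.0 * math.pi * f0                # synchronous speed [rad/s]
    # during the fault the rotor accelerates uniformly: delta(t) grows as t^2
    tcr = math.sqrt(4.0 * H * (dcr - d0) / (ws * Pm))
    return dcr, tcr
```

Accounting for rotor saliency, as the improved algorithm does, modifies the power-angle curve and hence both the critical angle and the resulting clearing time.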
Directory of Open Access Journals (Sweden)
Daniela Niculescu
2016-02-01
Full Text Available Organisational culture and employee engagement have been the focus of recent broad-based research efforts. Given the revealed importance of performance indicators for human capital, and their growing use to attach financial values to knowledge management assets, it becomes more and more critical to measure human capital value. The key for managers of Romanian financial services organisations (FSOs) is to consider that both human and financial values focus on adding value in every process and function of the organisation, and to sustain organisational profitability through the corporate culture, since culture is a powerful factor that helps a company engage talented people. There is substantial interest in using ROI for Learning and Development programmes, but while this is declared, Romanian FSOs do not yet have a consistent method to measure it. This study shows the criticality of connecting people to financial results; the data analysis suggests that ROI calculation has a positive impact on creating and fostering a powerful organisational culture, and that employees' awareness of ROI values within their organisation has a powerful effect on their sense of engagement. Our findings have a practical implication for the analysed industry by shaping a blueprint for a formal ROI measurement mechanism, an ROI calculation model for Romanian FSOs, in the form of a mechanism that could be employed when designing an ROI methodology for Romanian FSOs.
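The ROI figure at the heart of such studies is arithmetically simple: net programme benefits over fully loaded programme costs. A minimal sketch, with illustrative numbers rather than data from the study:

```python
def roi_percent(benefits, costs):
    """ROI for a Learning & Development programme, in percent:
    net monetary benefits over fully loaded programme costs."""
    return (benefits - costs) / costs * 100.0

def benefit_cost_ratio(benefits, costs):
    """Companion metric: gross benefits per unit of cost."""
    return benefits / costs
```

The hard part, which the ROI methodology addresses, is not this division but isolating and monetising the benefits attributable to the programme before plugging them in.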
Critical groups vs. representative person: dose calculations due to predicted releases from USEXA
Energy Technology Data Exchange (ETDEWEB)
Ferreira, N.L.D., E-mail: nelson.luiz@ctmsp.mar.mil.br [Centro Tecnologico da Marinha (CTM/SP), Sao Paulo, SP (Brazil); Rochedo, E.R.R., E-mail: elainerochedo@gmail.com [Instituto de Radiprotecao e Dosimetria (lRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Mazzilli, B.P., E-mail: mazzilli@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2013-07-01
The critical group of the Centro Experimental Aramar (CEA) site was previously defined based on the effluent releases to the environment from the facilities already operational at CEA. In this work, effective doses are calculated for members of the critical group considering the predicted potential uranium releases from the Uranium Hexafluoride Production Plant (USEXA). Essentially, this work studies how the resulting doses depend on the type of habit data used in the analysis, and two distinct situations are considered: (a) the use of average values obtained from official institutions (IBGE, IEA-SP, CNEN, IAEA) and from the literature; and (b) the use of the 95th percentile of the values derived from distributions fit to the obtained habit data. The first option corresponds to the way data were used to define the critical group of CEA in former assessments, while the second corresponds to the use of data in deterministic assessments, as recommended by the ICRP for estimating doses to the so-called 'representative person'. (author)
Performance of IRI-based ionospheric critical frequency calculations with reference to forecasting
Ünal, İbrahim; Şenalp, Erdem Türker; Yeşil, Ali; Tulunay, Ersin; Tulunay, Yurdanur
2011-02-01
Ionospheric critical frequency (foF2) is an important ionospheric parameter in telecommunication. Ionospheric processes are highly nonlinear and time varying. Thus, mathematical modeling based on physical principles is extremely difficult if not impossible. The authors forecast foF2 values by using neural networks and, in parallel, they calculate foF2 values based on the IRI model. The foF2 values were forecast 1 h in advance by using the Middle East Technical University Neural Network model (METU-NN) and the work was reported previously. Since then, the METU-NN has been improved. In this paper, 1 h in advance forecast foF2 values and the calculated foF2 values have been compared with the observed values considering the Slough (51.5°N, 0.6°W), Uppsala (59.8°N, 17.6°E), and Rome (41.8°N, 12.5°E) station foF2 data. The authors have considered the models alternative to each other. The performance results of the models are promising. The METU-NN foF2 forecast errors are smaller than the calculated foF2 errors. The models may be used in parallel employing the METU-NN as the primary source for the foF2 forecasting.
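The comparison between the METU-NN forecasts and the IRI-based calculations comes down to an error metric against the observed foF2 series. A minimal sketch of the usual choice, RMSE (the METU-NN model itself is not reproduced here; the series below are invented placeholders):

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between a forecast series and observations."""
    n = len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
```

Computing this for both the 1 h ahead neural-network forecasts and the IRI-calculated values over the same station data is what supports the paper's conclusion that the forecast errors are the smaller of the two.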
International Nuclear Information System (INIS)
Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments
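The computational bias quoted above is the systematic offset between calculated and benchmark keff values. A minimal sketch of how such a bias is tabulated over a set of configurations (the two values below simply illustrate the 0.9%–2.7% range reported, they are not the actual HTTR results):

```python
def keff_bias_percent(calculated, benchmark):
    """Mean calculated-minus-benchmark keff bias in percent,
    averaged over the evaluated core configurations."""
    diffs = [(c / e - 1.0) * 100.0 for c, e in zip(calculated, benchmark)]
    return sum(diffs) / len(diffs)
```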
Benchmarking & European Sustainable Transport Policies
DEFF Research Database (Denmark)
Gudmundsson, H.
2003-01-01
Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in future efforts to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...
Refinement of the critical angle calculation for the contrast reversal of oil slicks under sunglint
Lu, Yingcheng; Sun, Shaojie; Zhang, Minwei; Murch, Brock; Hu, Chuanmin
2016-01-01
It has long been observed that oil slicks under sunglint can reverse their optical contrast against nearby oil-free seawater. Such a phenomenon has been described through both empirical statistical analysis of the sunglint strength and modeled theoretically using a critical angle concept. The critical angle, in this model, is the angle at which the image pixels show no or negligible contrast between oiled and nonoiled seawater. Pixels away from this critical angle show either positive or negative contrast from the oil-free pixels. Although this concept has been fully demonstrated in the published literature, its calculation needs to be further refined to take into account: (1) the different refractive indices of oil slicks (from natural seeps) and seawater and (2) atmospheric effects in the sensor-measured radiance. Using measurements from the Moderate Resolution Imaging Spectroradiometer (MODIS) over oil films in the Gulf of Mexico, we show improvement in the modeled and MODIS-derived reflectance over oil slicks originated from natural seeps after incorporating these two factors in the model. Specifically, agreement between modeled and measured sunglint reflectance is found for both negative and positive-contrasting oil slicks. These results indicate that surface roughness and reflectance from oil films can be estimated given any solar/viewing geometry and surface wind. Further, this model might be used to correct the sunglint effect on thick oil under similar illumination conditions. Once proven possible, it may allow existing laboratory-based models, which estimate oil thickness after such corrections, to be applied to remote sensing imagery.
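The first refinement named above — using different refractive indices for oil and seawater — enters through the Fresnel reflectance of the surface, which governs the sunglint signal. A minimal sketch for a flat dielectric surface viewed from air (illustrative indices in the test; the full model also needs the Cox-Munk surface-slope statistics and the atmospheric correction):

```python
import math

def fresnel_reflectance(theta_i_deg, n):
    """Unpolarized Fresnel reflectance of a flat dielectric surface,
    incidence from air (n_air = 1) at angle theta_i_deg, medium index n."""
    ti = math.radians(theta_i_deg)
    if ti == 0.0:
        # normal incidence: closed form avoids 0/0 in the general formulas
        return ((n - 1.0) / (n + 1.0)) ** 2
    tt = math.asin(math.sin(ti) / n)      # refraction angle (Snell's law)
    rs = (math.sin(ti - tt) / math.sin(ti + tt)) ** 2   # s-polarized
    rp = (math.tan(ti - tt) / math.tan(ti + tt)) ** 2   # p-polarized
    return 0.5 * (rs + rp)
```

Because an oil film (n near 1.5) reflects more than seawater (n near 1.34) at the same geometry, the sign of the glint contrast depends on where the viewing geometry sits relative to the critical angle, which is the effect the refined model captures.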
Energy Technology Data Exchange (ETDEWEB)
A. Alsaed
2004-09-14
The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations and to calculate lower-bound tolerance limit (LBTL) values and determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of
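A lower-bound tolerance limit of the kind mentioned above is typically a one-sided statistical bound on the calculated keff of applicable benchmark experiments. The sketch below shows the simplest normal-theory form, mean minus a one-sided tolerance factor times the sample standard deviation; the keff values and the factor in the test are illustrative only, not the report's actual data or method details:

```python
import statistics

def lbtl(keff_values, tolerance_factor):
    """Single-sided lower-bound tolerance limit over a set of calculated
    benchmark keff results: sample mean minus the one-sided tolerance
    factor (which depends on sample size and confidence/coverage levels)
    times the sample standard deviation."""
    m = statistics.mean(keff_values)
    s = statistics.stdev(keff_values)
    return m - tolerance_factor * s
```

A configuration is then judged acceptable only if its calculated keff, plus any penalties, stays below this limit and its parameters fall inside the range of applicability of the benchmarks.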
A case study and critical assessment in calculating power usage effectiveness for a data centre
International Nuclear Information System (INIS)
Highlights: • A case study PUE calculation is carried out on a data centre by using open source specifications. • The PUE metric does not drive improvements in the efficiencies of IT processes. • The PUE does not fairly represent energy use; an increase in IT load can lead to a decrease in the PUE. • Once a low PUE is achieved, power supply efficiency and IT load have the greatest impact on its value. - Abstract: Metrics commonly used to assess the energy efficiency of data centres are analysed through performing and critiquing a case study calculation of energy efficiency. Specifically, the metric Power Usage Effectiveness (PUE), which has become a de facto standard within the data centre industry, will be assessed. This is achieved by using open source specifications for a data centre in Prineville, Oregon, USA provided by the Open Compute Project launched by the social networking company Facebook. The usefulness of the PUE metric to the IT industry is critically assessed and it is found that whilst it is important for encouraging lower energy consumption in data centres, it does not represent an unambiguous measure of energy efficiency
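The PUE metric critiqued above is a single ratio, and the highlighted ambiguity falls straight out of it: with fixed facility overhead, raising the IT load lowers the PUE even as total energy use rises. A minimal sketch (the kW figures in the test are invented, not the Prineville data):

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power draw divided by
    the power delivered to IT equipment. 1.0 is the theoretical floor."""
    return total_facility_kw / it_kw
```

For example, 200 kW of fixed overhead gives PUE 1.2 at 1000 kW of IT load but only about 1.13 at 1500 kW, so a "better" PUE here coincides with higher total consumption — exactly why the metric does not fairly represent energy use.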
Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.
2015-03-01
The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogenous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.
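The uncollided (primary) part of the radiation field behind the plates in such a benchmark follows the Beer-Lambert attenuation law; the scattered contribution, which the participating models predict differently, is often expressed on top of it via a buildup factor. A minimal sketch of the primary term (the attenuation coefficient in the test is illustrative, not an evaluated value for iron):

```python
import math

def transmitted_fraction(mu_cm, thickness_cm):
    """Narrow-beam (primary-only) transmitted fraction through a slab:
    Beer-Lambert law with linear attenuation coefficient mu [1/cm].
    The total field behind the slab is this primary term multiplied by
    a buildup factor B >= 1 accounting for scattered radiation."""
    return math.exp(-mu_cm * thickness_cm)
```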
2016-06-10
Under the Medicare Shared Savings Program (Shared Savings Program), providers of services and suppliers that participate in an Accountable Care Organization (ACO) continue to receive traditional Medicare fee-for-service (FFS) payments under Parts A and B, but the ACO may be eligible to receive a shared savings payment if it meets specified quality and savings requirements. This final rule addresses changes to the Shared Savings Program, including: Modifications to the program's benchmarking methodology, when resetting (rebasing) the ACO's benchmark for a second or subsequent agreement period, to encourage ACOs' continued investment in care coordination and quality improvement; an alternative participation option to encourage ACOs to enter performance-based risk arrangements earlier in their participation under the program; and policies for reopening of payment determinations to make corrections after financial calculations have been performed and ACO shared savings and shared losses for a performance year have been determined. PMID:27295736
International Nuclear Information System (INIS)
Systems loaded with plutonium in the form of mixed-oxide (MOX) fuel show somewhat different neutronic characteristics compared with those using conventional uranium fuels. In order to maintain adequate safety standards, it is essential to accurately predict the characteristics of MOX-fuelled systems and to further validate both the nuclear data and the computation methods used. A computation benchmark on power distribution within fuel assemblies to compare different techniques used in production codes for fine flux prediction in systems partially loaded with MOX fuel was carried out at an international level. It addressed first the numerical schemes for pin power reconstruction, then investigated the global performance including cross-section data reduction methods. This report provides the detailed results of this second phase of the benchmark. The analysis of the results revealed that basic data still need to be improved, primarily for higher plutonium isotopes and minor actinides. (author)
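The common core of the pin power reconstruction schemes compared in such a benchmark is modulating the smooth homogenized nodal solution with heterogeneous pin-wise form factors from the lattice calculation. A minimal sketch of that modulation step (not any specific participant's scheme; the form factors in the test are invented):

```python
def reconstruct_pin_powers(nodal_power, form_factors):
    """Modulate the smooth homogenized nodal power with heterogeneous
    pin form factors, normalized so the assembly-average power from the
    nodal solution is preserved."""
    avg = sum(form_factors) / len(form_factors)
    return [nodal_power * f / avg for f in form_factors]
```

In MOX/UO2 interfaces the intra-assembly flux shape is steep, so the accuracy of both the nodal shape and the form factors — and, as the benchmark concludes, of the underlying nuclear data — drives the quality of the reconstructed pin powers.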
Energy Technology Data Exchange (ETDEWEB)
Kozmenkov, Y. [Forschungszentrum Dresden-Rossendorf, Institute of Safety Research, P.O.B. 510119, D-01314 Dresden (Germany); Kliem, S. [Forschungszentrum Dresden-Rossendorf, Institute of Safety Research, P.O.B. 510119, D-01314 Dresden (Germany)]. E-mail: S.Kliem@fzd.de; Grundmann, U. [Forschungszentrum Dresden-Rossendorf, Institute of Safety Research, P.O.B. 510119, D-01314 Dresden (Germany); Rohde, U. [Forschungszentrum Dresden-Rossendorf, Institute of Safety Research, P.O.B. 510119, D-01314 Dresden (Germany); Weiss, F.-P. [Forschungszentrum Dresden-Rossendorf, Institute of Safety Research, P.O.B. 510119, D-01314 Dresden (Germany)
2007-09-15
Plant-measured data provided by the OECD/NEA VVER-1000 coolant transient benchmark programme were used to validate the DYN3D/RELAP5 and DYN3D/ATHLET coupled code systems. Phase 1 of the benchmark (V1000CT-1) refers to an experiment that was conducted during the commissioning of Kozloduy NPP Unit 6 in Bulgaria. In this experiment, the fourth main coolant pump was switched on whilst the remaining three were already running under normal operating conditions. The experiment was conducted at 27.5% of the nominal reactor power. The transient is characterized by a rapid increase in the primary coolant flow through the core and, as a consequence, a decrease of the space-dependent core inlet temperature. The control rods were kept in their original positions during the entire transient. The coupled simulations performed with both DYN3D/RELAP5 and DYN3D/ATHLET were based on the same reactor model, including identical main coolant pump characteristics, boundary conditions, the benchmark-specified nuclear data library and nearly identical nodalization schemes. In addition to validation of the coupled code systems against measured data, a code-to-code comparison between simulation results has also been performed to evaluate the respective thermal-hydraulic models of the system codes RELAP5 and ATHLET.
Energy Technology Data Exchange (ETDEWEB)
Easter, M.E.
1985-07-01
The SCALE code system, utilizing the Monte Carlo computer code KENO V.a, was employed to calculate 37 critical experiments. The critical assemblies had 235U enrichments of 5% or less and cover a variety of geometries and materials. Values of keff were calculated with the two cross-section libraries available in SCALE, the 16-energy-group Hansen-Roach library and the 27-energy-group ENDF/B-IV library, and both give good results for the experiments considered in this validation study. It is concluded that the code and cross sections are adequate for low-enriched uranium systems and that reliable criticality safety calculations can be made for such systems provided the limits of validated applicability are not exceeded.
Castillo, Horacio E.; Chamon, Claudio de C.; Fradkin, Eduardo; Goldbart, Paul M.; Mudry, Christopher
1997-01-01
The multifractal scaling exponents are calculated for the critical wave function of a two-dimensional Dirac fermion in the presence of a random magnetic field. It is shown that the problem of calculating the multifractal spectrum maps into the thermodynamics of a static particle in a random potential. The multifractal exponents are simply given in terms of thermodynamic functions, such as free energy and entropy, which are argued to be self-averaging in the thermodynamic limit. These thermody...
Rethinking benchmark dates in international relations
Buzan, Barry; Lawson, George
2014-01-01
International Relations (IR) has an ‘orthodox set’ of benchmark dates by which much of its research and teaching is organized: 1500, 1648, 1919, 1945 and 1989. This article argues that IR scholars need to question the ways in which these orthodox dates serve as internal and external points of reference, think more critically about how benchmark dates are established, and generate a revised set of benchmark dates that better reflects macro-historical international dynamics. The first part of t...
Kvantitativ benchmark - Produktionsvirksomheder
DEFF Research Database (Denmark)
Sørensen, Ole H.; Andersen, Vibeke
Report presenting the results of a quantitative benchmark of the production companies in the VIPS project.
Kwiatkowska, Marta; Norman, Gethin; Parker, David
2012-01-01
We present the PRISM benchmark suite: a collection of probabilistic models and property specifications, designed to facilitate testing, benchmarking and comparisons of probabilistic verification tools and implementations.
Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor
Energy Technology Data Exchange (ETDEWEB)
John Darrell Bess
2009-05-01
A benchmark model of the initial fully-loaded start-up core critical of Japan’s High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. Sensitivity of the results to the graphite impurity content might imply that further evaluation of the graphite content could significantly improve calculated results. Proper characterization of graphite for future Next Generation Nuclear Power reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficient, and reaction-rate distribution measurements. Long term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.
International Nuclear Information System (INIS)
The economics of fuel processing operations involving the handling of large quantities of fissionable materials can be significantly improved by utilizing somewhat detailed reactor physics calculation methods to establish criticality-safe process parameters. These calculation techniques serve as an extension of critical experiment data to systems not specifically covered by such data and may be used to establish the safety of larger mass limits and process equipment dimensions than could be justified by conservative extrapolation of the available experimental data. This paper describes some of the calculation techniques used for this purpose at United Nuclear Corporation. Both hand calculation techniques and computerized techniques are discussed in connection with highly enriched uranium alloy-water systems, poison-wrapped cylinders, fixed poison sheets in unsafe-geometry tanks, and safe arrays of unmoderated uranium materials; and generally useful data obtained with these methods are presented. Comparison of these calculation methods against experimental data, and the assumptions made in applying these methods to criticality safety work, are also discussed. (author)
3-D flux distribution and criticality calculation of TRIGA Mark-II
International Nuclear Information System (INIS)
In this work, a static calculation of the flux distribution of the I.T.U. TRIGA Mark-II reactor has been performed. A three-dimensional r-θ-z representation of the core has been used. In this representation, the flux distribution has been calculated for different configurations on the basis of two-group theory. Thermal-hydraulic and poisoning effects have been ignored. The calculations have been made using the three-dimensional multigroup code CAN. (author)
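For an infinite homogeneous medium, a two-group criticality calculation of this kind reduces to a small matrix eigenvalue problem solvable by power iteration. A minimal sketch with invented cross sections (not TRIGA data, and not the CAN code's actual method):

```python
import numpy as np

# Illustrative two-group macroscopic cross sections (cm^-1); these numbers
# are invented for the sketch and are not TRIGA Mark-II data.
nu_sigma_f = np.array([0.005, 0.12])   # nu*Sigma_f, fast / thermal
sigma_a    = np.array([0.010, 0.10])   # absorption
sigma_s12  = 0.020                     # fast-to-thermal down-scattering

# Infinite-medium two-group balance M*phi = (1/k) F*phi, with all fission
# neutrons assumed born in the fast group.
M = np.array([[sigma_a[0] + sigma_s12, 0.0],
              [-sigma_s12,             sigma_a[1]]])
F = np.array([[nu_sigma_f[0], nu_sigma_f[1]],
              [0.0,           0.0]])

# Power iteration on the fission source.
phi, k = np.ones(2), 1.0
for _ in range(50):
    src = F @ phi
    phi = np.linalg.solve(M, src / k)
    k = k * np.sum(F @ phi) / np.sum(src)

# Analytic check: k_inf = (nuSf1 + nuSf2 * Ss12 / Sa2) / (Sa1 + Ss12)
print(f"k-infinity = {k:.4f}")  # 0.9667
```

A spatial r-θ-z calculation replaces the 2x2 loss matrix with a discretized diffusion or transport operator, but the outer fission-source iteration has the same structure.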
Radiation Detection Computational Benchmark Scenarios
Energy Technology Data Exchange (ETDEWEB)
Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.
2013-09-24
Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessment of the operational performance of radiation detection systems. This can, however, result in large and complex scenarios that are time-consuming to model. A variety of approaches to radiation transport modeling exist, with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine the benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of nine benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to obtain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for
FRIB driver linac vacuum model and benchmarks
Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume
2014-01-01
The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator. (C) 2013 Elsevier Ltd. All rights reserved.
Revaluering benchmarking - A topical theme for the construction industry
DEFF Research Database (Denmark)
Rasmussen, Grane Mikael Gregaard
2011-01-01
Over the past decade, benchmarking has increasingly gained foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly, that what may seem valuable, is actually abstaining researchers and practitioners from studying...... the perception of benchmarking systems as secondary and derivative and instead studying benchmarking as constitutive of social relations and as irredeemably social phenomena. I have attempted to do so in this paper by treating benchmarking using a calculative practice perspective, and describing how...... organizational relations, behaviors and actions. In closing it is briefly considered how to study the calculative practices of benchmarking....
Energy Technology Data Exchange (ETDEWEB)
Chrysos, Michael, E-mail: michel.chrysos@univ-angers.fr; Rachet, Florent [LUNAM Université, Université d’Angers, CNRS UMR 6200, Laboratoire MOLTECH-Anjou, 2 Bd Lavoisier, 49045 Angers (France); Dixneuf, Sophie [Centre du Commissariat à l’Énergie Atomique de Grenoble, Laboratoire CEA-bioMérieux, Bât 40.20, 17 rue des Martyrs, 38054 Grenoble (France)
2015-07-14
This is the long-overdue answer to the discrepancies observed between theory and experiment in Ar₂ regarding both the isotropic Raman spectrum and the second refractivity virial coefficient, B_R [Gaye et al., Phys. Rev. A 55, 3484 (1997)]. At the origin of this progress is the advent (after 1997) of advanced computational methods for weakly interacting neutral species at close separations. Here, we report agreement between the previously reported Raman measurements and quantum lineshapes now computed by employing large-scale CCSD or smartly constructed MP2 induced-polarizability data. Using these measurements as a benchmark tool, we assess the performance of various other ab initio computed data for the mean polarizability α, and we show that excellent agreement with the most recently measured value of B_R is reached. We propose an even more refined model for α, which is the solution of the inverse-scattering problem and whose lineshape matches the measured spectrum exactly over the entire frequency-shift range probed.
Calculation and Mapping of Critical Thresholds in Europe: Status Report 1997
Posch M; Hettelingh J-P; Smet PAM de; Downing RJ; MNV
1997-01-01
The fourth report of the Coordination Center for Effects (CCE) describes critical threshold methodologies and results used for the scientific support of the second nitrogen oxide protocol under the Convention on Long-Range Transboundary Air Pollution of the UN Economic Commission for Europe.
DEFF Research Database (Denmark)
Tabatabaeipour, Mojtaba; Blanke, Mogens
2014-01-01
of a system. It must be guaranteed that the trajectory of a system subject to fault remains in the region of attraction (ROA) of the post-fault system during this time. This paper proposes a new algorithm to compute the critical fault recovery time for nonlinear systems with polynomial vector fields using sum...
International Nuclear Information System (INIS)
The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in
Benchmarking concentrating photovoltaic systems
Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo
2010-08-01
Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has been proposed and pursued. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, one way to estimate the cost-performance of a complete solar energy system is computer-aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB, whereas the Advanced Systems Analysis Program (ASAP) is used for ray-tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely an advanced source model including time and location dependence, and an advanced optical-system analysis of various optical designs to obtain an evaluation of the figure of merit. An important figure of merit, the energy yield for a given photovoltaic system at a geographical position over a specific period, can thus be calculated.
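At its core, the energy-yield figure of merit is an integral of the system's instantaneous output over the evaluation period. A minimal sketch with invented irradiance samples and a hypothetical constant combined efficiency (a real tool would make the efficiency depend on irradiance, spectrum, and temperature):

```python
# Hourly direct-normal irradiance samples over one day (W/m^2); illustrative
# values, not measurements for any real site.
dni = [0, 0, 120, 400, 650, 820, 900, 870, 700, 450, 150, 0]

aperture_area = 2.0   # m^2, hypothetical concentrator aperture
efficiency = 0.25     # assumed constant optical x cell efficiency
dt_hours = 1.0        # sampling interval

# Energy yield over the period: sum of instantaneous power times timestep.
energy_wh = sum(g * aperture_area * efficiency * dt_hours for g in dni)
print(f"energy yield: {energy_wh / 1000:.2f} kWh")  # 2.53 kWh
```

Extending the sum over a full year of site-specific irradiance data gives the annual yield used to compare concentrator concepts on a cost-per-kWh basis.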
The graphics calculator in mathematics education: A critical review of recent research
Penglase, Marina; Arnold, Stephen
1996-04-01
The graphics calculator, sometimes referred to as the "super calculator," has sparked great interest among mathematics educators. Considered by many to be a tool which has the potential to revolutionise mathematics education, a significant amount of research has been conducted into its effectiveness as a tool for instruction and learning within precalculus and calculus courses, specifically in the study of functions, graphing and modelling. Some results suggest that these devices (a) can facilitate the learning of functions and graphing concepts and the development of spatial visualisation skills; (b) promote mathematical investigation and exploration; and (c) encourage a shift in emphasis from algebraic manipulation and proof to graphical investigation and examination of the relationship between graphical, algebraic and geometric representations. Other studies, however, indicate that there is still a need for manipulative techniques in the learning of function and graphing concepts, that the use of graphics calculators may not facilitate the learning of particular precalculus topics, and that some "de-skilling" may occur, especially among males. It is the contention of this paper, however, that much of the research in this new and important field fails to provide clear guidance or even to inform debate in adequate ways regarding the role of graphics calculators in mathematics teaching and learning. By failing to distinguish the role of the tool from that of the instructional process, many studies reviewed could be more appropriately classified as "program evaluations" rather than as research on the graphics calculator per se. Further, claims regarding the effectiveness of the graphics calculator as a tool for learning frequently fail to recognise that judgments of effectiveness result directly from existing assumptions regarding both assessment practice and student "achievement."
International Nuclear Information System (INIS)
The IAEA has facilitated an extensive programme that addresses the technical development of advanced gas cooled reactor technology. Included in this programme is the coordinated research project (CRP) on Evaluation of High Temperature Gas Cooled Reactor (HTGR) Performance, which is the focus of this TECDOC. This CRP was established to foster the sharing of research and associated technical information among participating Member States in the ongoing development of the HTGR as a future source of nuclear energy. Within it, computer codes and models were verified through actual test results from operating reactor facilities. The work carried out in the CRP involved both computational and experimental analysis at various facilities in IAEA Member States with a view to verifying computer codes and methods in particular, and to evaluating the performance of HTGRs in general. The IAEA is grateful to China, the Russian Federation and South Africa for providing their facilities and benchmark programmes in support of this CRP.
CIRCEE: a software for isodose calculations of criticality risk in evolutional environment
International Nuclear Information System (INIS)
The authors briefly present the CIRCEE software which has been developed to assess doses, dose rates and ambient dose equivalent rates induced by neutrons and gamma radiations produced by fission and secondary gamma radiations through 3D public works in a nuclear installation almost in real time. It is meant to be used in case of criticality accident. They present the methodology, the scope and domain of application, the software operating mode
Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity
International Nuclear Information System (INIS)
Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BCw; 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BCw base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL. - A comprehensive uncertainty analysis, with advanced techniques and full list and full value ranges of all individual parameters, was used to examine a simple mass balance model and address questions of error partition and uncertainty reduction in critical acid load estimates that were not fully answered by previous studies
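A Monte Carlo uncertainty analysis of this kind can be sketched with a drastically simplified two-parameter mass balance; the parameter ranges below are illustrative uniforms, not the study's calibrated distributions for its 17 parameters:

```python
import random
import statistics

random.seed(1)  # reproducible sampling

def critical_acid_load(bc_weathering, anc_leaching):
    # Drastically simplified mass balance: the critical load is the base
    # cation supply from weathering minus the ANC leached from the soil.
    return bc_weathering - anc_leaching

# Illustrative uniform parameter ranges (keq ha^-1 yr^-1).
samples = []
for _ in range(10_000):
    bcw = random.uniform(0.5, 2.0)    # base cation weathering rate
    anc = random.uniform(-1.0, 0.0)   # ANC leaching (negative = net loss)
    samples.append(critical_acid_load(bcw, anc))

mean = statistics.fmean(samples)
sd = statistics.stdev(samples)
print(f"critical load: {mean:.2f} +/- {sd:.2f} keq/ha/yr")

# Variance partition: for independent uniform inputs in an additive model,
# each input contributes its own variance (range^2 / 12) to the output.
var_bcw, var_anc = 1.5**2 / 12, 1.0**2 / 12
print(f"BCw share of variance: {100 * var_bcw / (var_bcw + var_anc):.0f}%")
```

The full analysis works the same way, except that the model is nonlinear in its 17 parameters, so the variance contributions must be estimated from the samples (e.g. by correlation or variance decomposition) rather than read off analytically.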
Energy Technology Data Exchange (ETDEWEB)
Smith, L.A.; Renier, J.P.
1994-06-01
A module in the VENTURE reactor analysis code system, CTRLPOS, is developed to position control rods and perform control rod position criticality searches. The module is variably dimensioned so that calculations can be performed with any number of control rod banks each having any number of control rods. CTRLPOS can also calculate control rod worth curves for a single control rod or a bank of control rods. Control rod depletion can be calculated to provide radiation source terms. These radiation source terms can be used to predict radiation doses to personnel and estimate the shielding and long-term storage requirements for spent control rods. All of these operations are completely automated. The numerous features of the module are discussed in detail. The necessary input data for the CTRLPOS module is explained. Several sample problems are presented to show the flexibility of the module. The results presented with the sample problems show that the CTRLPOS module is a powerful tool which allows a wide variety of calculations to be easily performed.
International Nuclear Information System (INIS)
A module in the VENTURE reactor analysis code system, CTRLPOS, is developed to position control rods and perform control rod position criticality searches. The module is variably dimensioned so that calculations can be performed with any number of control rod banks each having any number of control rods. CTRLPOS can also calculate control rod worth curves for a single control rod or a bank of control rods. Control rod depletion can be calculated to provide radiation source terms. These radiation source terms can be used to predict radiation doses to personnel and estimate the shielding and long-term storage requirements for spent control rods. All of these operations are completely automated. The numerous features of the module are discussed in detail. The necessary input data for the CTRLPOS module is explained. Several sample problems are presented to show the flexibility of the module. The results presented with the sample problems show that the CTRLPOS module is a powerful tool which allows a wide variety of calculations to be easily performed
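The rod-position criticality search that CTRLPOS automates can be illustrated as a root-finding problem on the rod-position-versus-keff curve. A minimal bisection sketch on a hypothetical linear rod-worth model (invented for illustration, not CTRLPOS output):

```python
def criticality_search(k_of_depth, lo, hi, target=1.0, tol=1e-6):
    """Bisection search for the rod insertion depth where k_eff crosses
    `target`, assuming k_eff decreases monotonically with insertion."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if k_of_depth(mid) > target:
            lo = mid   # still supercritical: insert the rod further
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical linear rod-worth model: k_eff as a function of insertion
# depth z in cm.  A real search would call a full core calculation here.
k_model = lambda z: 1.05 - 0.0008 * z

z_crit = criticality_search(k_model, 0.0, 100.0)
print(f"critical rod position: {z_crit:.2f} cm")  # 62.50 cm
```

In a production code, each evaluation of `k_of_depth` is an expensive neutronics calculation, so the search strategy and convergence tolerance directly control the cost of the automated search.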
Improved S(α, β) tables for TRIGA criticality and reactivity feedback calculations
International Nuclear Information System (INIS)
The accurate evaluation of the reactivity feedback in TRIGA cores gained additional importance within the TRADE project. In this sub-critical system coupled with an external source, the operational temperatures could reach 150 °C and more. The suggested reactivity feedback curve provided by General Atomics, which is sufficient for core shutdown analysis, appears to be inaccurate in comparison with new experimental data, in particular in the range of 80-140 °C. The hydrogen bound in zirconium within the TRIGA fuel is the main cause of the negative-reactivity safety mechanism. Immediate neutron up-scattering by hydrogen as a result of a fuel temperature increase shifts the neutron spectrum to an energy range of lower fission cross-section. The up-scattering of neutrons is governed by the scattering kernel, which is a measure of the probability of a neutron being scattered from its incident energy to another and from its original angular direction to a new one after a collision. These probabilities, also known as the scattering law, are presented in probability tables based on the so-called S(α,β) formalism, which must conserve energy and momentum. For bound isotopes like hydrogen in zirconium, the structure of the lattice and the intermolecular forces strongly influence the scattering probability. The current study looks at various models and parameters concerning the generation of the molecular-structure-dependent S(α,β) tables and their impact on the reactivity feedback temperature coefficient of the TRIGA fuel. It is shown that different computational handling of the models for the internal forces within the ZrH molecule, in particular for the so-called phonon spectrum description, leads to variation of the reactivity feedback as well as of the criticality value. The influence of adopting a new scattering kernel on the fuel-pin criticality value is also presented for the first eight s-wave resonances of 238U within the TRIGA fuel pin. (authors)
Benchmarking of human resources management
Directory of Open Access Journals (Sweden)
David M. Akinnusi
2008-12-01
Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.
International Nuclear Information System (INIS)
The objective of this calculation is to perform additional degraded-mode criticality evaluations of the Department of Energy's (DOE) Fast Flux Test Facility (FFTF) Spent Nuclear Fuel (SNF) codisposed in a 5-Defense High-Level Waste (5-DHLW) Waste Package (WP). The scope of this calculation is limited to the most reactive degraded configurations of the codisposal WP with an almost intact Ident-69 container (breached and flooded but otherwise non-degraded) containing intact FFTF SNF pins. The configurations have been identified in a previous analysis (CRWMS M&O 1999a), and the present evaluations include additional relevant information that was left out of the original calculations. The additional information describes the exact distribution of fissile material in each container (DOE 2002a). The effects of the changes that have been included in the baseline design of the codisposal WP (CRWMS M&O 2000) are also investigated. The calculation determines the effective neutron multiplication factor (keff) for selected degraded-mode internal configurations of the codisposal waste package. These calculations will support the demonstration of the technical viability of the design solution adopted for disposing of MOX (FFTF) spent nuclear fuel in the potential repository. This calculation is subject to the Quality Assurance Requirements and Description (QARD) (DOE 2002b) per the activity evaluation under work package number P6212310M2 in the technical work plan TWP-MGR-MD-0000101 (BSC 2002)
DEFF Research Database (Denmark)
Borri, Paola; Scaffetti, Stefano; Mørk, Jesper;
1999-01-01
The nonlinear gain response of InGaAsP bulk optical amplifiers under ultrafast optical excitation at 1.53 μm is investigated. In particular, the dependence of the gain saturation energy on the pulse duration is measured in the range of pulse durations from 150 fs to 11 ps, for different bias currents...... and lengths of the amplifier. By comparison with a theoretical model, a critical pulsewidth is inferred below which nonlinear carrier dynamics like carrier heating and spectral hole burning dominate the gain saturation....
BCG: a computer code for calculating neutron spectra and criticality in cells of fast reactors
International Nuclear Information System (INIS)
The BCG code for determining the space- and energy-dependent neutron flux distribution and criticality of fast-reactor cylindrical cells is discussed. The code solves the one-dimensional neutron transport equation together with interface-current relations at each energy point of a unionized energy grid prepared for the cell and at an arbitrary number of spatial zones. While the spatial resolution is user-specified, the energy dependence of the flux distribution is resolved according to the degree of variation in the reconstructed total microscopic cross sections of the atomic species in the cell. Results for a simplified fuel cell illustrate the high resolution and accuracy that can be obtained with the code. (author)
López Fontán, J L; Costa, J; Ruso, J M; Prieto, G; Sarmiento, F
2004-02-01
The application of a statistical method, the local polynomial regression method, (LPRM), based on a nonparametric estimation of the regression function to determine the critical micelle concentration (cmc) is presented. The method is extremely flexible because it does not impose any parametric model on the subjacent structure of the data but rather allows the data to speak for themselves. Good concordance of cmc values with those obtained by other methods was found for systems in which the variation of a measured physical property with concentration showed an abrupt change. When this variation was slow, discrepancies between the values obtained by LPRM and others methods were found.
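The idea behind the LPRM approach — fit locally and let the data reveal the break — can be sketched as follows. The tricube kernel, bandwidth, and synthetic conductivity data below are illustrative choices, not the authors' actual implementation:

```python
import numpy as np

# Synthetic conductivity-vs-concentration data with a slope break at the
# cmc (placed at 8 mM here); illustrative numbers, not the paper's data.
conc = np.linspace(1.0, 15.0, 57)
prop = np.where(conc < 8.0, 10.0 * conc, 80.0 + 4.0 * (conc - 8.0))

def local_slope(x, y, x0, bandwidth=1.5):
    """Slope of a locally weighted (tricube kernel) linear fit around x0."""
    u = np.abs(x - x0) / bandwidth
    w = np.where(u < 1.0, (1.0 - u**3) ** 3, 0.0)
    return np.polyfit(x, y, 1, w=np.sqrt(w))[0]

slopes = np.array([local_slope(conc, prop, x0) for x0 in conc])
# The cmc is taken where the local slope changes most abruptly.
cmc = conc[np.argmin(np.gradient(slopes, conc))]
print(f"estimated cmc = {cmc:.2f} mM")
```

With these clean synthetic data the estimate lands at the built-in break near 8 mM; with noisy measurements, or when the property varies only slowly near the transition (the case where the abstract notes discrepancies), the bandwidth choice becomes critical.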
Energy Technology Data Exchange (ETDEWEB)
Lopez Fontan, J.L.; Costa, J.; Ruso, J.M.; Prieto, G. [Dept. of Applied Physics, Univ. of Santiago de Compostela, Santiago de Compostela (Spain); Sarmiento, F. [Dept. of Mathematics, Faculty of Informatics, Univ. of A Coruna, A Coruna (Spain)
2004-02-01
The application of a statistical method, the local polynomial regression method, (LPRM), based on a nonparametric estimation of the regression function to determine the critical micelle concentration (cmc) is presented. The method is extremely flexible because it does not impose any parametric model on the subjacent structure of the data but rather allows the data to speak for themselves. Good concordance of cmc values with those obtained by other methods was found for systems in which the variation of a measured physical property with concentration showed an abrupt change. When this variation was slow, discrepancies between the values obtained by LPRM and others methods were found. (orig.)
Hyun-Kyung Chung; Per Jönsson; Alexander Kramida
2013-01-01
Atomic structure and transition probabilities are fundamental physical data required in many fields of science and technology. Atomic physics codes are freely available to other community users to generate atomic data for their interest, but the quality of these data is rarely verified. This special issue addresses estimation of uncertainties in atomic structure and transition probability calculations, and discusses methods and strategies to assess and ensure the quality of theoretical atomic...
Energy Technology Data Exchange (ETDEWEB)
Smith, L.A.; Gallmeier, F.X. [Oak Ridge Institute for Science and Energy, TN (United States); Gehin, J.C. [Oak Ridge National Lab., TN (United States)] [and others]
1995-05-01
The FOEHN critical experiment was analyzed to validate the use of multigroup cross sections and Oak Ridge National Laboratory neutronics computer codes in the design of the Advanced Neutron Source. The ANSL-V 99-group master cross section library was used for all the calculations. Three different critical configurations were evaluated using the multigroup KENO Monte Carlo transport code, the multigroup DORT discrete ordinates transport code, and the multigroup diffusion theory code VENTURE. The simple configuration consists of only the fuel and control elements with the heavy water reflector. The intermediate configuration includes boron endplates at the upper and lower edges of the fuel element. The complex configuration includes both the boron endplates and components in the reflector. Cross sections were processed using modules from the AMPX system. Both 99-group and 20-group cross sections were created and used in two-dimensional models of the FOEHN experiment. KENO calculations were performed using both 99-group and 20-group cross sections. The DORT and VENTURE calculations were performed using 20-group cross sections. Because the simple and intermediate configurations are azimuthally symmetric, these configurations can be explicitly modeled in R-Z geometry. Since the reflector components cannot be modeled explicitly using the current versions of these codes, three reflector component homogenization schemes were developed and evaluated for the complex configuration. Power density distributions were calculated with KENO using 99-group cross sections and with DORT and VENTURE using 20-group cross sections. The average differences between the measured values and the values calculated with the different computer codes range from 2.45 to 5.74%. The maximum differences between the measured and calculated thermal flux values for the simple and intermediate configurations are ~13%, while the average differences are < 8%.
Institute of Scientific and Technical Information of China (English)
陈双涛; 赵红利; 马斌; 侯予
2012-01-01
A modularized code based on the Finite Element QZ (FEQZ) method is developed to better estimate the critical speed and to provide a more convenient method of rotor-dynamic stability analysis for a gas-bearing high-speed turboexpander rotor system, with the actual structure and application conditions of a cryogenic turboexpander. This code is then validated against the experimental data of a gas-bearing turboexpander with a rotor diameter of 25 mm and a rated speed of 106,400 rpm. With this code, four rotors with different structures, available to the turboexpander, are parametrically analyzed in terms of available speed range, vibration modes, and logarithmic attenuation rate. The results suggest that the rotor carrying two thrust collars exhibits better performance under the design conditions.
Benchmark Evaluation of the Neutron Radiography (NRAD) Reactor Upgraded LEU-Fueled Core
Energy Technology Data Exchange (ETDEWEB)
John D. Bess
2001-09-01
Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh-core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. The final upgraded core configuration with 64 fuel elements has been completed. Evaluated benchmark measurement data include criticality, control-rod worth measurements, shutdown margin, and excess reactivity. Dominant uncertainties in keff include the manganese content and impurities contained within the stainless steel cladding of the fuel and the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 nuclear data are approximately 1.4% greater than the benchmark model eigenvalue, supporting contemporary research regarding errors in the cross-section data necessary to simulate TRIGA-type reactors. Uncertainties in reactivity-effects measurements are estimated to be ~10%, with calculations in agreement with benchmark experiment values within 2σ. The completed benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Experiments (IRPhEP Handbook). Evaluations of the NRAD LEU cores containing 56, 60, and 62 fuel elements have also been completed, including analysis of their respective reactivity-effects measurements; they are also available in the IRPhEP Handbook but will not be included in this summary paper.
Jacob, D.; Palacios, J. J.
2011-01-01
We study the performance of two different electrode models in quantum transport calculations based on density functional theory: parametrized Bethe lattices and quasi-one-dimensional wires or nanowires. A detailed account of implementation details in both cases is given. From the systematic study of nanocontacts made of representative metallic elements, we conclude that the parametrized electrode models represent an excellent compromise between computational cost and electronic-structure definition as long as the aim is to compare with experiments in which the precise atomic structure of the electrodes is not relevant or not defined with precision. The results obtained using parametrized Bethe lattices are essentially similar to those obtained with quasi-one-dimensional electrodes of sufficiently large cross-section, adding a natural smearing to the transmission curves that mimics the true nature of polycrystalline electrodes. The latter are more demanding from the computational point of view, but present the advantage of expanding the range of applicability of transport calculations to situations where the electrodes have a well-defined atomic structure, as is the case for carbon nanotubes, graphene nanoribbons, or semiconducting nanowires. All the analysis is done with the help of codes developed by the authors, which can be found in the quantum transport toolbox ALACANT and are publicly available.
Zhang, Changzhe; Bu, Yuxiang
2016-09-14
Diffuse functions have been proved to be especially crucial for the accurate characterization of excess electrons which are usually bound weakly in intermolecular zones far away from the nuclei. To examine the effects of diffuse functions on the nature of the cavity-shaped excess electrons in water cluster surroundings, both the HOMO and LUMO distributions, vertical detachment energies (VDEs) and visible absorption spectra of two selected (H2O)24(-) isomers are investigated in the present work. Two main types of diffuse functions are considered in calculations including the Pople-style atom-centered diffuse functions and the ghost-atom-based floating diffuse functions. It is found that augmentation of atom-centered diffuse functions contributes to a better description of the HOMO (corresponding to the VDE convergence), in agreement with previous studies, but also leads to unreasonable diffuse characters of the LUMO with significant red-shifts in the visible spectra, which is against the conventional point of view that the more the diffuse functions, the better the results. The issue of designing extra floating functions for excess electrons has also been systematically discussed, which indicates that the floating diffuse functions are necessary not only for reducing the computational cost but also for improving both the HOMO and LUMO accuracy. Thus, the basis sets with a combination of partial atom-centered diffuse functions and floating diffuse functions are recommended for a reliable description of the weakly bound electrons. This work presents an efficient way for characterizing the electronic properties of weakly bound electrons accurately by balancing the addition of atom-centered diffuse functions and floating diffuse functions and also by balancing the computational cost and accuracy of the calculated results, and thus is very useful in the relevant calculations of various solvated electron systems and weakly bound anionic systems. PMID:27522987
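The vertical detachment energy discussed above is, by definition, a simple energy difference. The sketch below illustrates this convention with hypothetical total energies; the numbers are illustrative, not values from this study:

```python
HARTREE_TO_EV = 27.211386  # standard hartree-to-eV conversion factor

def vertical_detachment_energy(e_anion_hartree, e_neutral_hartree):
    """VDE = E(neutral, at the anion geometry) - E(anion), in eV.

    A positive VDE means the excess electron is vertically bound; the
    convergence of this quantity as diffuse functions are added is the
    HOMO-quality criterion discussed in the abstract.
    """
    return (e_neutral_hartree - e_anion_hartree) * HARTREE_TO_EV

# Hypothetical total energies (hartree) for a water-cluster anion and
# its neutral counterpart at the same geometry:
vde = vertical_detachment_energy(-100.010, -100.000)
```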
U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in...
47 CFR 69.108 - Transport rate benchmark.
2010-10-01
... 47 Telecommunication 3 2010-10-01 2010-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...
Kornobis, Karina; Wong, Bryan M; Lodowski, Piotr; Jaworska, Maria; Andruniów, Tadeusz; Rudd, Kenneth; Kozlowski, Pawel M; 10.1021/jp110914y
2011-01-01
Time-dependent density functional theory (TD-DFT) and correlated ab initio methods have been applied to the electronically excited states of vitamin B12 (cyanocobalamin or CNCbl). Different experimental techniques have been used to probe the excited states of CNCbl, revealing many issues that remain poorly understood from an electronic structure point of view. Due to its efficient scaling with size, TD-DFT emerges as one of the most practical tools that can be used to predict the electronic properties of these fairly complex molecules. However, the description of excited states is strongly dependent on the type of functional used in the calculations. In the present contribution, the choice of a proper functional for vitamin B12 was evaluated in terms of its agreement with both experimental results and correlated ab initio calculations. Three different functionals, i.e. B3LYP, BP86, and LC-BLYP, were tested. In addition, the effect of relative contributions of DFT and HF to the exchange-correlation functional ...
Directory of Open Access Journals (Sweden)
R. Fabík
2009-10-01
This paper presents a new model for calculating the critical strain for the initiation of dynamic recrystallization (DRX). The new model reflects the forming history in the deformation zone during rolling. In this region of restricted deformation, the strain-rate curve for the surface of the strip exhibits two peaks; these two peaks are the reason why, during strip rolling, the onset of DRX near the surface of the rolled part occurs later than theory predicts. The model was implemented in a program for FEM simulation of forming processes, and a comparison was drawn between the physical experiment and the mathematical model.
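A common empirical form for such critical-strain models ties the DRX onset strain to the Zener-Hollomon parameter. The sketch below uses this generic form with illustrative constants for a C-Mn steel; it is not the paper's specific model:

```python
import math

def zener_hollomon(strain_rate, temp_k, q_act=312000.0, r_gas=8.314):
    """Zener-Hollomon parameter Z = strain_rate * exp(Q / (R * T)).
    The activation energy Q is an illustrative value, in J/mol."""
    return strain_rate * math.exp(q_act / (r_gas * temp_k))

def critical_strain(strain_rate, temp_k, a=8.0e-4, m=0.15, k=0.8):
    """Critical strain for DRX onset in the common empirical form
    eps_c = k * a * Z**m; a, m, k are material-fit constants (the
    values here are assumptions, not those of the paper's model)."""
    return k * a * zener_hollomon(strain_rate, temp_k) ** m

# Faster deformation or lower temperature delays DRX onset (larger eps_c):
eps_hot = critical_strain(1.0, 1373.0)
eps_cold = critical_strain(1.0, 1173.0)
```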
Geothermal Heat Pump Benchmarking Report
Energy Technology Data Exchange (ETDEWEB)
None
1997-01-17
A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry; however, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management commitment to marketing; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.
Criticality calculation in TRIGA MARK II PUSPATI Reactor using Monte Carlo code
International Nuclear Information System (INIS)
A Monte Carlo simulation of the Malaysian nuclear reactor has been performed using the MCNP Version 5 code. The purpose of the work is the determination of the multiplication factor (keff) for the TRIGA Mark II research reactor in Malaysia by the Monte Carlo method. keff was calculated for two cases, control rods fully withdrawn and fully inserted, using a complete model of the TRIGA Mark II PUSPATI Reactor (RTP). The RTP core was modeled as close as possible to the real core. With the control fuel rods fully inserted, MCNP5 gave keff = 0.98370±0.00054, indicating that the RTP was in a subcritical condition; with the control fuel rods fully withdrawn, keff = 1.10773±0.00083, indicating a supercritical condition. (Author)
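Interpreting such keff estimates against their statistical uncertainty can be sketched as follows (the three-sigma acceptance band is an illustrative convention, not a criterion stated in the abstract):

```python
def classify_keff(keff, sigma, n_sigma=3):
    """Interpret a Monte Carlo keff estimate and its standard deviation.

    A configuration is called sub- or supercritical only when the
    estimate differs from 1.0 by more than n_sigma standard deviations;
    otherwise it is statistically consistent with critical.
    """
    if keff + n_sigma * sigma < 1.0:
        return "subcritical"
    if keff - n_sigma * sigma > 1.0:
        return "supercritical"
    return "consistent with critical"

# Values reported for the RTP core (rods fully inserted / withdrawn):
print(classify_keff(0.98370, 0.00054))  # subcritical
print(classify_keff(1.10773, 0.00083))  # supercritical
```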
Energy Technology Data Exchange (ETDEWEB)
Tore, C.; Ortego, P.; Rodriguez Rivada, A.
2014-07-01
The aim of this paper is the comparison of calculated and measured decay heat for material samples that were irradiated at the Fusion Neutron Source of JAERI in Japan, which produces 14 MeV neutrons via the D-T reaction. In the International Thermonuclear Experimental Reactor (ITER), neutron activation of the structural material will result in a source of heat after shutdown of the reactor. Estimating decay heat with qualified codes and nuclear data is an important element of the safety analyses of fusion reactors against loss-of-coolant accidents. When a loss-of-coolant and/or loss-of-flow accident happens, plasma-facing components are heated by decay heat. If the temperature of the components exceeds the allowable temperature, the accident could progress and compromise the integrity of ITER. Decay heat prediction uncertainties of less than 15% are strongly requested by the ITER designers. Additionally, accurate decay heat prediction is required for making reasonable shutdown scenarios for ITER. (Author)
Benchmarking semantic web technology
García-Castro, R
2009-01-01
This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:
Benchmarking in University Toolbox
Directory of Open Access Journals (Sweden)
Katarzyna Kuźmicz
2015-06-01
In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool in the search for a point of reference necessary to assess an institution's competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study indicates the premises of using benchmarking in HEIs and contains a detailed examination of the types, approaches, and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support, and continuity). The presented examples were chosen to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature, and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived from the conducted analysis.
Energy Technology Data Exchange (ETDEWEB)
Martinez-Gonzalez, Jesus S. [Univ. Politecnica de Madrid (Spain); Ade, Brian J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bowman, Stephen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gauld, Ian C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ilas, Germina [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Marshall, William BJ J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-01-01
Simulation of boiling water reactor (BWR) fuel depletion poses a challenge for nuclide inventory validation and nuclear criticality safety analyses. This challenge is due to the complex operating conditions and assembly design heterogeneities that characterize these nuclear systems. Fuel depletion simulations and in-cask criticality calculations are affected by (1) completeness of design information, (2) variability of operating conditions needed for modeling purposes, and (3) possible modeling choices. These effects must be identified, quantified, and ranked according to their significance. This paper presents an investigation of BWR fuel depletion using a complete set of actual design specifications and detailed operational data available for five operating cycles of the Swedish BWR Forsmark 3 reactor. The data includes detailed axial profiles of power, burnup, and void fraction in a very fine temporal mesh for a GE14 (10×10) fuel assembly. The specifications of this case can be used to assess the impacts of different modeling choices on inventory prediction and in-cask criticality, specifically regarding the key parameters that drive inventory and reactivity throughout fuel burnup. This study focused on the effects of the fidelity with which power history and void fraction distributions are modeled. The corresponding sensitivity of the reactivity in storage configurations is assessed, and the impacts of modeling choices on decay heat and inventory are addressed.
MCNP5 modeling of the IPR-R1 TRIGA reactor for criticality calculation and reactivity determination
Energy Technology Data Exchange (ETDEWEB)
Silva, Clarysson A.M. da, E-mail: clarysson_silva@yahoo.com.br [Departamento de Engenharia Nuclear - Escola de Engenharia, Universidade Federal de Minas Gerais, Av. Presidente Antonio Carlos, 6627, 31270-901 Campus Pampulha - Belo Horizonte (Brazil); Pereira, Claubia, E-mail: claubia@nuclear.ufmg.br [Departamento de Engenharia Nuclear - Escola de Engenharia, Universidade Federal de Minas Gerais, Av. Presidente Antonio Carlos, 6627, 31270-901 Campus Pampulha - Belo Horizonte (Brazil); Guerra, Bruno T., E-mail: brunoteixeiraguerra@yahoo.com.br [Departamento de Engenharia Nuclear - Escola de Engenharia, Universidade Federal de Minas Gerais, Av. Presidente Antonio Carlos, 6627, 31270-901 Campus Pampulha - Belo Horizonte (Brazil); Veloso, Maria Auxiliadora F., E-mail: dora@nuclear.ufmg.br [Departamento de Engenharia Nuclear - Escola de Engenharia, Universidade Federal de Minas Gerais, Av. Presidente Antonio Carlos, 6627, 31270-901 Campus Pampulha - Belo Horizonte (Brazil); Costa, Antonella L., E-mail: lombardicosta@gmail.com [Departamento de Engenharia Nuclear - Escola de Engenharia, Universidade Federal de Minas Gerais, Av. Presidente Antonio Carlos, 6627, 31270-901 Campus Pampulha - Belo Horizonte (Brazil); Dalle, Hugo M., E-mail: dallehm@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear, Comissao Nacional de Energia Nuclear, Campus da UFMG - Av. Presidente Antonio Carlos, 6627, 31270-901, P.O. Box: 941, Belo Horizonte, MG (Brazil)
2011-12-15
Highlights: ► Two models of the IPR-R1 TRIGA using the MCNP5 code were simulated. ► keff values were obtained in several different situations of reactor operation. ► The first model analyzes the criticality and the neutronic flux over the reactor. ► The second model includes radial and axial neutron flux evaluation under different operation conditions. ► The results present good agreement with the experimental data. - Abstract: The IPR-R1 TRIGA is a research nuclear reactor managed by and located at the Nuclear Technology Development Center (CDTN), a research institute of the Brazilian Nuclear Energy Commission (CNEN). It is mainly used for radioisotope production, scientific experiments, training of nuclear engineers for research and nuclear power plant reactor operation, experiments with materials and minerals, and neutron activation analysis. In this work, criticality calculations and reactivity changes are presented and discussed using two models of the IPR-R1 TRIGA in the MCNP5 code. The first model (Model 1) analyzes the criticality over the reactor. The second model (Model 2) adds the possibility of radial and axial neutron flux evaluation under different operation conditions. The calculated results are compared with experimental data in different situations. For the two models, the standard deviation and relative error presented values of around 4.9 × 10⁻⁴. Both models present good agreement with the experimental data. The goal is to validate models that could be used to determine neutron flux profiles to optimize irradiation conditions, as well as to study reactivity insertion experiments and to evaluate the fuel composition.
Benefits of the delta K of depletion benchmarks for burnup credit validation
International Nuclear Information System (INIS)
Pressurized Water Reactor (PWR) burnup credit validation is demonstrated using the benchmarks for quantifying fuel reactivity decrements, published as 'Benchmarks for Quantifying Fuel Reactivity Depletion Uncertainty,' EPRI Report 1022909 (August 2011). This demonstration uses the depletion module TRITON available in the SCALE 6.1 code system, followed by criticality calculations using KENO-Va. The difference between the predicted depletion reactivity and the benchmark's depletion reactivity is a bias for the criticality calculations, and the uncertainty in the benchmarks is the depletion reactivity uncertainty. This depletion bias and uncertainty are used with the bias and uncertainty from fresh UO2 critical experiments to determine the criticality safety limits on the neutron multiplication factor, keff. The analysis shows that SCALE 6.1 with the ENDF/B-VII 238-group cross section library supports the use of a depletion bias of only 0.0015 in delta k if cooling is ignored and 0.0025 if cooling is credited. The uncertainty in the depletion bias is 0.0064. Reliance on the ENDF/B-V cross section library produces much larger disagreement with the benchmarks. The analysis covers numerous combinations of depletion and criticality options. In all cases, the historical uncertainty of 5% of the delta k of depletion (the 'Kopp memo') was shown to be conservative for fuel with more than 30 GWD/MTU burnup. Since this historically assumed uncertainty is not a function of burnup, the Kopp memo's recommended bias and uncertainty may be exceeded at low burnups, but their absolute magnitude is small. (authors)
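The arithmetic behind these margins can be sketched as follows. The fresh-fuel limit and the simple linear combination of terms below are illustrative assumptions; an actual criticality safety evaluation combines biases and uncertainties per its approved methodology:

```python
def max_allowed_keff(usl_fresh, depletion_bias, depletion_unc):
    """Illustrative linear reduction of a fresh-fuel upper subcritical
    limit by a depletion reactivity bias and its uncertainty."""
    return usl_fresh - depletion_bias - depletion_unc

def kopp_uncertainty(delta_k_depletion, fraction=0.05):
    """Historical 'Kopp memo' rule: take 5% of the delta-k of
    depletion as the depletion reactivity uncertainty."""
    return fraction * delta_k_depletion

# With the benchmark-derived values quoted above (cooling credited)
# and an assumed fresh-fuel limit of 0.95:
limit = max_allowed_keff(0.95, 0.0025, 0.0064)
# A 0.20 delta-k of depletion under the Kopp rule gives 0.0100,
# larger than the 0.0064 benchmark-derived uncertainty:
kopp = kopp_uncertainty(0.20)
```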
International Nuclear Information System (INIS)
A calculational benchmark focused on VVER-440 burnup credit, similar to that of the OECD/NEA/NSC Burnup Credit Benchmark Working Group, was proposed at the 1996 AER Symposium. Its first part, CB1, was specified there, whereas the second part, CB2, was specified a year later at the 1997 AER Symposium in Zittau. A final statistical evaluation of the CB1 results is presented, together with a summary of the CB2 results obtained to date. Further, it is proposed that the effect of the axial burnup profile of VVER-440 spent fuel on criticality (the 'end effect') be studied in the CB3 benchmark problem of an infinite array of VVER-440 spent fuel rods. (author)
Development of Benchmark Examples for Delamination Onset and Fatigue Growth Prediction
Krueger, Ronald
2011-01-01
An approach for assessing the delamination propagation and growth capabilities in commercial finite element codes was developed and demonstrated for the Virtual Crack Closure Technique (VCCT) implementations in ABAQUS. The Double Cantilever Beam (DCB) specimen was chosen as an example. First, benchmark results to assess delamination propagation capabilities under static loading were created using models simulating specimens with different delamination lengths. For each delamination length modeled, the load and displacement at the load point were monitored. The mixed-mode strain energy release rate components were calculated along the delamination front across the width of the specimen. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. The calculated critical loads and critical displacements for delamination onset for each delamination length modeled were used as a benchmark. The load/displacement relationship computed during automatic propagation should closely match the benchmark case. Second, starting from an initially straight front, the delamination was allowed to propagate based on the algorithms implemented in the commercial finite element software. The load-displacement relationship obtained from the propagation analysis results and the benchmark results were compared. Good agreements could be achieved by selecting the appropriate input parameters, which were determined in an iterative procedure.
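One common way to form such a failure index is the Benzeggagh-Kenane (B-K) mixed-mode criterion; the sketch below uses that generic form with illustrative material constants, not the specific values of the study:

```python
def bk_toughness(mode_mix, g1c, g2c, eta):
    """Mixed-mode fracture toughness Gc from the B-K criterion:
    Gc = GIc + (GIIc - GIc) * (GII/GT)**eta."""
    return g1c + (g2c - g1c) * mode_mix ** eta

def failure_index(g1, g2, g1c, g2c, eta=2.0):
    """GT / Gc, the ratio of total strain energy release rate to the
    mixed-mode toughness; delamination onset is predicted where the
    index reaches 1.0 along the delamination front."""
    gt = g1 + g2
    gc = bk_toughness(g2 / gt, g1c, g2c, eta)
    return gt / gc

# Pure mode I loading exactly at the toughness gives an index of 1;
# the constants are illustrative graphite/epoxy-like values:
idx_onset = failure_index(g1=0.17, g2=0.0, g1c=0.17, g2c=0.49)
idx_safe = failure_index(g1=0.10, g2=0.10, g1c=0.17, g2c=0.49)
```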
International Nuclear Information System (INIS)
This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described
PWR experimental benchmark analysis using WIMSD and PRIDE codes
International Nuclear Information System (INIS)
Highlights: • PWR experimental benchmark calculations were performed using WIMSD and PRIDE codes. • Various models for lattice cell homogenization were used. • Multiplication factors, power distribution and reaction rates were studied. • The effect of cross section libraries on these parameters was analyzed. • The results were compared with experimental and reported results. - Abstract: The PWR experimental benchmark problem defined by ANS was analyzed using WIMSD and PRIDE codes. Different modeling methodologies were used to calculate the infinite and effective multiplication factors. Relative pin power distributions were calculated for infinite lattice and critical core configurations, while reaction ratios were calculated for infinite lattice only. The discrete ordinate method (DSN) and collision probability method (PERSEUS) were used in each calculation. Different WIMSD cross-section libraries based on ENDF/B-VI.8, ENDF/B-VII.0, IAEA, JEF-2.2, JEFF-3.1 and JENDL-3.2 nuclear data files were also employed in the analyses. Comparison was made with experimental data and other reported results in order to find a suitable strategy for PWR analysis
RISKIND verification and benchmark comparisons
International Nuclear Information System (INIS)
This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology, and atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.
Energy Technology Data Exchange (ETDEWEB)
Suter, G.W. II [Oak Ridge National Lab., TN (United States); Tsao, C.L. [Duke Univ., Durham, NC (United States). School of the Environment
1996-06-01
This report presents potential screening benchmarks for the protection of aquatic life from contaminants in water. Because there is no guidance on which benchmarks to use for screening, a set of alternative benchmarks is presented herein for chemicals that have been detected on the Oak Ridge Reservation. The report also presents the data used to calculate the benchmarks and the sources of those data, compares the benchmarks, and discusses their relative conservatism and utility. In this update, benchmark values are revised where appropriate, new benchmark values are added, secondary sources are replaced by primary sources, and the sources and derivation of all values are documented more completely.
Benchmark problems of start-up core physics of High Temperature Engineering Test Reactor (HTTR)
International Nuclear Information System (INIS)
The experimental data from the HTTR's start-up core physics tests are useful for verifying design codes for commercial HTGRs because of the similarities in core size and excess reactivity. From this viewpoint, it is worthwhile to carry out benchmark tests of design codes using data from the start-up core physics experiments planned for the HTTR. The evaluations of the first criticality, the excess reactivity of annular cores, etc., are proposed as the benchmark problem. Our precalculations showed that diffusion calculations give a larger excess reactivity, and a smaller number of fuel columns for the first criticality, than Monte Carlo calculations do. 19 refs
LAPUR-K BWR stability benchmark
International Nuclear Information System (INIS)
This paper documents the stability benchmark of the LAPUR-K code using the measurements taken at the Ringhals Unit 1 plant over four cycles of operation. This benchmark was undertaken to demonstrate the ability of LAPUR-K to calculate the decay ratios for both core-wide and regional mode oscillations. This benchmark contributes significantly to assuring that LAPUR-K can be used to define the exclusion region for the Monticello Plant in response to recent US Nuclear Regulatory Commission notices concerning oscillation observed at Boiling Water Reactor plants. Stability is part of Northern States Power Reload Safety Evaluation of the Monticello Plant
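The decay ratio at the center of such stability benchmarks can be illustrated with a simple peak-ratio estimate (a much-simplified stand-in for the frequency-domain calculation a code like LAPUR-K performs):

```python
def decay_ratio(peaks):
    """Decay ratio estimated as the mean ratio of successive
    oscillation peak amplitudes; DR < 1 indicates a damped (stable)
    oscillation and DR >= 1 an undamped or growing one."""
    ratios = [b / a for a, b in zip(peaks, peaks[1:])]
    return sum(ratios) / len(ratios)

# A damped power oscillation whose amplitude halves every cycle
# (hypothetical peak amplitudes):
dr = decay_ratio([1.0, 0.5, 0.25, 0.125])  # 0.5
```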
Vreede, F. A.
1981-05-01
The manual of instructions for the user of the CSIR triaxial rock stress measuring equipment is critically examined. It is shown that the values of the rock stresses can be obtained from the strain gauge records by means of explicit formulae, which makes the manual's computer program obsolete. Furthermore statistical methods are proposed to check for faulty data and inhomogeneity in rock properties and virgin stress. The possibility of non-elastic behavior of the rock during the test is also checked. A new computer program based on the explicit functions and including the check calculations is presented. It is much more efficient than the one in the manual since it does not require computer sub-routines, allowing it to be used directly on any modern computer. The output of the new program is in a format suitable for direct inclusion in the report of an investigation using strain cell results.
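The kind of explicit formulae referred to can be illustrated with the isotropic linear-elastic inversion of normal strains to normal stresses (a generic Hooke's-law sketch with assumed rock properties, not the CSIR cell's actual equations, which also involve the gauge-rosette geometry):

```python
def stresses_from_strains(eps, e_mod, nu):
    """Isotropic linear-elastic inversion of the three normal strains
    to normal stresses in Lame form:
    sigma_i = lam * tr(eps) + 2 * mu * eps_i."""
    ex, ey, ez = eps
    lam = e_mod * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = e_mod / (2.0 * (1.0 + nu))
    tr = ex + ey + ez
    return tuple(lam * tr + 2.0 * mu * e for e in (ex, ey, ez))

# Hydrostatic strain state in a rock with assumed E = 60 GPa, nu = 0.25:
sigma = stresses_from_strains((1e-4, 1e-4, 1e-4), e_mod=60e9, nu=0.25)
```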
DEFF Research Database (Denmark)
Friberg, Henrik A.
This document constitutes the technical reference manual of the Conic Benchmark Format with file extension .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hotstart capability through sequences of changes.
International Nuclear Information System (INIS)
The Atucha I Nuclear Power Plant (CNA-I) has enough room to store its spent fuel (SF) under water in its two pool houses until the middle of 2015. Before that date there is a need for an interim dry storage system for spent fuel that would make it possible to empty at least one of the pools, whether to keep the plant operating if its useful life is extended, or to be able to empty the reactor core in case of decommissioning. Nucleolectrica Argentina S.A. (NA-SA) and the Comision Nacional de Energia Atomica (CNEA), due to their joint responsibility in the management of the SF, have proposed interim dry storage systems. These systems have to be evaluated in order to choose one of them by the end of 2006. In this work the Monte Carlo code MCNP was used to perform the criticality and shielding calculations corresponding to the model proposed by CNEA. This model proposes the storage of sealed containers holding 36 or 37 SF in concrete modules. Each container is filled in the pool houses and transported to the module in a transfer cask with lead walls. The results of the criticality calculations indicate that the proposed SF solutions amply fulfil the subcriticality requirements, even in postulated extreme accident situations. Regarding the transfer cask, the SF dose rate estimates provide feedback for the design, aimed at geometry and shielding improvements. Regarding the storage modules, ranges of concrete wall thicknesses are suggested in order to fulfil the dose requirements stated by the Autoridad Regulatoria Nuclear Argentina
Aeroelastic Benchmark Experiments Project
National Aeronautics and Space Administration — M4 Engineering proposes to conduct canonical aeroelastic benchmark experiments. These experiments will augment existing sources for aeroelastic data in the...
International Nuclear Information System (INIS)
Monte Carlo criticality calculation allows one to estimate the effective multiplication factor as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (high burnup profile, complete reactor core, ...) may induce biased estimations of keff or reaction rates. In order to improve the robustness of the iterative Monte Carlo methods, a coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo: the initial guess is automated, the sampling of fission sites is modified, and the random walk of neutrons is altered using splitting and Russian roulette strategies. An automated convergence detection method has been developed. It locates and suppresses the transient due to initialization in an output series, applied here to keff and Shannon entropy. It relies on modeling stationary series by a first-order autoregressive process and applying statistical tests based on a Student bridge statistic. This method can easily be extended to any output of an iterative Monte Carlo. Methods developed in this thesis are tested on different test cases. (author)
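The Shannon entropy diagnostic mentioned in this abstract is commonly computed over a spatial binning of the fission source; the sketch below is illustrative only (the binning and function name are assumptions, not the thesis author's code):

```python
import math

def shannon_entropy(source_counts):
    """Shannon entropy (in bits) of a binned fission-source distribution.

    source_counts: list of fission-site counts per spatial mesh bin.
    A stationary entropy series is one common sign that the source
    iteration has converged past its initial transient.
    """
    total = sum(source_counts)
    h = 0.0
    for c in source_counts:
        if c > 0:
            p = c / total          # probability of a site falling in this bin
            h -= p * math.log2(p)  # entropy contribution of the bin
    return h
```

A uniformly distributed source over N bins gives the maximum entropy log2(N); a source collapsed into one bin gives 0, so drifting entropy flags an unconverged source.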
Hindar, A.; Posch, M.; Gunn, John
2000-01-01
In Sudbury, Canada, large reductions in sulfur emissions have resulted in reduced critical load exceedances and partial recovery of many lakes in the Killarney Provincial Park. The First-order Acidity Balance (FAB) model to calculate critical loads (CLs) for surface water includes the potentially acidifying part of nitrogen, and takes into account the retention of nitrogen in both the terrestrial and aquatic parts of the catchment. We have applied the FAB model to Killarney lakes, and critical l...
The impact and applicability of critical experiment evaluations
Energy Technology Data Exchange (ETDEWEB)
Brewer, R. [Los Alamos National Lab., NM (United States)
1997-06-01
This paper very briefly describes a project to evaluate previously performed critical experiments. The evaluation is intended for use by criticality safety engineers to verify calculations, and may also be used to identify data which need further investigation. The evaluation process is briefly outlined; the accepted benchmark critical experiments will be used as a standard for verification and validation. The end result of the project will be a comprehensive reference document.
DEFF Research Database (Denmark)
Bogetoft, Peter; Nielsen, Kurt
2005-01-01
We discuss the design of interactive, internet-based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and thereby to explore...
Blank, J.L.T.
2008-01-01
Research report by IPSE Studies (Faculty of Technology, Policy and Management). Guide to the secondary education (VO) benchmark [Handleiding benchmark VO], 25 November 2008, by J.L.T. Blank. A guide to reading the i
Benchmarking of vocational education programmes [Benchmark af erhvervsuddannelserne]
DEFF Research Database (Denmark)
Bogetoft, Peter; Wittrup, Jesper
In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated: the schools offer a wide range of different programmes, which makes it difficult...
Toward Establishing a Realistic Benchmark for Airframe Noise Research: Issues and Challenges
Khorrami, Mehdi R.
2010-01-01
The availability of realistic benchmark configurations is essential to enable the validation of current Computational Aeroacoustic (CAA) methodologies and to further the development of new ideas and concepts that will foster the technologies of the next generation of CAA tools. The selection of a real-world configuration, the subsequent design and fabrication of an appropriate model for testing, and the acquisition of the necessary, comprehensive aeroacoustic database are critical steps that demand great care and attention. In this paper, a brief account of the nose landing-gear configuration being proposed jointly by NASA and the Gulfstream Aerospace Company as an airframe noise benchmark is provided. The underlying thought processes and the resulting building-block steps that were taken during the development of this benchmark case are given. Resolution of critical, yet conflicting, issues is discussed: the desire to maintain geometric fidelity versus model modifications required to accommodate instrumentation; balancing model scale size versus Reynolds number effects; and time, cost, and facility availability versus important parameters like surface finish and installation effects. The decisions taken during the experimental phase of a study can significantly affect the ability of a CAA calculation to reproduce the prevalent flow conditions and associated measurements. For the nose landing gear, the most critical of such issues are highlighted and the compromises made to resolve them are discussed. The results of these compromises will be summarized by examining the positive attributes and shortcomings of this particular benchmark case.
International Nuclear Information System (INIS)
The sensitivity of the calculated critical masses of a number of simple systems to changes in the basic neutron scattering data has been investigated. The systems considered are spheres of 29% U235 and 93.5% U235, both bare and reflected by thick natural uranium. The calculations have been carried out using the Carlson Sn method with 4 energy groups, and the percentage changes in the calculated critical masses of the different systems, due to specified changes in the various aspects of the neutron scattering data, have been obtained. The results are presented and discussed with particular reference to the adjustment of the basic data to give good agreement with experimental critical sizes. The basic data on which these calculations have been based are those given in AWRE Report 0-28/60. (author)
Benchmarking of human resources management
David M. Akinnusi
2008-01-01
This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HR...
Vries, de W.; Schütze, G.; Lofts, S.; Tipping, E.; Meili, M.; Römkens, P.F.A.M.; Groenenberg, J.E.
2005-01-01
This report on heavy metals provides up-to-date methodologies to derive critical loads for the heavy metals cadmium (Cd), lead (Pb) and mercury (Hg) for both terrestrial and aquatic ecosystems. It presents background information to a Manual on Critical Loads for those metals. Focus is given to the m
Toxicological Benchmarks for Wildlife
Energy Technology Data Exchange (ETDEWEB)
Sample, B.E.; Opresko, D.M.; Suter, G.W.
1993-01-01
Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
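The first-tier screening rule described above reduces to a simple comparison against the benchmark value; the following sketch is illustrative only (the function, chemical names, and units are assumptions, not taken from the report):

```python
def screen_contaminants(measured, benchmarks):
    """First-tier screening: retain a contaminant as a COPC when its
    measured concentration exceeds its NOAEL-based benchmark.

    measured:   {chemical: concentration in an environmental medium}
    benchmarks: {chemical: NOAEL-based benchmark in the same units}
    Returns {chemical: True if COPC (exceeds benchmark), else False}.
    """
    return {chem: conc > benchmarks[chem]
            for chem, conc in measured.items()
            if chem in benchmarks}
```

Chemicals without a benchmark are simply omitted here; in a real screening they would be carried forward for separate evaluation rather than dropped.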
Utilizing benchmark data from the ANL-ZPR diagnostic cores program
International Nuclear Information System (INIS)
The support of the criticality safety community is allowing the production of benchmark descriptions of several assemblies from the ZPR Diagnostic Cores Program. The assemblies have high sensitivities to nuclear data for a few isotopes. This can highlight limitations in nuclear data for selected nuclides or in standard methods used to treat these data. The present work extends the use of the simplified model of the U9 benchmark assembly beyond the validation of keff. Further simplifications have been made to produce a data testing benchmark in the style of the standard CSEWG benchmark specifications. Calculations for this data testing benchmark are compared to results obtained with more detailed models and methods to determine their biases. These biases or correction factors can then be applied in the use of the less refined methods and models. Data testing results using Versions IV, V, and VI of the ENDF/B nuclear data are presented for keff, f28/f25, c28/f25, and βeff. These limited results demonstrate the importance of studying other integral parameters in addition to keff in trying to improve nuclear data and methods and the importance of accounting for methods and/or modeling biases when using data testing results to infer the quality of the nuclear data files
Ripphausen, Peter; Wassermann, Anne Mai; Bajorath, Jürgen
2011-10-24
Benchmark calculations are essential for the evaluation of virtual screening (VS) methods. Typically, classes of known active compounds taken from the medicinal chemistry literature are divided into reference molecules (search templates) and potential hits that are added to background databases assumed to consist of compounds not sharing this activity. Then VS calculations are carried out, and the recall of known active compounds is determined. However, conventional benchmarking is affected by a number of problems that reduce its value for method evaluation. In addition to often insufficient statistical validation and the lack of generally accepted evaluation standards, the artificial nature of typical benchmark settings is often criticized. Retrospective benchmark calculations generally overestimate the potential of VS methods and do not scale with their performance in prospective applications. In order to provide additional opportunities for benchmarking that more closely resemble practical VS conditions, we have designed a publicly available compound database (DB) of reproducible virtual screens (REPROVIS-DB) that organizes information from successful ligand-based VS applications including reference compounds, screening databases, compound selection criteria, and experimentally confirmed hits. Using the currently available 25 hand-selected compound data sets, one can attempt to reproduce successful virtual screens with other than the originally applied methods and assess their potential for practical applications.
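The recall-of-known-actives figure of merit used in conventional VS benchmarking, as described above, can be sketched in a few lines (illustrative only; the function and identifiers are assumptions, not part of REPROVIS-DB):

```python
def recall_at_n(ranked_ids, active_ids, n):
    """Fraction of known active compounds recovered among the top-n
    compounds of a virtual-screening ranking.

    ranked_ids: compound identifiers sorted by screening score (best first)
    active_ids: identifiers of the known actives hidden in the database
    """
    top = set(ranked_ids[:n])
    actives = set(active_ids)
    return len(top & actives) / len(actives)
```

In a retrospective benchmark this number is computed over the database of decoys plus hidden actives; the abstract's point is that such recall values tend to overestimate prospective performance.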
Russian Aviation Business: Critical Areas For Benchmarking
Directory of Open Access Journals (Sweden)
Natalia Vepreva
2011-04-01
Full Text Available Russian aviation business has faced a challenge. There are only two ways to proceed: either to change quickly and effectively, or to stand still and slowly lose position. Best practices in creating production systems from three world-famous production companies were analyzed in order to arrive at the result, which is a basic foundation for a production system. The foundation consists of four major columns built on major ideologies: first, a supply chain management, production and internal logistics process with a supplier-customer ideology; second, a human resources management process with a deployed personnel development function; third, a quality management process serving and steering the production process; and fourth, a management structure adjusted according to the process value-based approach.
Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...
Benchmarking 232Th Evaluations with KBR and THOR Experiments
Institute of Scientific and Technical Information of China (English)
(no author listed)
2011-01-01
The n+232Th evaluations from CENDL-3.1, ENDF/B-VII.0, JENDL-3.3 and JENDL-4.0 were tested against the KBR series and the THOR benchmark from the ICSBEP Handbook. THOR is a Plutonium-Metal-Fast (PMF) criticality benchmark reflected with thorium metal.
DEFF Research Database (Denmark)
Seabrooke, Leonard; Wigan, Duncan
2015-01-01
Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as impo...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....
A framework for benchmarking land models
Directory of Open Access Journals (Sweden)
Y. Q. Luo
2012-10-01
Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
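One possible reading of the mismatch metrics proposed in this framework is a normalized RMSE compared against an a priori acceptability threshold; the sketch below is illustrative only (the metric choice, function names, and threshold value are assumptions, not the paper's actual scoring system):

```python
def nrmse(model, obs):
    """Root-mean-square error of model output against a benchmark data
    set, normalized by the mean of the observations."""
    n = len(obs)
    mse = sum((m - o) ** 2 for m, o in zip(model, obs)) / n
    mean_obs = sum(obs) / n
    return (mse ** 0.5) / mean_obs

def meets_benchmark(model, obs, threshold=0.5):
    """Pass/fail test of model skill against an a priori threshold,
    in the spirit of metric type (1) in the framework."""
    return nrmse(model, obs) <= threshold
```

A full scoring system (metric type (2)) would combine such per-process scores across variables, sites, and time scales into a weighted aggregate.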
Energy Technology Data Exchange (ETDEWEB)
Rowlands, John, E-mail: rowlandsjl@aol.com
2009-03-15
The core region cells of the Zebra fast critical assembly MZA comprise 14 plates in a square steel tube, with 12 cells being stacked axially to form the core section of the assembly. The cells can be modelled at different levels of detail. The most detailed is a three-dimensional representation in which the core (The word core is used to describe both the region of a plate containing the main material, such as plutonium, UO{sub 2} or sodium, and the region of the assembly containing fissile material cells.) and canning regions of the plates, and the void gaps between the edges of the plates and the steel tube and between tubes, are represented. Simplified models include a three-dimensional representation in which the void regions are combined with the tube material. A further simplified three-dimensional model, called the MURAL model, represents the core regions of the plates but homogenises the canning, tube material and void regions. Two types of one-dimensional slab geometry model are found in the literature: one in which the materials are homogenised within each of the three axial slab regions of a canned plate (plate core and upper and lower canning regions), and a further simplified version in which the plate is modelled as a single region, the compositions being averaged over the whole thickness of the plate, comprising the plate core material, the canning and the tube material. MONK Monte Carlo calculations have been made for each of these models, and also for the fully homogenised cells, and the k-effective values, core sodium void reactivities and reaction rate ratios are compared.
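The fully homogenised cell model referred to above amounts to volume-weighted averaging of nuclide number densities over the cell regions; a minimal sketch of that averaging (illustrative only, not the MONK input processing):

```python
def homogenize(regions):
    """Volume-weighted homogenization of nuclide number densities.

    regions: list of (volume, {nuclide: number_density}) tuples, one per
    cell region (plate core, canning, tube, void, ...).
    Returns the smeared {nuclide: number_density} for the whole cell.
    """
    total_v = sum(v for v, _ in regions)
    out = {}
    for v, dens in regions:
        for nuc, n in dens.items():
            # each region contributes in proportion to its volume fraction
            out[nuc] = out.get(nuc, 0.0) + n * v / total_v
    return out
```

The comparison in the paper quantifies what this smearing does to k-effective, sodium void reactivity, and reaction rate ratios relative to the explicit plate models.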
Benchmark physics experiment of metallic-fueled LMFBR at FCA
International Nuclear Information System (INIS)
A benchmark physics experiment of a metallic-fueled LMFBR was performed at Japan Atomic Energy Research Institute's Fast Critical Assembly (FCA) in order to examine the applicability of the data and methods used for the design of a metallic-fueled core. The nuclear data and the calculation methods used for LMFBR core design have been improved based on oxide fuel core experiments. A metallic-fueled core has a harder neutron spectrum than an oxide-fueled core and has typical nuclear characteristics affected by the neutron spectrum. In this study, the applicability of the conventional calculation method to the design of the metallic-fueled core was examined by comparing the calculated values of the nuclear characteristics with the measured values. The experimental core (FCA assembly XVI-1) was selected by referring to the conceptual design of the Central Research Institute of Electric Power Industry. The calculated-to-experiment (C/E) value for keff of assembly XVI-1 was 1.001. From this it was concluded that, as far as criticality is concerned, the prediction accuracy of the conventional calculation for the metallic-fueled core is similar to that for an oxide-fueled core. (author)
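The calculated-to-experiment (C/E) ratio quoted above is a simple figure of merit applied benchmark by benchmark; a minimal sketch (names are illustrative):

```python
def c_over_e(calculated, experimental):
    """Calculated-to-experiment ratios for a set of benchmark quantities.

    calculated:   {benchmark name: calculated value, e.g. keff}
    experimental: {benchmark name: measured reference value}
    A ratio of 1.0 means perfect agreement; deviations from unity
    quantify the bias of the data and methods for that benchmark.
    """
    return {name: calculated[name] / experimental[name]
            for name in experimental if name in calculated}
```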
The KMAT: Benchmarking Knowledge Management.
de Jager, Martha
Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…
Energy Technology Data Exchange (ETDEWEB)
Masse, A. [CEA Saclay, DM2S-SERMA-CP2C, 91 - Gif-sur-Yvette (France); Abeguile, F. [Communication Systemes 22, avenue Galilee - 92350 Le Plessis Robinson (France)
2010-07-01
The authors briefly present the CIRCEE software, which has been developed to assess, almost in real time, the doses, dose rates and ambient dose equivalent rates induced by neutrons and gamma radiation produced by fission, and by secondary gamma radiation, through 3D civil structures in a nuclear installation. It is meant to be used in the event of a criticality accident. They present the methodology, the scope and domain of application, and the software operating mode
Benchmarking the Netherlands. Benchmarking for growth
International Nuclear Information System (INIS)
This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy, in other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aimed at higher productivity growth. Throughout
Benchmarking in Mobarakeh Steel Company
Directory of Open Access Journals (Sweden)
Sasan Ghasemi
2008-05-01
Full Text Available Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.
Computational benchmark problem for deep penetration in iron
International Nuclear Information System (INIS)
A calculational benchmark problem which is simple to model and easy to interpret is described. The benchmark consists of monoenergetic 2-, 4-, or 40-MeV neutrons normally incident upon a 3-m-thick pure iron slab. Currents, fluxes, and radiation doses are tabulated throughout the slab
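For orientation on such deep-penetration problems, the uncollided component of the flux through a slab follows simple exponential attenuation; the sketch below is illustrative only and is not the benchmark's tabulated solution, which includes the scattered flux that dominates at depth:

```python
import math

def uncollided_flux(sigma_t, depth, source=1.0):
    """Uncollided flux of a normally incident monoenergetic beam after
    penetrating `depth` cm of material with total macroscopic cross
    section `sigma_t` (1/cm): phi = phi0 * exp(-sigma_t * depth).
    """
    return source * math.exp(-sigma_t * depth)
```

Over 3 m of iron the uncollided term is vanishingly small, which is precisely why the benchmark exercises a transport code's treatment of multiply scattered neutrons.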
Energy Technology Data Exchange (ETDEWEB)
Pena-Monferrer, C.; Miquel veyrat, A.; Munoz-Cobo, J. L.; Chiva Vicent, S.
2016-08-01
In recent years, due among other factors to the slowing down of the nuclear industry, investment in the development and validation of CFD codes applied specifically to the problems of the nuclear industry has been seriously hampered. Thus the International Benchmark Exercises (IBE) sponsored by the OECD/NEA have been fundamental to analyzing the use of CFD codes in the nuclear industry, because although these codes are mature in many fields, doubts still exist about them in critical aspects of thermal-hydraulic calculations, even in single-phase scenarios. The Polytechnic University of Valencia (UPV) and the Universitat Jaume I (UJI), sponsored by the Nuclear Safety Council (CSN), have actively participated in all benchmarks proposed by the NEA, as well as in the expert meetings. In this paper, a summary of the participation in the various IBEs is given, describing each benchmark itself, the CFD model created for it, and the main conclusions. (Author)
Definition and Analysis of Heavy Water Reactor Benchmarks for Testing New Wims-D Libraries
International Nuclear Information System (INIS)
This work is part of the IAEA-WIMS Library Update Project (WLUP). A group of heavy water reactor benchmarks has been selected for testing new WIMS-D libraries, including calculations with the WIMSD5B program and the analysis of results. These benchmarks cover a wide variety of reactors and conditions, from fresh fuels to high burnup, and from natural to enriched uranium. Besides, each benchmark includes variations in lattice pitch and in coolants (normally heavy water and void). Multiplication factors with critical experimental bucklings and other parameters are calculated and compared with experimental reference values. The WIMS libraries used for the calculations were generated with basic data from JEF-2.2 Rev.3 (JEF) and ENDF/B-VI Release 5 (E6). Results obtained with the WIMS-86 (W86) library, included with the WIMSD5B package, from Winfrith, UK, with adjusted data, are also included, to show the improvements obtained with the new, non-adjusted libraries. The calculations with WIMSD5B were made with two methods (input program options): PIJ (two-dimensional collision probability method) and DSN (one-dimensional Sn method, with homogenization of materials by ring). The general conclusions are: the library based on JEF data and the DSN method give the best results, which on average are acceptable
Benchmark Data Through The International Reactor Physics Experiment Evaluation Project (IRPHEP)
Energy Technology Data Exchange (ETDEWEB)
J. Blair Briggs; Dr. Enrico Sartori
2005-09-01
The International Reactor Physics Experiments Evaluation Project (IRPhEP) was initiated by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency’s (NEA) Nuclear Science Committee (NSC) in June of 2002. The IRPhEP focus is on the derivation of internationally peer reviewed benchmark models for several types of integral measurements, in addition to the critical configuration. While the benchmarks produced by the IRPhEP are of primary interest to the Reactor Physics Community, many of the benchmarks can be of significant value to the Criticality Safety and Nuclear Data Communities. Benchmarks that support the Next Generation Nuclear Plant (NGNP), for example, also support fuel manufacture, handling, transportation, and storage activities and could challenge current analytical methods. The IRPhEP is patterned after the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and is closely coordinated with the ICSBEP. This paper highlights the benchmarks that are currently being prepared by the IRPhEP that are also of interest to the Criticality Safety Community. The different types of measurements and associated benchmarks that can be expected in the first publication and beyond are described. The protocol for inclusion of IRPhEP benchmarks as ICSBEP benchmarks and for inclusion of ICSBEP benchmarks as IRPhEP benchmarks is detailed. The format for IRPhEP benchmark evaluations is described as an extension of the ICSBEP format. Benchmarks produced by the IRPhEP add a new dimension to criticality safety benchmarking efforts and expand the collection of available integral benchmarks for nuclear data testing. The first publication of the "International Handbook of Evaluated Reactor Physics Benchmark Experiments" is scheduled for January of 2006.
Benchmarking for plant maintenance
Energy Technology Data Exchange (ETDEWEB)
Komonen, K.; Ahonen, T.; Kunttu, S. (VTT Technical Research Centre of Finland, Espoo (Finland))
2010-05-15
The product of the project, e-Famemain, is a new kind of benchmarking tool based on many years' research efforts within Finnish industry. It helps to evaluate plants' performance in operations and maintenance by making industrial plants comparable with the aid of statistical methods. The system is updated continually and automatically. It automatically carries out multivariate statistical analysis when data are entered into the system, along with many other statistical operations. Many studies within Finnish industry during the last ten years have revealed clear causalities between various performance indicators. These causalities should be taken into account when utilising benchmarking or forecasting indicator values, e.g. for new investments. The benchmarking system consists of five sections: a data input section, a positioning section, a locating-differences section, a best-practices-and-planning section, and finally statistical tables. (orig.)
DEFF Research Database (Denmark)
Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela
This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...
Analysis of ANS LWR physics benchmark problems.
Energy Technology Data Exchange (ETDEWEB)
Taiwo, T. A.
1998-07-29
Various Monte Carlo and deterministic solutions to the three PWR Lattice Benchmark Problems recently defined by the ANS Ad Hoc Committee on Reactor Physics Benchmarks are presented. These solutions were obtained using the VIM continuous-energy Monte Carlo code and the DIF3D/WIMS-D4M code package implemented at the Argonne National Laboratory. The code results for the K{sub eff} and relative pin power distribution are compared to measured values. Additionally, code results for the three benchmark-prescribed infinite lattice configurations are also intercompared. The results demonstrate that the codes produce very good estimates of both the K{sub eff} and power distribution for the critical core and the lattice parameters of the infinite lattice configuration.
Benchmarking for Best Practice
Zairi, Mohamed
1998-01-01
Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. .It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l
Validation of SCALE 6.2 Criticality Calculations Using KENO V.A and KENO-VI
Energy Technology Data Exchange (ETDEWEB)
Marshall, William B. J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rearden, Bradley T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jones, Elizabeth L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-01-01
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. Since 1980, regulators, industry, and research institutions around the world have relied on SCALE for nuclear safety analysis and design. SCALE 6.2 provides several new capabilities and significant improvements in many existing features for criticality safety analysis.
Energy Technology Data Exchange (ETDEWEB)
Koponen, B.L.; Hampel, V.E.
1982-10-21
This compilation contains 688 complete summaries of papers on nuclear criticality safety as presented at meetings of the American Nuclear Society (ANS). The selected papers contain criticality parameters for fissile materials derived from experiments and calculations, as well as criticality safety analyses for fissile material processing, transport, and storage. The compilation was developed as a component of the Nuclear Criticality Information System (NCIS) now under development at the Lawrence Livermore National Laboratory. The compilation is presented in two volumes: Volume 1 contains a directory to the ANS Transactions volume and page number where each summary was originally published, the author concordance, and the subject concordance derived from the keyphrases in titles. Volume 2 contains, in chronological order, the full-text summaries, reproduced here by permission of the American Nuclear Society from their Transactions, volumes 1-41.
Study on the conservative factors for burnup credit criticality calculation
Institute of Scientific and Technical Information of China (English)
刘驰; 蒋校丰; 张少泓
2012-01-01
When applying burnup credit technology to criticality safety analysis for spent fuel storage or transportation, it is essential to confirm that the conditions adopted in the criticality calculation are sufficient to envelop the severest conditions that may be encountered in engineering applications. Taking the OECD/NEA burnup credit criticality benchmarks as sample problems, we study the effect of some important factors that may affect the conservatism of the results of spent fuel system criticality safety analysis. The effects of different nuclide credit strategies, different cooling times, and the axial burnup profile (end effect) are studied using the STARBUCS module of the SCALE 5.1 software package, and conclusions about the conservatism of each of these factors are drawn.
Leenstra, Ferry; Maurer, Veronika; Galea, Fabien; Bestman, Monique; Amsler-Kepalaite, Zivile; Visscher, Jeroen; Vermeij, Izak; Krimpen, Marinus
2014-01-01
Free range and organic systems expose the laying hen more to unexpected events and adverse climatic conditions than barn and cage systems do. In France, The Netherlands and Switzerland, the requirements for a hen suited to production in free range and organic systems were discussed with farmers. For these systems the farmers preferred a more 'robust' hen, more specifically defined as a heavier hen with good eating capacity. Benchmarking of flocks in a web-based management program in The Netherl...
Full sphere hydrodynamic and dynamo benchmarks
Marti, P.
2014-01-26
Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating, and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation, with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no-slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a spherical-shell domain (a sphere possessing an inner core) is not adequate, since the results differ from whole-sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.
KENO-VI Primer: A Primer for Criticality Calculations with SCALE/KENO-VI Using GeeWiz
Energy Technology Data Exchange (ETDEWEB)
Bowman, Stephen M [ORNL
2008-09-01
The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer software system developed at Oak Ridge National Laboratory is widely used and accepted around the world for criticality safety analyses. The well-known KENO-VI three-dimensional Monte Carlo criticality computer code is one of the primary criticality safety analysis tools in SCALE. The KENO-VI primer is designed to help a new user understand and use the SCALE/KENO-VI Monte Carlo code for nuclear criticality safety analyses. It assumes that the user has a college education in a technical field. There is no assumption of familiarity with Monte Carlo codes in general or with SCALE/KENO-VI in particular. The primer is designed to teach by example, with each example illustrating two or three features of SCALE/KENO-VI that are useful in criticality analyses. The primer is based on SCALE 6, which includes the Graphically Enhanced Editing Wizard (GeeWiz) Windows user interface. Each example uses GeeWiz to provide the framework for preparing input data and viewing output results. Starting with a Quickstart section, the primer gives an overview of the basic requirements for SCALE/KENO-VI input and allows the user to quickly run a simple criticality problem with SCALE/KENO-VI. The sections that follow Quickstart include a list of basic objectives at the beginning that identifies the goal of the section and the individual SCALE/KENO-VI features that are covered in detail in the sample problems in that section. Upon completion of the primer, a new user should be comfortable using GeeWiz to set up criticality problems in SCALE/KENO-VI. The primer provides a starting point for the criticality safety analyst who uses SCALE/KENO-VI. Complete descriptions are provided in the SCALE/KENO-VI manual. Although the primer is self-contained, it is intended as a companion volume to the SCALE/KENO-VI documentation. (The SCALE manual is provided on the SCALE installation DVD.) The primer provides specific examples of
Benchmarking Danish Industries
DEFF Research Database (Denmark)
Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette
2003-01-01
compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...
International Nuclear Information System (INIS)
The meeting of the Radiation Energy Spectra Unfolding Workshop organized by the Radiation Shielding Information Center is discussed. Plans for the unfolding-code benchmarking effort to establish standardization methods for both few-channel neutron and many-channel gamma-ray and neutron spectroscopy problems are presented
Western Interstate Commission for Higher Education, 2013
2013-01-01
Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…
Bers, Trudy
2012-01-01
Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…
Benchmarking Public Procurement 2016
World Bank Group
2015-01-01
Benchmarking Public Procurement 2016 Report aims to develop actionable indicators which will help countries identify and monitor policies and regulations that impact how private sector companies do business with the government. The project builds on the Doing Business methodology and was initiated at the request of the G20 Anti-Corruption Working Group.
International Nuclear Information System (INIS)
The results of experiments performed at the RRC 'Kurchatov Institute' thermal-physics critical facility SVD are presented herein. The experiments modeled the drawing together of two fuel rods until they touched in a WWER-1000 reactor fuel assembly (FA). The experimental model is a 7-rod bundle with a heated length of 1 m. The primary goal of the experiments was to quantify the reduction in critical heat flux, relative to the basic model (without disturbances of the FA geometry), caused by a local disturbance of the rod-bundle geometry. The experiments show that the decrease in critical heat flux depends on the combination of regime parameters and amounts to 15% in the most unfavorable case (Authors)
Storage-Intensive Supercomputing Benchmark Study
Energy Technology Data Exchange (ETDEWEB)
Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A
2007-10-30
Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40GB NAND Flash parallel disk array, the Fusion-io. The Fusion system specs are as follows
40 CFR 141.709 - Developing the disinfection profile and benchmark.
2010-07-01
... and benchmark. 141.709 Section 141.709 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... and benchmark. (a) Systems required to develop disinfection profiles under § 141.708 must follow the... section to calculate a disinfection benchmark. (1) For each year of profiling data collected...
Energy Technology Data Exchange (ETDEWEB)
Descalle, M; Clouse, C; Pruet, J
2009-07-28
The authors have compared calculations of critical assembly activation ratios using 3 different Monte Carlo codes and one deterministic code. There is excellent agreement. Discrepancies between the different Monte Carlo codes are at the 1-2% level. Notably, the deterministic calculations with 87 groups are also in good agreement with the continuous-energy Monte Carlo results. The three codes underestimate the {sup 238}U(n,f) reaction, suggesting that there is room for improvement in the evaluation, or in the evaluations of other reactions influencing the spectrum in BigTen. Until statistical uncertainties are implemented in Mercury, they strongly advise long runs to guarantee sufficient convergence of the flux at high energies, and they strongly encourage comparing Mercury results to a well-developed and documented code such as MCNP5 and/or COG. It may be that ENDL2008 will be available for use in COG within a year. Finally, it may be worthwhile to add a 'standard' reaction rate tally similar to those implemented in COG and MCNP5, if the goal is to expand the central fission and activation ratio simulations to include isotopes that are not part of the specifications for the assembly material composition.
Energy Technology Data Exchange (ETDEWEB)
Abanades, Alberto [Universidad Politecnica de Madrid (Spain); Aliberti, Gerardo; Gohar, Yousry; Talamo, Alberto [ANL, Argonne (United States); Bornos, Victor; Kiyavitskaya, Anna [Joint Institute of Power Eng. and Nucl. Research ' Sosny' , Minsk (Belarus); Carta, Mario [ENEA, Casaccia (Italy); Janczyszyn, Jerzy [AGH-University of Science and Technology, Krakow (Poland); Maiorino, Jose [IPEN, Sao Paulo (Brazil); Pyeon, Cheolho [Kyoto University (Japan); Stanculescu, Alexander [IAEA, Vienna (Austria); Titarenko, Yury [ITEP, Moscow (Russian Federation); Westmeier, Wolfram [Wolfram Westmeier GmbH, Ebsdorfergrund (Germany)
2008-07-01
In December 2005, the International Atomic Energy Agency (IAEA) started a Coordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems'. The overall objective of the CRP, performed within the framework of the Technical Working Group on Fast Reactors (TWGFR) of IAEA's Nuclear Energy Department, is to increase the capability of interested Member States in developing and applying advanced reactor technologies in the area of long-lived radioactive waste utilization and transmutation. The specific objective of the CRP is to improve the present understanding of the coupling of an external neutron source (e.g. spallation source) with a multiplicative sub-critical core. The participants are performing computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. The CRP aims at integrating some of the planned experimental demonstration projects of the coupling between a sub-critical core and an external neutron source (e.g. YALINA Booster in Belarus, and Kyoto University's Critical Assembly (KUCA)). The objective of these experimental programs is to validate computational methods, obtain high-energy nuclear data, characterize the performance of sub-critical assemblies driven by external sources, and develop and improve techniques for sub-criticality monitoring. The paper summarizes preliminary results obtained to date for some of the CRP benchmarks. (authors)
Benchmarking i den offentlige sektor
DEFF Research Database (Denmark)
Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels
2008-01-01
In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, on the basis of four different applications of benchmarking. Regulation of utility companies is treated, after which...
Energy Technology Data Exchange (ETDEWEB)
Garcia, T.; Angeles, A.; Flores C, J., E-mail: teodoro.garcia@inin.gob.mx [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)
2013-10-15
In this work the nuclear safety conditions of the nuclear fuel warehouse of the TRIGA Mark III reactor of the Instituto Nacional de Investigaciones Nucleares (ININ) were determined, under both normal and accident conditions. The warehouse contains standard LEU 8.5/20 fuel elements, a control rod with a follower of standard LEU 8.5/20 fuel, LEU 30/20 fuel elements, and the SUR-100 reactor fuel. To verify the subcritical state of the warehouse, the effective multiplication factor (keff) was calculated. The keff calculation was carried out with the MCNPX code. (Author)
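The acceptance logic behind such a warehouse subcriticality check can be sketched in a few lines. This is a hedged illustration only: the administrative limit of 0.95, the number of sigmas, and the k-eff values below are assumed for the example and are not taken from the abstract, which does not report them.

```python
# A Monte Carlo code such as MCNPX reports k-eff together with a statistical
# uncertainty; a stored configuration is commonly accepted only if the upper
# bound of k-eff stays below an administrative limit (0.95 assumed here).

def is_subcritical(k_eff: float, sigma: float, upper_limit: float = 0.95,
                   n_sigmas: float = 3.0) -> bool:
    """Accept the configuration if k_eff + n_sigmas*sigma < upper_limit."""
    return k_eff + n_sigmas * sigma < upper_limit

print(is_subcritical(0.82, 0.001))   # comfortably subcritical
print(is_subcritical(0.949, 0.002))  # upper bound exceeds the assumed limit
```

The margin of several statistical sigmas guards against accepting a configuration whose true k-eff lies above the limit purely because of Monte Carlo noise.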
Pismensky, Artem L
2015-01-01
A method for calculating the $\varepsilon$-expansion in the model of a scalar field with $\varphi^3$-interaction, based on conformal bootstrap equations, is proposed. The technique rests on self-consistent skeleton equations involving the full propagator and the full triple vertex. Analytical computations of the Fisher index $\eta$ are performed in the four-loop approximation. The three-loop result coincides with the one obtained previously by the renormalization-group technique, which requires calculation of a larger number of Feynman diagrams. The four-loop result agrees with its numerical value obtained by other authors.
Cong, Haoxi; Li, Qingmin; Xing, Jinyuan; Li, Jinsong; Chen, Qiang
2015-06-01
The prompt extinction of the secondary arc is critical to single-phase reclosing of AC transmission lines, including half-wavelength power transmission lines. In this paper, a low-voltage physical experimental platform was established and the motion of the secondary arc was recorded by a high-speed camera. It was found that the arcing time of the secondary arc is closely related to its arc length. Through an analysis of the input and output power energy of the secondary arc, a new critical-length criterion for the arcing time was proposed. The arc chain model was then adopted to calculate the arcing time with both the traditional and the proposed critical-length criteria, and the simulation results were compared with the experimental data. The study showed that the arcing time calculated from the new critical-length criterion is more accurate, providing a reliable arcing-time criterion for modeling and simulation of the secondary arc on power transmission lines. Supported by the National Natural Science Foundation of China (Nos. 51277061 and 51420105011)
A Benchmarking System for Domestic Water Use
Directory of Open Access Journals (Sweden)
Dexter V. L. Hunt
2014-05-01
The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. Set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of the water use benchmarks is investigated by making changes to user behaviour and technology. The impacts of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and of including "external" gardening demands are investigated. This includes the impacts (in isolation and in combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are noted throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.
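A band-rating scheme of the kind the abstract describes can be sketched as a simple threshold lookup. The thresholds and band letters below are purely illustrative assumptions; the paper's actual band definitions are not given here.

```python
# Hypothetical sketch: map a household's per-capita daily water use
# (litres/person/day) to a benchmark band, best (A) to worst (F).

def water_band(litres_per_person_per_day: float) -> str:
    thresholds = [(80, "A"), (100, "B"), (120, "C"), (140, "D"), (160, "E")]
    for limit, band in thresholds:
        if litres_per_person_per_day < limit:
            return band
    return "F"  # above all thresholds

print(water_band(95))   # B
print(water_band(150))  # E
```

Any measure that reduces consumption, whether a behavioural change or a technology such as RWH or GW reuse, moves the household toward a better band, which is the property the paper exploits.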
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius
2006-01-01
An infrastructure is emerging that enables the positioning of populations of on-line, mobile service users. In step with this, research in the management of moving objects has attracted substantial attention. In particular, quite a few proposals now exist for the indexing of moving objects, and m...... of the benchmark to three spatio-temporal indexes - the TPR-, TPR*-, and Bx-trees. Representative experimental results and consequent guidelines for the usage of these indexes are reported....
Calculation and Research of Hydroplaning Critical Velocity
Institute of Scientific and Technical Information of China (English)
李强; 张卓; 张立
2011-01-01
Using the momentum theorem, hydroplaning speeds are calculated for two cases: small wedge angles (less than 0.4°) and larger wedge angles. Taking cars, medium trucks and heavy trucks as examples, the relationships between tire inflation pressure, water film thickness and hydroplaning speed are analyzed. The results show that, whether the wedge angle is small or large, the hydroplaning speed is proportional to the tire inflation pressure; when the wedge angle is large and the tire pressure is constant, the hydroplaning speed is inversely proportional to the water film thickness. The theoretically calculated hydroplaning speeds are verified against the NASA hydroplaning speed equation, and their reliability is found to meet the requirements.
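The NASA hydroplaning speed equation used for verification in the abstract above is commonly quoted (after Horne's NASA work) as V_p ≈ 10.35·√p, with V_p in mph and tire inflation pressure p in psi. A hedged sketch, noting that the constant and units here are the commonly cited form and are not taken from the paper itself:

```python
import math

def hydroplaning_speed_mph(tire_pressure_psi: float) -> float:
    """Estimated minimum dynamic hydroplaning speed (mph), V_p = 10.35*sqrt(p)."""
    return 10.35 * math.sqrt(tire_pressure_psi)

# A passenger-car tire at 36 psi begins to hydroplane at roughly 62 mph:
print(round(hydroplaning_speed_mph(36.0), 1))  # 62.1
```

The square-root dependence on inflation pressure matches the abstract's finding that hydroplaning speed rises with tire pressure.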
International Nuclear Information System (INIS)
We have developed a 3D, two-energy-group diffusion-theory code capable of calculating the eigenvalues lambda of a BWR reactor using nodal methods, with albedo boundary conditions (ALBEDO NODAL-LAMBDA) computed by the code itself from the properties of the reflector. The code calculates the subcriticality of the first harmonic, which is involved in the stability of the reactor against out-of-phase oscillations and is needed for calculating the decay rate from out-of-phase oscillation data. The code is very fast: in a few seconds it computes the first eigenvalues and eigenvectors, solving the discretized problem with different zero matrix elements. The code uses the LAPACK and ARPACK libraries. It was necessary to modify the LAPACK library to perform various operations on five non-diagonal matrices simultaneously, in order to reduce the number of library calls and to simplify the procedure for calculating the matrices in compressed CSR format. The code is validated by comparison with SIMULATE results for different cases and with the IAEA 3D benchmark. (Author)
Energy Technology Data Exchange (ETDEWEB)
Bailey, David H.
2009-11-15
The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental
Energy Technology Data Exchange (ETDEWEB)
Aggery, A
1999-12-01
The objective of this thesis is to give the multigroup transport code APOLLO2 the capability to perform deterministic reference calculations, for any type of reactor, using a very fine energy mesh of several thousand groups. This new reference tool allows us to validate the self-shielding model used in industrial applications, to perform depletion calculations, differential-effect calculations and critical buckling calculations, and to evaluate precisely the data required by the self-shielding model. APOLLO2 was originally designed for routine calculations with energy meshes of around one hundred groups, which is why, in the current format of the cross-section libraries, almost every value of the multigroup energy transfer matrix is stored. As this format is impractical for a large number of groups (in terms of memory size), we had to devise a new format for the removal matrices and modify the code accordingly. In the new format, only some values of the removal matrices are kept (depending on a chosen reconstruction precision), the others being reconstructed by linear interpolation, which reduces the size of these matrices. We then showed that APOLLO2, working with a fine multigroup mesh, can perform reference calculations on any assembly geometry. For that, we successfully carried out validation with several calculations in which APOLLO2 results (obtained with the universal mesh of 11276 groups) were compared to results obtained with Monte Carlo codes (MCNP, TRIPOLI4). Physics analyses carried out with this new tool have been very fruitful and show its great potential as an R and D tool. (author)
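The compression idea described in the thesis abstract above, keeping only a subset of matrix values and reconstructing the rest by linear interpolation, can be sketched as follows. This is an assumed, simplified illustration; APOLLO2's actual library format is not specified here, and the fixed keep-every-k rule stands in for the precision-driven selection the abstract mentions.

```python
# Keep only a subset of a transfer-matrix row; reconstruct the dropped
# values by linear interpolation between the retained neighbours.

def compress(row, keep_every=4):
    """Keep every keep_every-th value (plus the last one). Returns (indices, values)."""
    idx = list(range(0, len(row), keep_every))
    if idx[-1] != len(row) - 1:
        idx.append(len(row) - 1)
    return idx, [row[i] for i in idx]

def reconstruct(idx, vals, n):
    """Rebuild a full row of length n by linear interpolation between kept points."""
    out = [0.0] * n
    for (i0, v0), (i1, v1) in zip(zip(idx, vals), zip(idx[1:], vals[1:])):
        for i in range(i0, i1 + 1):
            t = (i - i0) / (i1 - i0)
            out[i] = v0 + t * (v1 - v0)
    return out

row = [float(i) for i in range(9)]  # linear data reconstructs exactly
idx, vals = compress(row)
print(reconstruct(idx, vals, len(row)) == row)  # True
```

In the real code the set of retained indices would be chosen per row so that the interpolation error stays below the user's reconstruction precision, trading memory against accuracy.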
Izadi, Hoda; Grundy, Jean E; Bose, Ranjan
2012-05-01
Repeated-dose studies received by the New Substances Assessment and Control Bureau (NSACB) of Health Canada are used to provide hazard information for risk calculation. These studies provide a point of departure (POD), traditionally the NOAEL or LOAEL, which is used to extrapolate the quantity of substance above which adverse effects can be expected in humans. This project explored the use of benchmark dose (BMD) modeling as an alternative to this approach for studies with few dose groups. Continuous data from oral repeated-dose studies for chemicals previously assessed by NSACB were reanalyzed using the U.S. EPA benchmark dose software (BMDS) to determine the BMD and the BMD 95% lower confidence limit (BMDL(05)) for each endpoint critical to NOAEL or LOAEL determination for each chemical. Endpoint-specific benchmark dose-response levels, indicative of adversity, were consistently applied. An overall BMD and BMDL(05) were calculated for each chemical using the geometric mean. The POD obtained from benchmark analysis was then compared with the traditional toxicity thresholds originally used for risk assessment. The BMD and BMDL(05) were generally higher than the NOAEL but lower than the LOAEL, and the BMDL(05) was generally about 57% of the BMD. The benchmark approach provided a clear advantage in health risk assessment when a LOAEL was the only POD identified, or when dose groups were widely spaced. Although the benchmark method cannot always be applied, in the selected studies with few dose groups it provided a more accurate estimate of the real no-adverse-effect level of a substance.
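The geometric-mean aggregation step described in the abstract above is straightforward to sketch. The per-endpoint BMD and BMDL(05) values below are hypothetical, chosen only to illustrate the calculation; the real values come from BMDS fits to study data.

```python
import math

def geometric_mean(values):
    """Geometric mean via the mean of logs (all values must be positive)."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-endpoint results (mg/kg bw/day) for one chemical:
bmds = [12.0, 30.0, 18.0]
bmdls = [7.0, 17.0, 10.0]

overall_bmd = geometric_mean(bmds)
overall_bmdl = geometric_mean(bmdls)
print(round(overall_bmdl / overall_bmd, 2))  # 0.57 for this illustrative data
```

Aggregating on the log scale is the natural choice here because dose-response quantities span orders of magnitude and their uncertainties are roughly multiplicative.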
Validation of the MORET 5 code for criticality safety applications
International Nuclear Information System (INIS)
The MORET-5 Monte Carlo code includes two calculation routes: a multi-group route based on cross sections calculated with various cell codes such as APOLLO2, DRAGON4 or SCALE, and a continuous-energy calculation route. The validation of the MORET-5 code is done through the comparison between the calculated benchmark k(eff) and the experimental benchmark k(eff). If the discrepancy between these two k(eff) values is higher than the combined standard deviation of the benchmark uncertainty and the Monte Carlo standard deviation, a bias can be identified. The criticality experimental validation database is made up of 2255 benchmarks. Concerning the multi-group approach, the present work deals only with the APOLLO2 - MORET-5 route. The APOLLO2 cell code uses a 281-energy-group structure library based on JEFF3.1. Preliminary analyses have shown that the continuous-energy route using the JEFF3.1 or ENDF/B-VII.0 libraries is in good agreement with the experimental k(eff) in the majority of cases. Regarding the APOLLO2 - MORET-5 calculation route, some improvements are still needed, especially concerning the multi-group treatment
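The bias-identification criterion stated above can be written down directly. A minimal sketch in Python, combining the two uncertainties in quadrature (the function name and the example numbers are illustrative):

```python
import math

def keff_bias(k_calc, sigma_mc, k_exp, sigma_bench):
    """Validation criterion described above: a bias is identified when the
    calculated-vs-benchmark discrepancy exceeds the combined (quadrature)
    standard deviation of the Monte Carlo and benchmark uncertainties."""
    delta = k_calc - k_exp
    sigma = math.sqrt(sigma_mc ** 2 + sigma_bench ** 2)
    return delta, sigma, abs(delta) > sigma
```

For example, a calculated k(eff) of 1.0040 ± 0.0005 against an experimental 1.0000 ± 0.0012 gives a discrepancy of 400 pcm against a combined sigma of 130 pcm, so a bias would be flagged.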
Measurement, Standards, and Peer Benchmarking: One Hospital's Journey.
Martin, Brian S
2016-04-01
Peer-to-peer benchmarking is an important component of rapid-cycle performance improvement in patient safety and quality-improvement efforts. Institutions should carefully examine critical success factors before engagement in peer-to-peer benchmarking in order to maximize growth and change opportunities. Solutions for Patient Safety has proven to be a high-yield engagement for Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center, with measureable improvement in both organizational process and culture.
Monte Carlo Simulation of the TRIGA Mark II Benchmark Experiment with Burned Fuel
International Nuclear Information System (INIS)
Monte Carlo calculations of a criticality experiment with burned fuel on the TRIGA Mark II research reactor are presented. The main objective was to incorporate burned fuel composition calculated with the WIMSD4 deterministic code into the MCNP4B Monte Carlo code and compare the calculated keff with the measurements. The criticality experiment was performed in 1998 at the "Jozef Stefan" Institute TRIGA Mark II reactor in Ljubljana, Slovenia, with the same fuel elements and loading pattern as in the TRIGA criticality benchmark experiment with fresh fuel performed in 1991. The only difference was that in 1998, the fuel elements had an average burnup of ∼3%, corresponding to 1.3 MWd of energy produced in the core in the period between 1991 and 1998. The fuel element burnup accumulated during 1991-1998 was calculated with TRIGLAV, an in-house-developed two-dimensional multigroup diffusion fuel management code. The burned fuel isotopic composition was calculated with the WIMSD4 code and compared to ORIGEN2 calculations. An extensive comparison of burned fuel material composition was performed for both codes for burnups up to 20% burned 235U, and the differences were evaluated in terms of reactivity. The WIMSD4 and ORIGEN2 results agreed well for all isotopes important in reactivity calculations, giving increased confidence in the WIMSD4 calculation of the burned fuel material composition. The keff calculated with the combined WIMSD4 and MCNP4B calculations showed good agreement with the experimental values. This shows that linking WIMSD4 with MCNP4B for criticality calculations with burned fuel is feasible and gives reliable results.
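The abstract notes that composition differences were "evaluated in terms of reactivity". One conventional way to do that is to convert each keff into reactivity, ρ = (k − 1)/k, and difference the results. A minimal sketch in Python; the numbers in the test are illustrative, not from the paper:

```python
def reactivity_pcm(k):
    """Reactivity (in pcm) corresponding to a multiplication factor k."""
    return (k - 1.0) / k * 1.0e5

def composition_difference_pcm(k_a, k_b):
    """Reactivity difference between two keff values, e.g. from cores built
    with WIMSD4- vs ORIGEN2-based burned-fuel compositions (hypothetical
    usage; the paper's exact comparison procedure is not reproduced here)."""
    return reactivity_pcm(k_a) - reactivity_pcm(k_b)
```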
SINBAD: Shielding integral benchmark archive and database
International Nuclear Information System (INIS)
SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity
Energy Technology Data Exchange (ETDEWEB)
Bowman, S.M. [Oak Ridge National Lab., TN (United States); Suto, T. [Power Reactor and Nuclear Fuel Development Corp., Tokyo (Japan); Oak Ridge National Lab., TN (United States)]
1996-10-01
ANSI/ANS 8.1 requires that calculational methods for away-from-reactor (AFR) criticality safety analyses be validated against experiment. This report summarizes part of the ongoing effort to benchmark AFR criticality analysis methods using selected critical configurations from commercial PWRs. Codes and data in the SCALE-4 code system were used. This volume documents the SCALE system analysis of one reactor critical configuration for North Anna Unit 1 Cycle 5. The KENO V.a criticality calculations for the North Anna 1 Cycle 5 beginning-of-cycle model yielded a value for k{sub eff} of 1.0040{+-}0.0005.
Hoppszallern, S
2001-01-01
Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.
Directory of Open Access Journals (Sweden)
Nicola Castoldi
2011-02-01
Agro-ecological indicators (AEIs) allow evaluating sustainability for a large number of farms. The SITPAS Information System, developed for the agricultural park “Parco Agricolo Sud Milano” (northern Italy), contains detailed farming- and cropping-system information for 731 farms that can be used for these analyses. We used the SITPAS database to evaluate N management with an AEI and to evaluate the suitability of the SITPAS data model for this type of application. The AEI (soil surface N balance) was calculated for each crop at field scale as the difference between the sum of N inputs (atmospheric depositions, biological fixation, fertilisers, residues from the previous crop) and crop N uptake; the results were aggregated at rotation and farm levels. The farming systems with the highest surplus (> 300 kg N ha-1) are dairy, cattle and pig farms, in which chemical N fertilisers are used in addition to animal manures. The crops with the highest surplus are Italian ryegrass and maize (183 and 172 kg N ha-1, respectively), while rice and wheat have the lowest surplus (87 and 85 kg N ha-1). The data model allowed us to store and analyse complex information not manageable otherwise; its main limitations were its excessive flexibility, which required a complicated procedure for the calculations of this example, and the exclusion of most data at the farming-systems level (corresponding to 82% of the studied area) for missing, incomplete, out-of-range or inconsistent data. These results suggest promoting actions towards better N management in cropping systems in the Park, and developing simple data models based on minimum data requirements when sustainability evaluations are to be conducted.
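The soil surface N balance defined above (inputs minus crop uptake, then area-weighted aggregation) can be sketched directly. A minimal Python illustration; the field-scale input values are hypothetical, chosen only to produce a maize-like surplus:

```python
def n_balance(depositions, fixation, fertilisers, residues, uptake):
    """Soil surface N balance for one crop (kg N per ha):
    sum of N inputs minus crop N uptake."""
    return depositions + fixation + fertilisers + residues - uptake

def farm_surplus(fields):
    """Aggregate field-scale surpluses to farm level, weighting by area.
    `fields` is a list of (surplus_kg_n_per_ha, area_ha) pairs."""
    total_area = sum(area for _, area in fields)
    return sum(s * area for s, area in fields) / total_area

# hypothetical field-scale values for a maize crop (kg N/ha)
maize_surplus = n_balance(depositions=20, fixation=0, fertilisers=280,
                          residues=30, uptake=158)
```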
Effects of exposure imprecision on estimation of the benchmark dose
DEFF Research Database (Denmark)
Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe
2004-01-01
In regression analysis, failure to adjust for imprecision in the exposure variable is likely to lead to underestimation of the exposure effect. However, the consequences of exposure error for the determination of safe doses of toxic substances have so far not received much attention. If exposure is measured imprecisely, then the benchmark approach produces results that are biased toward higher and less protective levels. It is therefore important to take exposure measurement error into account when calculating benchmark doses. Methods that allow this adjustment are described and illustrated with data from an epidemiological study...
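The direction of the bias described above follows from classical measurement-error theory: error in the exposure attenuates the estimated slope, and a flatter dose-response pushes the estimated benchmark dose upward. A minimal sketch in Python, assuming a linear dose-response (the function names and variance values are illustrative):

```python
def reliability_ratio(var_true, var_error):
    """Classical measurement error attenuates a regression slope by the
    reliability ratio var_x / (var_x + var_u)."""
    return var_true / (var_true + var_error)

def naive_bmd(true_bmd, var_true, var_error):
    """For a linear dose-response, a slope attenuated by a factor `lam`
    inflates the estimated benchmark dose by 1/lam: the naive BMD is
    biased toward higher, less protective levels (illustrative sketch)."""
    lam = reliability_ratio(var_true, var_error)
    return true_bmd / lam
```

With equal signal and error variance the slope is halved, so the naive BMD doubles.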
NASA Software Engineering Benchmarking Study
Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.
2013-01-01
was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 
HPC Benchmark Suite NMx Project
National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...
ENDF/B-V, LIB-V, and the CSEWG benchmarks
International Nuclear Information System (INIS)
A 70-group library, LIB-V, generated with the NJOY processing code from ENDF/B-V, is tested on most of the Cross Section Evaluation Working Group (CSEWG) fast reactor benchmarks. Every experimental measurement reported in the benchmark specifications is compared to both diffusion theory and transport theory calculations. Several comparisons with prior benchmark calculations attempt to assess the effects of data and code improvements
Lecture Notes on Criticality Safety Validation Using MCNP & Whisper
International Nuclear Information System (INIS)
Training classes for nuclear criticality safety, MCNP documentation. The need for, and problems surrounding, validation of computer codes and data are considered first. Then some background for MCNP & Whisper is given: best practices for Monte Carlo criticality calculations, neutron spectra, S(α,β) thermal neutron scattering data, nuclear data sensitivities, covariance data, and correlation coefficients. Whisper is computational software designed to assist the nuclear criticality safety analyst with validation studies with the Monte Carlo radiation transport package MCNP. Whisper's methodology (benchmark selection via ck's and weights; extreme value theory for bias and bias uncertainty; a margin of subcriticality for nuclear data uncertainty via GLLS) and usage are discussed.
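The "benchmark selection - ck's, weights" step mentioned above assigns each benchmark a weight based on its sensitivity-based correlation ck with the application being validated. A hypothetical sketch in Python of one such scheme (zero weight below a threshold, linear up to 1); the exact scheme Whisper uses may differ:

```python
def benchmark_weight(ck, ck_min=0.8):
    """Hypothetical correlation-based benchmark weighting: benchmarks whose
    correlation ck with the application falls at or below ck_min get zero
    weight; above it, the weight rises linearly to 1 at ck = 1."""
    if ck <= ck_min:
        return 0.0
    return (ck - ck_min) / (1.0 - ck_min)
```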
Energy Technology Data Exchange (ETDEWEB)
Forestier, Benoit; Miss, Joachim; Bernard, Franck; Dorval, Aurelien [Institut de Radioprotection et Surete Nucleaire, Fontenay aux Roses (France); Jacquet, Olivier [Independent consultant (France); Verboomen, Bernard [Belgian Nuclear Research Center - SCK-CEN (Belgium)
2008-07-01
The MORET code is a three-dimensional Monte Carlo criticality code. It is designed to calculate the effective multiplication factor (k{sub eff}) of any geometrical configuration, as well as the reaction rates in the various volumes and the neutron leakage out of the system. A recent development for the MORET code is the implementation of an alternate neutron tracking method, known as the pseudo-scattering tracking method. This method has been successfully implemented in the MORET code and its performance has been tested by means of an extensive parametric study on very simple geometrical configurations. In this context, the goal of the present work is to validate the pseudo-scattering method against realistic configurations. In this perspective, pebble-bed cores are particularly well-adapted cases to model, as they exhibit a large number of volumes stochastically arranged on two different levels (the pebbles in the core and the TRISO particles inside each pebble). This paper will introduce the techniques and methods used to model pebble-bed cores in a realistic way. The results of the criticality calculations, as well as the pseudo-scattering tracking method's performance in terms of computation time, will also be presented. (authors)
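Pseudo-scattering (also called Woodcock or delta-tracking) replaces surface tracking with flights sampled against a majorant cross section, accepting each tentative collision with probability σt(x)/σmaj and otherwise continuing (a "virtual" collision). A minimal 1-D illustration in Python; this is a generic sketch, not MORET's implementation:

```python
import math
import random

def delta_track(x0, sigma_t, sigma_maj, rng):
    """Sample the position of the next *real* collision in a heterogeneous
    1-D medium using pseudo-scattering (Woodcock) tracking."""
    x = x0
    while True:
        # flight length sampled with the majorant cross section
        x += -math.log(1.0 - rng.random()) / sigma_maj
        # accept as a real collision with probability sigma_t(x)/sigma_maj
        if rng.random() < sigma_t(x) / sigma_maj:
            return x
```

In a homogeneous medium where σt equals the majorant, every tentative collision is real and flight lengths are simply exponential with mean 1/σt, which makes a convenient sanity check.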
DEFF Research Database (Denmark)
Agrell, Per J.; Bogetoft, Peter
Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...
A PWR Thorium Pin Cell Burnup Benchmark
Energy Technology Data Exchange (ETDEWEB)
Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.
2000-05-01
As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2), and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalue and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code vs code differences are analyzed and discussed.
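The eigenvalue comparison quoted above (maximum difference within 2%, average absolute difference under 1%) reduces to two summary statistics over the burnup history. A minimal sketch in Python; the keff values in the test are illustrative:

```python
def eigenvalue_comparison(k_code_a, k_code_b):
    """Maximum and mean absolute relative difference (%) between two codes'
    eigenvalues over a burnup history, taking code B as the reference."""
    diffs = [100.0 * (a - b) / b for a, b in zip(k_code_a, k_code_b)]
    return max(abs(d) for d in diffs), sum(abs(d) for d in diffs) / len(diffs)
```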
Benchmarking foreign electronics technologies
Energy Technology Data Exchange (ETDEWEB)
Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.
1994-12-01
This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.
Benchmarking monthly homogenization algorithms
Directory of Open Access Journals (Sweden)
V. K. C. Venema
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.
Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates, and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve...
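The first two performance metrics listed above can be sketched compactly: the centered RMSE removes each series' own mean before comparing, and the trend error differences the fitted linear slopes. A minimal Python illustration (function names are mine, not HOME's):

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centered root-mean-square error, metric (i): subtract each series'
    own mean, then compare point by point."""
    h = homogenized - np.mean(homogenized)
    t = truth - np.mean(truth)
    return float(np.sqrt(np.mean((h - t) ** 2)))

def trend_error(series, truth, times):
    """Error in the linear trend estimate, metric (ii): difference of
    least-squares slopes."""
    slope_s = np.polyfit(times, series, 1)[0]
    slope_t = np.polyfit(times, truth, 1)[0]
    return float(slope_s - slope_t)
```

A constant offset between a homogenized series and the truth contributes nothing to the centered RMSE, which is the point of centering.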
Applicability of ZPR critical experiment data to criticality safety
International Nuclear Information System (INIS)
More than a hundred zero power reactor (ZPR) critical assemblies were constructed, over a period of about three decades, at the Argonne National Laboratory ZPR-3, ZPR-6, ZPR-9 and ZPPR fast critical assembly facilities. To be sure, the original reason for performing these critical experiments was to support fast reactor development. Nevertheless, data from some of the assemblies are well suited to form the basis for valuable, new criticality safety benchmarks. The purpose of this paper is to describe the ZPR data that would be of benefit to the criticality safety community and to explain how these data could be developed into practical criticality safety benchmarks
INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP
Energy Technology Data Exchange (ETDEWEB)
J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori
2007-04-01
The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported in a nuclear data conference at the International Conference on Nuclear Data for Science and Technology, ND-2004, in Santa Fe, New Mexico. Since that time the number and type of integral benchmarks have increased significantly. Included in the ICSBEP Handbook are criticality-alarm/shielding and fundamental physics benchmarks in addition to the traditional critical/subcritical benchmark data. Since ND-2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. The IRPhEP is patterned after the ICSBEP but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous measurements in addition to the critical configuration. The status of these two projects is discussed and selected benchmarks are highlighted in this paper.
Benchmark exercise on SBLOCA experiment of PWR PACTEL facility
International Nuclear Information System (INIS)
Highlights: • PWR PACTEL, the facility with EPR type steam generators, is introduced. • The focus of the benchmark was on the analyses of the SBLOCA test with PWR PACTEL. • System codes with several modeling approaches were utilized to analyze the test. • Proper consideration of heat and pressure losses improves simulation remarkably. - Abstract: The PWR PACTEL benchmark exercise was organized in Lappeenranta, Finland by Lappeenranta University of Technology. The benchmark consisted of two phases, i.e. a blind and an open calculation task. Seven organizations from the Czech Republic, Germany, Italy, Sweden and Finland participated in the benchmark exercise, and four system codes were utilized in the benchmark simulation tasks. Two workshops were organized for launching and concluding the benchmark, the latter of which involved presentations of the calculation results as well as discussions on the related modeling issues. The chosen experiment for the benchmark was a small break loss of coolant accident experiment which was performed to study the natural circulation behavior over a continuous range of primary side coolant inventories. For the blind calculation task, the detailed facility descriptions, the measured pressure and heat losses as well as the results of a short characterizing transient were provided. For the open calculation task part, the experiment results were released. According to the simulation results, the benchmark experiment was quite challenging to model. Several improvements were found and utilized especially for the open calculation case. The issues concerned model construction, heat and pressure losses impact, interpreting measured and calculated data, non-condensable gas effect, testing several condensation and CCFL correlations, sensitivity studies, as well as break modeling. There is a clear need for user guidelines or for a collection of best practices in modeling for every code. The benchmark offered a unique opportunity to test
Benchmark simulation models, quo vadis?
DEFF Research Database (Denmark)
Jeppsson, U.; Alex, J; Batstone, D. J.;
2013-01-01
As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...
Specification for the VERA Depletion Benchmark Suite
Energy Technology Data Exchange (ETDEWEB)
Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-12-17
The CASL neutronics simulator MPACT is under development for neutronics and T-H coupled simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One alternative is to perform a code-to-code comparison for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline provided to obtain meaningful computational outcomes that can be used in the validation of the MPACT depletion capability.
Quantum benchmarks for Gaussian states
Chiribella, Giulio
2014-01-01
Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments.
Directory of Open Access Journals (Sweden)
E. De Waele
2012-01-01
Nutrition is essential in critically ill patients, but translating caloric prescriptions into adequate caloric intake remains challenging. Caloric prescriptions (P), effective intake (I), and caloric needs (N), calculated with modified Harris-Benedict formulas, were recorded during seven consecutive days in ventilated patients. Adequacy of prescription was estimated by the P/N ratio. The I/P ratio assessed the accuracy of translating a prescription into administered feeding. The I/N ratio compared delivered calories with theoretical caloric needs. Fifty patients were prospectively studied in a mixed medico-surgical ICU in a teaching hospital. Basal and total energy expenditure were, respectively, 1361±171 kcal/d and 1649±233 kcal/d. P and I attained 1536±602 kcal/d and 1424±572 kcal/d, respectively. Only 24.6% of prescriptions were accurate, and 24.3% of calories were correctly administered. Excessive calories were prescribed in 35.4% of patients, 27.4% being overfed. Caloric needs were underestimated in 40% of prescriptions, with 48.3% of patients underfed. Calculating caloric requirements by a modified standard formula covered energy needs in only 25% of long-term mechanically ventilated patients, leaving many over- or underfed. Nutritional imbalance mainly resulted from incorrect prescription. The failure of "simple" calculations to direct caloric prescription in these patients suggests systematic use of more reliable methods, for example, indirect calorimetry.
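The three ratios defined in this abstract are simple quotients of the recorded quantities. A minimal sketch in Python using the study's reported mean values (taking N as the total energy expenditure; the function name is mine):

```python
def caloric_ratios(prescribed, intake, needs):
    """The three ratios used in the study: P/N (adequacy of prescription),
    I/P (accuracy of delivery), and I/N (delivered vs. theoretical needs)."""
    return prescribed / needs, intake / prescribed, intake / needs

# mean values reported above (kcal/day): P = 1536, I = 1424,
# N taken here as the total energy expenditure of 1649
p_n, i_p, i_n = caloric_ratios(1536.0, 1424.0, 1649.0)
```

On these means the cohort looks only mildly underfed (I/N ≈ 0.86), which illustrates why the per-patient accuracy figures in the abstract matter more than the averages.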
Detailed Burnup Calculations for Testing Nuclear Data
Leszczynski, F.
2005-05-01
-section data for burnup calculations, using some of the main available evaluated nuclear data files (ENDF/B-VI Rel. 8, JEFF-3.0, JENDL-3.3), on an isotope-by-isotope basis as much as possible. The selected experimental burnup benchmarks are reference cases for LWR and HWR reactors, with analysis of isotopic composition as a function of burnup. For LWR (H2O-moderated uranium oxide lattices), four benchmarks are included: the ATM-104 NEA burnup credit criticality benchmark, Yankee-Rowe Core V, H.B. Robinson Unit 2, and Turkey Point Unit 3. For HWR (D2O-moderated uranium oxide cluster lattices), three benchmarks were selected: NPD 19-rod fuel clusters, Pickering 28-rod fuel clusters, and Bruce 37-rod fuel clusters. The isotopes with experimental concentration data included in these benchmarks are: Se-79, Sr-90, Tc-99, Ru-106, Sn-126, Sb-125, I-129, Cs-133-137, Nd-143, 145, Sm-149-150, 152, Eu-153-155, U-234-235, 238, Np-237, Pu-238-242, Am-241-243, and Cm-242-248. Results and an analysis of the differences between calculated and measured absolute and/or relative concentrations of these isotopes for the seven benchmarks are included in this work.
Institute of Scientific and Technical Information of China (English)
李静; 宋婧; 龙鹏程; 刘鸿飞; 江平
2015-01-01
Background: Reactor simulations based on the Monte Carlo particle-transport method, such as fission reactors and fusion-fission hybrid reactors, require large amounts of computational time to reach acceptable statistical errors; this has become one of the challenges of the Monte Carlo method and must be addressed with parallel computing. Purpose: This paper presents an efficient parallel computing method that resolves the communication-deadlock and load-balancing problems of existing methods. Methods: A parallel criticality-calculation algorithm based on bi-directional traversal was implemented in the Super Monte Carlo simulation program for nuclear and radiation processes (SuperMC). The pool-type sodium-cooled fast reactor BN600 benchmark model was used for verification, with MCNP as the reference for comparison. Results: The serial and parallel calculations were in agreement with each other. Conclusion: The parallel efficiency of SuperMC is higher than that of MCNP, which demonstrates the accuracy and efficiency of the parallel computing method.
BENCHMARKING ON-LINE SERVICES INDUSTRIES
Institute of Scientific and Technical Information of China (English)
John HAMILTON
2006-01-01
The Web Quality Analyser (WQA) is a new benchmarking tool for industry. It has been extensively tested across services industries. Forty-five critical success features are presented as measures that capture the user's perception of services industry websites. This tool differs from previous tools in that it captures the information technology (IT) related driver sectors of website performance, along with the marketing-services related driver sectors. These driver sectors capture relevant structure, function and performance components. An 'on-off' switch measurement approach determines each component. Relevant component measures scale into a relative presence of the applicable feature, with a feature block delivering one of the sector drivers. Although it houses both measurable and a few subjective components, the WQA offers a proven and useful means to compare relevant websites. The WQA defines website strengths and weaknesses, thereby allowing for corrections to the website structure of the specific business. WQA benchmarking against services related business competitors delivers a position on the WQA index, facilitates specific website driver rating comparisons, and demonstrates where key competitive advantage may reside. This paper reports on the marketing-services driver sectors of this new benchmarking WQA tool.
Integral Benchmark Data for Nuclear Data Testing Through the ICSBEP & IRPhEP
Energy Technology Data Exchange (ETDEWEB)
J. Blair Briggs; John D. Bess; Jim Gulliford; Ian Hill
2013-10-01
The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the nuclear data community at ND2007. Since ND2007, integral benchmark data that are available for nuclear data testing have increased significantly. The status of the ICSBEP and the IRPhEP is discussed and selected benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2007 are highlighted.
Benchmark Solutions for Computational Aeroacoustics (CAA) Code Validation
Scott, James R.
2004-01-01
NASA has conducted a series of Computational Aeroacoustics (CAA) Workshops on Benchmark Problems to develop a set of realistic CAA problems that can be used for code validation. In the Third (1999) and Fourth (2003) Workshops, the single airfoil gust response problem, with real geometry effects, was included as one of the benchmark problems. Respondents were asked to calculate the airfoil RMS pressure and far-field acoustic intensity for different airfoil geometries and a wide range of gust frequencies. This paper presents the validated solutions that have been obtained for the benchmark problem and, in addition, compares them with classical flat plate results. It is seen that airfoil geometry has a strong effect on the airfoil unsteady pressure, and a significant effect on the far-field acoustic intensity. Those parts of the benchmark problem that have not yet been adequately solved are identified and presented as a challenge to the CAA research community.
Benchmarking biofuels; Biobrandstoffen benchmarken
Energy Technology Data Exchange (ETDEWEB)
Croezen, H.; Kampman, B.; Bergsma, G.
2012-03-15
A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.
Institute of Scientific and Technical Information of China (English)
杨能勋; 金佛荣
2011-01-01
类镁离子在天体物理、等离子体物理等研究领域的研究中都有非常重要的作用.在多组态Dirac-Fock方法的框架下,考虑相对论多组态相互作用,计算出了类镁等电子数系列3s2-3s3p跃迁的各种原子结构参数,其中考虑了价电子与价电子、价电子与原子实内的电子以及原子实内电子之间的关联效应.在计算中,初态和末态不能相同,还要独立,跃迁矩阵元的赋值通过一种高效的转换方法来实现.计算所得的能级分离和跃迁几率与最近的实验数据符合得相当好,原子实内电子与价电子的关联效应与其它理论和实验值也符合得非常好.所得的大量计算结果还为以后的实验工作提供一些理论上的参考.%Results from valence-valence, core-valence and core-core multiconfiguration Dirac-Fock( MCDF) and relativistic config-uration interaction calculations (RCI) including the Breit interaction are presented for the 3s2-3s3p transition in the Mg isoelectronic sequence. In the calculations the orbital sets of the initial-state and final-state wavefunctions are not restricted to be the same, but are independently. The evaluation of the transition matrix elements is done with an efficient transformation technique. The calculated ener-gy separations and transition probabilities are found to be in good agreement with experiment and consist with other recent calculations. The calculated values including core-valence correlation are found to be similar and agree very well with other theoretical and experi-mental values. The extensive calculated values may be useful in indentifying the fine-structure levels in the experiments.
van der Sluijs, Jeroen P.; Arjan Wardekker, J.
2015-04-01
In order to enable anticipation and proactive adaptation, local decision makers increasingly seek detailed foresight about regional and local impacts of climate change. To this end, the Netherlands Models and Data-Centre implemented a pilot chain of sequentially linked models to project local climate impacts on hydrology, agriculture and nature under different national climate scenarios for a small region in the east of the Netherlands named Baakse Beek. The chain of models sequentially linked in that pilot includes a (future) weather generator and models of, respectively, subsurface hydrogeology, ground water stocks and flows, soil chemistry, vegetation development, crop yield and nature quality. These models typically have mismatching time step sizes and grid cell sizes. The linking of these models unavoidably involves the making of model assumptions that can hardly be validated, such as those needed to bridge the mismatches in spatial and temporal scales. Here we present and apply a method for the systematic critical appraisal of model assumptions that seeks to identify and characterize the weakest assumptions in a model chain. The critical appraisal of assumptions presented in this paper has been carried out ex-post. For the case of the climate impact model chain for Baakse Beek, the three most problematic assumptions were found to be: land use and land management kept constant over time; model linking of (daily) ground water model output to the (yearly) vegetation model around the root zone; and aggregation of daily output of the soil hydrology model into yearly input of a so-called 'mineralization reduction factor' (calculated from annual average soil pH and daily soil hydrology) in the soil chemistry model. Overall, the method for critical appraisal of model assumptions presented and tested in this paper yields a rich qualitative insight into model uncertainty and model quality. It promotes reflectivity and learning in the modelling community, and leads to
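The kind of temporal scale-bridging assumption flagged above, collapsing daily model output into a single yearly input for the next model in the chain, can be sketched as follows. This is purely illustrative: the variable names and value ranges are invented, not taken from the Baakse Beek models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily output of a soil hydrology model: 10 years of values.
DAYS_PER_YEAR = 365
N_YEARS = 10
daily_moisture = rng.uniform(0.2, 0.4, size=N_YEARS * DAYS_PER_YEAR)

# The linking assumption: collapse each year of daily values into one
# yearly input for the downstream model (here simply the annual mean).
yearly_input = daily_moisture.reshape(N_YEARS, DAYS_PER_YEAR).mean(axis=1)

print(yearly_input.shape)  # (10,)
```

Any within-year variability, such as short dry spells that matter to vegetation, is lost in this aggregation, which is exactly why the appraisal singled such steps out as weak assumptions.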
California commercial building energy benchmarking
Energy Technology Data Exchange (ETDEWEB)
Kinney, Satkartar; Piette, Mary Ann
2003-07-01
Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
Benchmarking in water project analysis
Griffin, Ronald C.
2008-11-01
The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.
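The contrast between the traditional without-project benchmark and the proposed "difference-making benchmark" can be made concrete with a small numeric sketch. The figures below are invented for illustration, not taken from the paper:

```python
# Illustrative annual net benefits (arbitrary units) under three scenarios.
without_project = 10.0    # low "do nothing" baseline
best_alternative = 45.0   # second-best use of the same resources
with_project = 50.0

# Traditional measure: benchmark the project against the without-project case.
traditional_gain = with_project - without_project        # 40.0

# "Difference-making" measure: benchmark against the best alternative.
difference_making_gain = with_project - best_alternative  # 5.0

print(traditional_gain, difference_making_gain)
```

Against the demonstrably low without-project hurdle the project appears to deliver a large gain, even though relative to the best alternative use of the same resources it makes only a small difference, which is the bias the paper examines.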
Current Reactor Physics Benchmark Activities at the Idaho National Laboratory
Energy Technology Data Exchange (ETDEWEB)
John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark
2011-11-01
The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.
A burnup credit calculation methodology for PWR spent fuel transportation
International Nuclear Information System (INIS)
A burnup credit calculation methodology for PWR spent fuel transportation has been developed and validated at CEA/Saclay. To perform the calculation, the spent fuel composition is first determined by the PEPIN-2 depletion analysis. Secondly, the most important actinide and fission-product poisons are automatically selected in PEPIN-2, according to their reactivity worth and the burnup, for criticality considerations. Then the 3D Monte Carlo criticality code TRIMARAN-2 is used to examine the subcriticality. All the resonance self-shielded cross sections used in this calculation system are prepared with the APOLLO-2 lattice cell code. The burnup credit calculation methodology and related PWR spent fuel transportation benchmark results are reported and discussed. (authors)
International Nuclear Information System (INIS)
The System of Computerized Analysis for Licensing at Atomic Industry (SCALA) is a Russian analogue of the well-known SCALE system. For criticality evaluations, the ABBN-93 cross-section system is used with TWODANT and with MMKKENO, a Monte Carlo code that joins the American KENO and the Russian MMK codes. Using the same cross sections and input models, all these codes give results that coincide within the statistical uncertainties (for the Monte Carlo codes). Validation of criticality calculations using SCALA was performed using data presented in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. Another task of the work was to test the burnup capability of the SCALA system in complex geometry in comparison with other codes. Benchmark models of VVER-type reactor assemblies with UO2 and MOX fuel, including cases with burnable gadolinium absorbers, were calculated. The KENO-VI and MMK codes were used for power distribution calculations, and the ORIGEN code was used for the isotopic kinetics calculations. (authors)
International Nuclear Information System (INIS)
Critical experiments with water-moderated, single-region PuO2-UO2 or UO2, and multiple-region PuO2-UO2- and UO2-fueled cores were performed at the CRX reactor critical facility at the Westinghouse Reactor Evaluation Center (WREC) at Waltz Mill, Pennsylvania in 1965 [1]. These critical experiments were part of the Saxton Plutonium Program. The mixed oxide (MOX) fuel used in these critical experiments and then loaded in the Saxton reactor contained 6.6 wt% PuO2 in a mixture of PuO2 and natural UO2. The Pu metal had the following isotopic mass percentages: 90.50% 239Pu; 8.57% 240Pu; 0.89% 241Pu; and 0.04% 242Pu. The purpose of these critical experiments was to verify the nuclear design of Saxton partial plutonium cores while obtaining parameters of fundamental significance such as buckling, control rod worth, soluble poison worth, flux, power peaking, relative pin power, and power sharing factors of MOX and UO2 lattices. For comparison purposes, the core was also loaded with uranium dioxide fuel rods only. This series is covered by experiments beginning with the designation SX
BN-600 full MOX core benchmark analysis
International Nuclear Information System (INIS)
As a follow-up of the BN-600 hybrid core benchmark, a full MOX core benchmark was performed within the framework of the IAEA co-ordinated research project. Discrepancies between the values of main reactivity coefficients obtained by the participants for the BN-600 full MOX core benchmark appear to be larger than those in the previous hybrid core benchmarks on traditional core configurations. This arises due to uncertainties in the proper modelling of the axial sodium plenum above the core. It was recognized that the sodium density coefficient strongly depends on the core model configuration of interest (hybrid core vs. fully MOX fuelled core with sodium plenum above the core) in conjunction with the calculation method (diffusion vs. transport theory). The effects of the discrepancies revealed between the participants results on the ULOF and UTOP transient behaviours of the BN-600 full MOX core were investigated in simplified transient analyses. Generally the diffusion approximation predicts more benign consequences for the ULOF accident but more hazardous ones for the UTOP accident when compared with the transport theory results. The heterogeneity effect does not have any significant effect on the simulation of the transient. The comparison of the transient analyses results concluded that the fuel Doppler coefficient and the sodium density coefficient are the two most important coefficients in understanding the ULOF transient behaviour. In particular, the uncertainty in evaluating the sodium density coefficient distribution has the largest impact on the description of reactor dynamics. This is because the maximum sodium temperature rise takes place at the top of the core and in the sodium plenum.
Water Level Superseded Benchmark Sheets
National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations.
Benchmarking and Sustainable Transport Policy
DEFF Research Database (Denmark)
Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy
2004-01-01
In order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for sustainable transport. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all, it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, sustainable transport evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly, policies are not directly comparable across space and context. For these reasons attempting to benchmark sustainable transport policies against one another would be a highly complex task, which
Benchmarks and Quality Assurance for Online Course Development in Higher Education
Wang, Hong
2008-01-01
As online education has entered the main stream of the U.S. higher education, quality assurance in online course development has become a critical topic in distance education. This short article summarizes the major benchmarks related to online course development, listing and comparing the benchmarks of the National Education Association (NEA),…
Research on computer systems benchmarking
Smith, Alan Jay (Principal Investigator)
1996-01-01
This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
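The merging of machine and program characterizations described above amounts, in its simplest form, to combining per-operation times with dynamic operation counts. A minimal sketch with invented numbers (the actual characterizer uses a much richer Fortran abstract machine):

```python
import numpy as np

# Hypothetical per-operation times (seconds) for two machines, expressed
# over a tiny abstract-machine instruction set. Names and values are
# illustrative only.
OPS = ["flop", "mem_load", "branch"]
machine_a = np.array([2e-9, 5e-9, 1e-9])
machine_b = np.array([1e-9, 8e-9, 2e-9])

# Program characterization: dynamic counts of each abstract operation.
program_counts = np.array([4e9, 1e9, 5e8])

# Merging the two characterizations predicts execution time per machine.
t_a = float(program_counts @ machine_a)  # 13.5 s
t_b = float(program_counts @ machine_b)  # 13.0 s

print(t_a, t_b)
```

The same program characterization can thus be reused against any machine characterization, which is what allows execution time to be estimated for arbitrary machine/program combinations.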
Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments
Energy Technology Data Exchange (ETDEWEB)
Pevey, Ronald E.
2005-09-15
Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
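The 1/sqrt(N) scaling behind "the more histories, the lower the standard deviation" can be shown with a toy sketch. This is pure illustration, not a criticality code: the per-history score distribution is an arbitrary stand-in.

```python
import numpy as np

rng = np.random.default_rng(42)

def k_estimate(n_histories: int) -> tuple[float, float]:
    """Toy Monte Carlo: sample per-history 'k' scores and return the
    sample mean and its standard error (std / sqrt(N))."""
    scores = rng.normal(loc=0.95, scale=0.05, size=n_histories)
    return scores.mean(), scores.std(ddof=1) / np.sqrt(n_histories)

_, se_small = k_estimate(1_000)
_, se_large = k_estimate(100_000)

# 100x more histories shrinks the standard error by roughly 10x.
print(se_small / se_large)
```

The paper's point is that this shrinking uncertainty interacts with the extra safety margin added to cover it, so the best choice of target standard deviation is not simply "as small as possible".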
Energy Technology Data Exchange (ETDEWEB)
Reverdy, L
1999-07-01
Nowadays optimization goes with everything, so French engineering firms try to demonstrate that fuel transport casks and storage pools are able to receive assemblies with higher {sup 235}U initial enrichments. The fuel burnup distribution contributes to this demonstration. The aim of this study is to elaborate a way to take credit for burnup effects in criticality safety designs. The calculation codes used are CESAR 4.21, APOLLO 1 and MORET III. The assembly studied (UO{sub 2}) is irradiated in a French pressurized water reactor of the EDF nuclear power plant type: PWR 1300 MWe, 17 x 17 array. Its initial {sup 235}U enrichment is 4.5%. The studies in this report have evaluated the effects of: (i) the 15 fission products considered in burnup credit ({sup 95}Mo, {sup 99}Tc, {sup 101}Ru, {sup 103}Rh, {sup 109}Ag, {sup 133}Cs, {sup 143}Nd, {sup 145}Nd, {sup 147}Sm, {sup 149}Sm, {sup 150}Sm, {sup 151}Sm, {sup 152}Sm, {sup 153}Eu, {sup 155}Gd); (ii) the calculated abundances, corrected or not by fixed factors; (iii) the choice of the cross-section library used by CESAR 4.21; (iv) the number of zones selected in the axial burnup distribution zoning; and (v) the kind of axial cut applied (regular/optimized). Two axial distribution profiles are studied: one with a 44 GWd/t average burnup, the other with a 20 GWd/t average burnup. The second one considers a shallow control-rod insertion at the upper limit of the assembly. The results show a margin in reactivity of about 0.045 when the 6 most absorbent fission products are considered ({sup 103}Rh, {sup 133}Cs, {sup 143}Nd, {sup 149}Sm, {sup 152}Sm, {sup 155}Gd), and about 0.06 for the whole set of burnup credit fission products. These results have been calculated with an average burnup of 44 GWd/t. In a conservative approach, corrective factors must be applied to the abundances of some fission products. The cross-section library used by CESAR 4.21 (BBL 4) is sufficient and gives satisfactory results. The zoning of the assembly axial burnup distribution in 9
Energy Technology Data Exchange (ETDEWEB)
Poullot, G.; Dumont, V.; Anno, J.; Cousinou, P. [Institut de Radioprotection et de Surete Nucleaire (IRSN), 92 - Fontenay aux Roses (France); Grivot, P.; Girault, E.; Fouillaud, P.; Barbry, F. [CEA Valduc, 21 - Is-sur-Tille (France)
2003-07-01
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) aims to supply the international community with criticality benchmark experiments of certified quality, used to guarantee the qualification of criticality calculation codes. The following have been defined: a structure for classifying experiments, a standard presentation format, and a working procedure involving evaluation, internal and external checks, and presentation in plenary session. After a favourable opinion of the working group, the synthesis document, called an evaluation, is integrated into the general ICSBEP report.
Energy Technology Data Exchange (ETDEWEB)
John D. Bess; J. Blair Briggs; Jim Gulliford; Ian Hill
2014-10-01
The International Reactor Physics Experiment Evaluation Project (IRPhEP) is a widely recognized world-class program. The work of the IRPhEP is documented in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Integral data from the IRPhEP Handbook are used by reactor safety and design, nuclear data, criticality safety, and analytical methods development specialists worldwide to perform necessary validations of their calculational techniques. The IRPhEP Handbook is among the most frequently quoted references in the nuclear industry and is expected to be a valuable resource for future decades.
Climate Benchmark Missions: CLARREO
Wielicki, Bruce A.; Young, David F.
2010-01-01
CLARREO (Climate Absolute Radiance and Refractivity Observatory) is one of the four Tier 1 missions recommended by the recent NRC decadal survey report on Earth Science and Applications from Space (NRC, 2007). The CLARREO mission addresses the need to rigorously observe climate change on decade time scales and to use decadal change observations as the most critical method to determine the accuracy of climate change projections such as those used in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4). A rigorously known accuracy of both decadal change observations as well as climate projections is critical in order to enable sound policy decisions. The CLARREO mission accomplishes this critical objective through highly accurate and SI traceable decadal change observations sensitive to many of the key uncertainties in climate radiative forcings, responses, and feedbacks that in turn drive uncertainty in current climate model projections. The same uncertainties also lead to uncertainty in attribution of climate change to anthropogenic forcing. The CLARREO breakthrough in decadal climate change observations is to achieve the required levels of accuracy and traceability to SI standards for a set of observations sensitive to a wide range of key decadal change variables. These accuracy levels are determined both by the projected decadal changes as well as by the background natural variability that such signals must be detected against. The accuracy for decadal change traceability to SI standards includes uncertainties of calibration, sampling, and analysis methods. Unlike most other missions, all of the CLARREO requirements are judged not by instantaneous accuracy, but instead by accuracy in large time/space scale average decadal changes. Given the focus on decadal climate change, the NRC Decadal Survey concluded that the single most critical issue for decadal change observations was their lack of accuracy and low confidence in
Computational methods for nuclear criticality safety analysis
International Nuclear Information System (INIS)
Nuclear criticality safety analyses require the utilization of methods that have been tested and verified against benchmark results. In this work, criticality calculations based on the KENO-IV and MCNP codes are studied with the aim of qualifying these methods at IPEN-CNEN/SP and COPESP. The utilization of variance reduction techniques is important to reduce computer execution time, and several of them are analysed. As a practical example of the above methods, a criticality safety analysis has been carried out for the storage tubes for irradiated fuel elements from the IEA-R1 research reactor. This analysis showed that the MCNP code is more adequate for problems with complex geometries, while the KENO-IV code gives conservative results when the generalized geometry option is not used. (author)
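One classic variance reduction technique weighed in such analyses is implicit capture (survival biasing). The toy 1D slab sketch below compares it with the analog game for estimating transmission through an absorbing, forward-scattering-only slab; the cross sections are invented for illustration and this does not represent KENO-IV or MCNP internals.

```python
import numpy as np

rng = np.random.default_rng(7)

SIG_T, SIG_A, THICKNESS = 1.0, 0.3, 2.0   # macroscopic XS (1/cm), slab (cm)
SIG_S = SIG_T - SIG_A
N = 20_000
exact = np.exp(-SIG_A * THICKNESS)         # exact transmission for this toy model

def analog() -> np.ndarray:
    """Analog game: at each collision the particle is absorbed with
    probability SIG_A/SIG_T; score 1 if it crosses the slab."""
    scores = np.zeros(N)
    for i in range(N):
        x = 0.0
        while True:
            x += rng.exponential(1.0 / SIG_T)   # distance to next collision
            if x >= THICKNESS:
                scores[i] = 1.0                 # transmitted
                break
            if rng.random() < SIG_A / SIG_T:    # absorbed: history ends
                break
    return scores

def implicit_capture() -> np.ndarray:
    """Variance reduction: never absorb; instead multiply the particle
    weight by the survival probability SIG_S/SIG_T at each collision."""
    scores = np.zeros(N)
    for i in range(N):
        x, w = 0.0, 1.0
        while x < THICKNESS:
            x += rng.exponential(1.0 / SIG_T)
            if x < THICKNESS:
                w *= SIG_S / SIG_T
        scores[i] = w                           # score the surviving weight
    return scores

a, ic = analog(), implicit_capture()
print(a.mean(), ic.mean(), exact)  # all close to exp(-0.6) ≈ 0.549
print(a.var(), ic.var())           # implicit capture has lower variance
```

Both estimators are unbiased for the same transmission probability, but the weighted one avoids wasting histories on absorbed particles, which is the general motivation for variance reduction in criticality and shielding calculations.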
Institute of Scientific and Technical Information of China (English)
Wang Li-Guo; Shen Chao; Zheng Hou-Zhi; Zhu Hui; Zhao Jian-Hua
2011-01-01
This paper describes an n-i-p-i-n model heterostructure with a manganese (Mn)-doped p-type base region to check the stability of a positively charged manganese A+Mn centre with two holes weakly bound by the negatively charged 3d^5 (Mn) core of local spin S = 5/2, in the framework of the effective mass approximation near the Γ critical point (k ≈ 0). By including the carrier screening effect, the ground-state energy and the binding energy of the second hole in the positively charged A+Mn centre are calculated within a hole concentration range from 1×10^16 cm^-3 to 1×10^17 cm^-3, which is achievable by biasing the structure under photo-excitation. For comparison, the ground-state energy of a single hole in the neutral A0Mn centre is calculated in the same concentration range. It turns out that the binding energy of the second hole in the A+Mn centre varies from 9.27 meV to 4.57 meV. We propose that the presence of the A+Mn centre can be examined by measuring the photoluminescence from recombination of electrons in the conduction band with the bound holes in the A+Mn centre, since a high-frequency dielectric constant of ε∞ = 10.66 can be safely adopted in this case. The novel feature of the ability to tune the impurity level of the A+Mn centre makes it attractive for optically and electrically manipulating local magnetic spins in semiconductors.
Criticality safety analysis for mockup facility
International Nuclear Information System (INIS)
Benchmark calculations with the SCALE 4.4 CSAS6 module have been performed for 31 UO2-fuel, 15 MOX-fuel and 10 metal-material criticality experiments, and the calculation biases of the SCALE 4.4 CSAS6 module were found to be 0.00982, 0.00579 and 0.02347, respectively. When CSAS6 is applied to the criticality safety analysis for the mockup facility, in which several kinds of nuclear material components are included, the calculation bias of CSAS6 is conservatively taken to be 0.02347. With the aid of this benchmarked code system, criticality safety analyses for the mockup facility at normal and hypothetical accidental conditions have been carried out. The maximum Keff at normal conditions is 0.28356, well below the criticality limit of Keff = 0.95. In one hypothetical accidental condition, the maximum Keff is found to be 0.73527, much lower than the subcritical limit. For another hypothetical accidental condition, in which the nuclear material leaks out of its container and spreads or lumps on the floor, it was assumed that the nuclear material is shaped into a slab and that water exists in the empty space of the nuclear material. Keff has been calculated as a function of slab thickness and of the volume ratio of water to nuclear material. The results show that Keff increases as the water volume ratio increases, and that Keff reaches its maximum value when water fills the empty space of the nuclear material. The maximum Keff value is 0.93960, lower than the subcritical limit.
Orifici, Adrian C.; Krueger, Ronald
2010-01-01
With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc(TM) and MD Nastran(TM). Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementation in Marc(TM) and MD Nastran(TM) was capable of accurately replicating the benchmark delamination growth results, and that the use of numerical benchmarks offers advantages over benchmarking with experimental and analytical results.
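The numerical benchmarking approach can be illustrated schematically: critical loads from a series of fixed-delamination-length analyses form the benchmark curve, and the automatic-propagation analysis is judged by its deviation from that curve. All numbers below are invented for illustration; they are not data from the study:

```python
# Hypothetical (delamination length [mm], critical load [N]) pairs.
# "benchmark": series of analyses, each with a fixed delamination length.
# "automatic": single analysis with automatic crack propagation.
benchmark = [(30.0, 62.5), (35.0, 55.1), (40.0, 49.8), (45.0, 45.9)]
automatic = [(30.0, 63.0), (35.0, 54.6), (40.0, 50.2), (45.0, 45.5)]

def max_rel_error(ref, test):
    """Largest relative deviation of the automatic-propagation loads
    from the fixed-length benchmark loads, at matching crack lengths."""
    return max(abs(t - r) / r for (_, r), (_, t) in zip(ref, test))

tol = 0.02  # e.g. accept a 2% deviation from the benchmark curve
print(max_rel_error(benchmark, automatic) <= tol)  # -> True
```

The same comparison works for any specimen type once the two load-versus-length curves are available.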
International Nuclear Information System (INIS)
The computer program INDAR enables detailed estimates to be made of critical group radiation exposure arising from routine discharges of radioactivity for coastal sites where the discharge is close to the shore and the shoreline is reasonably straight, and for estuarine sites where radioactivity is rapidly mixed across the width of the estuary. Important processes which can be taken into account include the turbulence generated by the discharge, the effects of a sloping sea bed and the variation with time of the lateral dispersion coefficient. The significance of the timing of discharges can also be assessed. INDAR uses physically meaningful hydrographic parameters directly. For most sites the most important exposure pathways are seafood consumption, external exposure over estuarine sediments and beaches, and the handling of fishing gear. As well as for these primary pathways, INDAR enables direct calculations to be made for some additional exposure pathways. The secondary pathways considered are seaweed consumption, swimming, the handling of materials other than fishing gear and the inhalation of activity. (author)
Benchmark experiment on vanadium assembly with D-T neutrons. In-situ measurement
Energy Technology Data Exchange (ETDEWEB)
Maekawa, Fujio; Kasugai, Yoshimi; Konno, Chikara; Wada, Masayuki; Oyama, Yukio; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Murata, Isao; Kokooo; Takahashi, Akito
1998-03-01
Fusion neutronics benchmark experimental data on vanadium were obtained for neutrons over almost the entire energy range, as well as for secondary gamma-rays. Benchmark calculations for the experiment were performed to investigate the validity of recent nuclear data files, i.e., JENDL Fusion File, FENDL/E-1.0, and EFF-3. (author)
Benchmark simulation models, quo vadis?
Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D
2013-01-01
As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.
Computed results on the IAEA benchmark problems at JAERI
International Nuclear Information System (INIS)
The outline of the JAERI computer code system for analysing research reactors is presented, and the results of check calculations to validate the code system are evaluated against experimental data. Using this computer code system, some of the IAEA benchmark problems are solved and the results are compared with those of ANL. (author)
VENUS-2 Benchmark Problem Analysis with HELIOS-1.9
Energy Technology Data Exchange (ETDEWEB)
Jeong, Hyeon-Jun; Choe, Jiwon; Lee, Deokjung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)
2014-10-15
Since reliable benchmark results are available in the OECD/NEA report on the VENUS-2 MOX benchmark problem, users can establish the credibility of a code by comparing against them. In this paper, the solution of the VENUS-2 benchmark problem from HELIOS 1.9, using the ENDF/B-VI library (NJOY91.13), is compared with the result from HELIOS 1.7, with the MCNP-4B result taken as reference data. The comparison covers pin cell, assembly, and core calculations, and the resulting eigenvalues are assessed against those from other codes. For the UOX and MOX assemblies, the differences from the MCNP-4B results are about 10 pcm. However, there is some inaccuracy in the baffle-reflector condition, and relatively large differences were found in the MOX-reflector assembly and in the core calculation. Although HELIOS 1.9 utilizes an inflow transport correction, it appears to have only a limited effect on the error in the baffle-reflector condition.
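Eigenvalue differences of the kind quoted here are conventionally expressed in pcm (10^-5 in reactivity). A minimal sketch, with hypothetical eigenvalues chosen to reproduce a difference of about 10 pcm:

```python
def pcm_diff(k: float, k_ref: float) -> float:
    """Reactivity difference between two eigenvalues in pcm,
    using the usual (1/k_ref - 1/k) * 1e5 convention."""
    return (1.0 / k_ref - 1.0 / k) * 1e5

# Hypothetical assembly eigenvalues: lattice code vs. reference Monte Carlo.
print(round(pcm_diff(1.09012, 1.09000), 1))  # -> 10.1
```

A positive value means the tested code is more reactive than the reference.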
Evaluation of Saxton critical experiments
Energy Technology Data Exchange (ETDEWEB)
Joo, Hyung Kook; Noh, Jae Man; Jung, Hyung Guk; Kim, Young Il; Kim, Young Jin [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1997-12-31
As a part of the International Criticality Safety Benchmark Evaluation Project (ICSBEP), the SAXTON critical experiments were reevaluated. The effects on k{sub eff} of uncertainties in experiment parameters (fuel rod characterization, soluble boron, critical water level, core structure, {sup 241}Am and {sup 241}Pu isotope number densities, random pitch error, duplicated experiment, axial fuel position, model simplification, etc.) were evaluated and incorporated into the benchmark-model k{sub eff}. In addition to the detailed model, a simplified model of the Saxton critical experiments was constructed by omitting the top, middle, and bottom grids and ignoring the fuel above the water level. 6 refs., 1 fig., 3 tabs. (Author)
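Combining independent experimental uncertainty contributions into an overall benchmark-model k-eff uncertainty is typically done in quadrature. A sketch with invented 1-sigma components (the ICSBEP evaluation itself tabulates many more, with its own values):

```python
import math

# Hypothetical 1-sigma k-eff uncertainty components (in units of Delta-k),
# illustrating how independent experimental uncertainties are combined.
components = {
    "fuel rod characterization": 0.0012,
    "soluble boron":             0.0008,
    "critical water level":      0.0005,
    "random pitch error":        0.0010,
}

total = math.sqrt(sum(u * u for u in components.values()))
print(f"combined 1-sigma uncertainty: {total:.5f}")  # -> 0.00182
```

Correlated components would instead require the full covariance between contributions.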
IRPhEP-handbook, International Handbook of Evaluated Reactor Physics Benchmark Experiments
International Nuclear Information System (INIS)
1 - Description: The purpose of the International Reactor Physics Experiment Evaluation Project (IRPhEP) is to provide an extensively peer-reviewed set of reactor physics-related integral data that can be used by reactor designers and safety analysts to validate the analytical tools used to design next-generation reactors and establish the safety basis for operation of these reactors. This work of the IRPhEP is formally documented in the 'International Handbook of Evaluated Reactor Physics Benchmark Experiments,' a single source of verified and extensively peer-reviewed reactor physics benchmark measurements data. The IRPhE Handbook is available on DVD. You may request a DVD by completing the DVD Request Form available at: http://irphep.inl.gov/handbook/hbrequest.shtml The evaluation process entails the following steps: 1. Identify a comprehensive set of reactor physics experimental measurements data, 2. Evaluate the data and quantify overall uncertainties through various types of sensitivity analysis to the extent possible, verify the data by reviewing original and subsequently revised documentation, and by talking with the experimenters or individuals who are familiar with the experimental facility, 3. Compile the data into a standardized format, 4. Perform calculations of each experiment with standard reactor physics codes where it would add information, 5. Formally document the work into a single source of verified and peer reviewed reactor physics benchmark measurements data. The International Handbook of Evaluated Reactor Physics Benchmark Experiments contains reactor physics benchmark specifications that have been derived from experiments that were performed at various nuclear experimental facilities around the world. The benchmark specifications are intended for use by reactor physics personnel to validate calculational techniques. The 2008 Edition of the International Handbook of Evaluated Reactor Physics Experiments contains data from 25 different
On international criticality codes for fuel pellets in fissile solution
International Nuclear Information System (INIS)
The reference calculations, based on the APOLLO-Pic method implemented in the framework of this study, demonstrated that the actual reactivity variation (benchmark no. 20) is a monotonic decrease with pellet dissolution. In contrast, in the contributors' results based on the international criticality code SCALE, the reactivity loss with dissolution is weak. The discrepancy is mainly due to 238U resonant absorption, which can induce, in this fuel double-heterogeneity problem no. 20, as much as a -30 000 pcm underestimation of K∞. It was pointed out that design-oriented transport codes must be improved with accurate deterministic formalisms: the PIC equivalence method, subgroup theory (WIMSE), and ultrafine slowing-down calculation (ROLAIDS). Ultimate confirmation of the reference results presented in this paper should be provided by a set of critical experiments that mock up hypothetical dissolver geometries. Finally, it should be noted that, thanks to the interest and efforts of the OECD/NEA Criticality Working Group in performing the international benchmark exercise and in pursuing the explanation of the discrepancies, a potentially dangerous inadequacy in criticality calculation methods was exposed and resolved
Performance Targets and External Benchmarking
DEFF Research Database (Denmark)
Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.
Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rate systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking ... as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards ... the 'inside' costs of the sub-component, technical specifications of the product, opportunistic behavior from the suppliers and cognitive limitation. These are all aspects that can easily dismantle the market mechanism and make it counter-productive in the organization. Thus, by directing more attention ...
The European Union benchmarking experience. From euphoria to fatigue?
Directory of Open Access Journals (Sweden)
Michael Zängle
2004-06-01
Full Text Available Even if one may agree with the possible criticism of the Lisbon process as being too vague in commitment or as lacking appropriate statistical techniques and indicators, the benchmarking system provided by EUROSTAT seems to be sufficiently effective in warning against imminent failure. The Lisbon objectives are very demanding. This holds true even if each of the objectives is looked at in isolation. But 'Lisbon' is more demanding than that, requiring a combination of several objectives to be achieved simultaneously (GDP growth, labour productivity, job-content of growth, higher quality of jobs and greater social cohesion). Even for countries like Ireland, showing exceptionally high performance in GDP growth and employment promotion during the period under investigation, achieving potentially conflicting objectives simultaneously seems to be beyond feasibility. The European Union benchmarking exercise is embedded in the context of the Open Method(s) of Co-ordination (OMC). This context makes the benchmarking approach part and parcel of an overarching philosophy, which relates the benchmarking indicators to each other and assigns to them their role in corroborating the increasingly dominant project of 'embedded neo-liberalism'. Against this background, the present paper is focussed on the following point. With the EU benchmarking system being effective enough to make the imminent under-achievement visible, there is a danger of disillusionment and 'benchmarking fatigue', which may provoke an ideological crisis. The dominant project being so deeply rooted, however, chances are high that this crisis will be solved immanently in terms of embedded neo-liberalism by strengthening the neo-liberal branch of the European project. Confining itself to the Europe of Fifteen, the analysis draws on EUROSTAT's database of Structural Indicators. ...
Perceptual hashing algorithms benchmark suite
Institute of Scientific and Technical Information of China (English)
Zhang Hui; Schmucker Martin; Niu Xiamu
2007-01-01
Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.
Energy Technology Data Exchange (ETDEWEB)
Suter, G.W. II [Oak Ridge National Lab., TN (United States); Mabrey, J.B. [University of West Florida, Pensacola, FL (United States)
1994-07-01
This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.
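The screening logic described above (compare an ambient concentration against every alternative benchmark and record which are exceeded) can be sketched as follows; the benchmark values are invented placeholders, not numbers from the report:

```python
# Hypothetical benchmark values (ug/L) for one chemical, and an ambient sample.
benchmarks = {
    "acute NAWQC":        120.0,
    "chronic NAWQC":       14.0,
    "SCV":                 11.0,
    "fish chronic value":  25.0,
}
ambient = 18.0

# Benchmarks exceeded by the ambient concentration; the count and the
# conservatism of the exceeded benchmarks drive the selection of
# contaminants of concern.
exceeded = [name for name, value in benchmarks.items() if ambient > value]
print(exceeded)  # -> ['chronic NAWQC', 'SCV']
```

An exceedance of a NAWQC would make the chemical a contaminant of concern outright; exceedances of the other benchmarks feed the weight-of-evidence judgment.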
Simple mathematical law benchmarks human confrontations
Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto
2013-12-01
Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains, which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and, more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
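The simple law in this line of work is commonly written as a power-law "progress curve", in which the interval before the n-th attack scales as tau_n = tau_1 * n^(-b). A sketch that recovers the escalation exponent b from synthetic, noise-free data by a log-log least-squares fit (tau_1 and b below are arbitrary choices, not fitted values from the paper):

```python
import math

# Synthetic progress-curve data: tau_n = tau_1 * n**(-b).
tau_1, b = 50.0, 0.7
intervals = [tau_1 * n ** (-b) for n in range(1, 21)]

# Ordinary least-squares fit of log(tau_n) against log(n); the slope
# of the fitted line is -b.
xs = [math.log(n) for n in range(1, 21)]
ys = [math.log(t) for t in intervals]
m = len(xs)
mx, my = sum(xs) / m, sum(ys) / m
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(round(-slope, 3))  # estimated escalation exponent b -> 0.7
```

With real event data the intervals are noisy, so the fitted b carries an uncertainty that this sketch omits.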
International Nuclear Information System (INIS)
A good model based on experimental data (criticality, control rod worth, and fuel element worth distributions) from the Musashi-TRIGA Mark 2 reactor is encouraged. In the previous paper, keff values for different fuel loading patterns, ranging from the minimum core to the full one, were provided; those data would be a candidate for an ICSBEP evaluation. The evaluations of control rod worth and fuel element worth distributions presented in this paper could serve as excellent benchmark data for validating calculation techniques used in the field of modern research reactors. As a result of the simulation of the TRIGA-2 benchmark experiment, performed with a three-dimensional continuous-energy Monte Carlo code (MCNP4A), it was found that the MCNP-calculated values of control rod worth were consistent with the experimental data for both the rod-drop and period methods. For the fuel and graphite element worth distributions, the MCNP-calculated values agreed well with the measured ones, though the real control rod positions had to be taken into account when calculating the reactivity of fuel elements positioned in the inner ring. (G.K.)
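For the period method mentioned above, reactivity is inferred from the measured asymptotic period through the inhour equation. A sketch, assuming typical six-group delayed-neutron constants for thermal fission of U-235 and an assumed generation time (neither is taken from the Musashi experiment):

```python
# Reactivity from a measured asymptotic period T via the inhour equation,
# rho = Lambda/T + sum_i beta_i / (1 + lambda_i * T).
# The six-group delayed-neutron data below are typical thermal-fission
# U-235 values (an assumption, not data from the experiment).
betas  = [0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273]
lams   = [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01]   # decay constants [1/s]
LAMBDA = 4.0e-5  # prompt-neutron generation time [s], assumed

def reactivity_from_period(T: float) -> float:
    """Reactivity (Delta-k/k) corresponding to stable period T [s]."""
    return LAMBDA / T + sum(b / (1.0 + l * T) for b, l in zip(betas, lams))

rho = reactivity_from_period(60.0)  # e.g. a measured 60 s stable period
print(f"{rho:.5f}")  # reactivity in Delta-k/k
```

Dividing rho by the total delayed-neutron fraction (about 0.0065 here) gives the worth in dollars.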
How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction
Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.
2015-03-01
The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment, but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can best be used to evaluate their probabilistic forecasts. In this study, it is identified that the calculated forecast skill can vary depending on the benchmark selected, and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy, the benchmark found to have most utility for EFAS, avoiding the most naïve skill across different hydrological situations, is meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long-term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system, and their use produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all
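Benchmark-relative skill is commonly quantified with the continuous ranked probability skill score, CRPSS = 1 - CRPS_forecast / CRPS_benchmark. The sketch below (illustrative numbers only) shows why a tough benchmark such as meteorological persistency yields a more honest skill estimate than an easily beaten long-term average:

```python
def crps_skill(crps_forecast: float, crps_benchmark: float) -> float:
    """Continuous ranked probability skill score: 1 is a perfect
    forecast, 0 is no improvement over the benchmark, < 0 is worse."""
    return 1.0 - crps_forecast / crps_benchmark

# The same forecast CRPS scored against two benchmarks: a tougher
# benchmark (lower CRPS) shrinks the apparent skill.
print(round(crps_skill(0.8, 1.0), 3))  # vs. long-term average  -> 0.2
print(round(crps_skill(0.8, 0.9), 3))  # vs. persistency        -> 0.111
```

The "naïve skill" discussed in the abstract is exactly the inflated score obtained against the weaker benchmark.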
Energy Technology Data Exchange (ETDEWEB)
Bowman, S.M.
1993-01-01
The requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor (AFR) criticality safety analyses be validated against experimental measurements. If credit for the negative reactivity of the depleted (or spent) fuel isotopics is desired, it is necessary to benchmark computational methods against spent fuel critical configurations. This report summarizes a portion of the ongoing effort to benchmark AFR criticality analysis methods using selected critical configurations from commercial pressurized-water reactors (PWR). The analysis methodology selected for all calculations reported herein was the codes and data provided in the SCALE-4 code system. The isotopic densities for the spent fuel assemblies in the critical configurations were calculated using the SAS2H analytical sequence of the SCALE-4 system. The sources of data and the procedures for deriving SAS2H input parameters are described in detail. The SNIKR code module was used to extract the necessary isotopic densities from the SAS2H results and to provide the data in the format required by the SCALE criticality analysis modules. The CSASN analytical sequence in SCALE-4 was used to perform resonance processing of the cross sections. The KENO V.a module of SCALE-4 was used to calculate the effective multiplication factor (k{sub eff}) of each case. The SCALE-4 27-group burnup library containing ENDF/B-IV (actinides) and ENDF/B-V (fission products) data was used for all the calculations. This volume of the report documents the SCALE system analysis of one reactor critical configuration for North Anna Unit 1 Cycle 5. This unit and cycle were chosen for a previous analysis using a different methodology because detailed isotopics from multidimensional reactor calculations were available from the Virginia Power Company. These data permitted comparison of criticality calculations directly using the utility-calculated isotopics to those using the isotopics generated by the SCALE-4 SAS2H
Nominal GDP: Target or Benchmark?
Hetzel, Robert L.
2015-01-01
Some observers have argued that the Federal Reserve would best fulfill its mandate by adopting a target for nominal gross domestic product (GDP). Insights from the monetarist tradition suggest that nominal GDP targeting could be destabilizing. However, adopting benchmarks for both nominal and real GDP could offer useful information about when monetary policy is too tight or too loose.
PRISMATIC CORE COUPLED TRANSIENT BENCHMARK
Energy Technology Data Exchange (ETDEWEB)
J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov
2011-06-01
The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated around the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art of LWR technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal-hydraulics analysis, with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.
Benchmarking biodiversity performances of farmers
Snoo, de G.R.; Lokhorst, A.M.; Dijk, van J.; Staats, H.; Musters, C.J.M.
2010-01-01
Farmers are the key players when it comes to the enhancement of farmland biodiversity. In this study, a benchmark system that focuses on improving farmers’ nature conservation was developed and tested among Dutch arable farmers in different social settings. The results show that especially tailored
Benchmarked Library Websites Comparative Study
Ramli, Rindra M.
2015-01-01
This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criterion and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.
Benchmarking Universiteitsvastgoed: Managementinformatie bij vastgoedbeslissingen
Den Heijer, A.C.; De Vries, J.C.
2004-01-01
This is the final report of the study "Benchmarking universiteitsvastgoed" (benchmarking university real estate). The report merges two partial products: the theory report (published in December 2003) and the practice report (published in January 2004). Topics in the theoretical part include the analysis of other
Recent calculational work at Department of Reactor Physics CTH
International Nuclear Information System (INIS)
Short presentations are given on various ANISN calculations, on studies of the Xe reactivity effect, on some Monte Carlo calculations, and on benchmark calculations of eigenvalues to the Boltzmann transport equation. (author)
Energy Technology Data Exchange (ETDEWEB)
Sogn, T.A.; Stuanes, A.O.; Abrahamsen, G.
1996-01-01
The conference paper deals with the accumulation of nitrogen in forests in Norway. The accumulation rate is a critical factor in the calculation of load limits. The paper compares the average rates of accumulation since the last glacial age with values calculated for the shorter recent period, based on data from the monitoring programmes of the State Pollution Control Authority, fertilization experiments, and other relevant research programmes in this field. 8 refs., 1 fig., 1 tab.
2010-10-01
42 CFR § 440.385 (2010, Public Health; General Provisions, Benchmark Benefit and Benchmark-Equivalent Coverage): Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or ...
Calculation of benchmarks with a shear beam model
Ferreira, D.
2015-01-01
Fiber models for beam and shell elements allow for relatively rapid finite element analysis of concrete structures and structural elements. This project aims at the development of the formulation of such elements and a pilot implementation. Standard nonlinear fiber beam formulations do not account
Critical experiments analyses by using 70 energy group library based on ENDF/B-VI
Energy Technology Data Exchange (ETDEWEB)
Tahara, Yoshihisa; Matsumoto, Hideki [Mitsubishi Heavy Industries Ltd., Yokohama (Japan). Nuclear Energy Systems Engineering Center; Huria, H.C.; Ouisloumen, M.
1998-03-01
The newly developed 70-group library has been validated by comparing k{sub inf} values from the continuous-energy Monte Carlo code MCNP and the two-dimensional spectrum calculation code PHOENIX-CP, which employs a discrete angular flux method based on collision probabilities. The library has also been validated against a large number of critical experiments and numerical benchmarks for assemblies with MOX and Gd fuels. (author)
Data Assimilation of Benchmark Experiments for Homogenous Thermal / Epithermal Uranium Systems
International Nuclear Information System (INIS)
This presentation reports on the data assimilation of benchmark experiments for homogeneous thermal and epithermal uranium systems. The assimilation method is based on Kalman filters using integral parameters and sensitivity coefficients calculated with MONK9 and ENDF/B-VII data. The assimilation process results in an overall improvement of the calculation-benchmark agreement, and may help in the selection of nuclear data after analysis of adjustment trends
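A single Kalman-filter update of the kind used in such adjustments can be sketched as follows. The sensitivity vector, covariances, prior values and measurement are invented for illustration; this is not the MONK9/ENDF/B-VII assimilation itself:

```python
# One-measurement Kalman/GLS adjustment of two nuclear-data parameters:
#   x' = x + K (y - H x),  K = P H^T (H P H^T + R)^-1
# where H holds the sensitivity coefficients of the integral parameter.
x = [1.00, 1.00]              # prior parameters (e.g. normalized cross sections)
P = [[0.04, 0.00],
     [0.00, 0.09]]            # prior covariance of the parameters
H = [0.6, 0.4]                # sensitivities of the benchmark to each parameter
R = 0.0001                    # benchmark (measurement) variance
y = 1.02                      # measured integral parameter

Hx = sum(h * xi for h, xi in zip(H, x))                       # prior prediction
PHt = [sum(P[i][j] * H[j] for j in range(2)) for i in range(2)]
S = sum(H[i] * PHt[i] for i in range(2)) + R                  # innovation variance
K = [phi / S for phi in PHt]                                  # Kalman gain
x_post = [xi + k * (y - Hx) for xi, k in zip(x, K)]           # adjusted parameters
print([round(v, 4) for v in x_post])  # -> [1.0166, 1.0249]
```

After the update the predicted integral parameter moves from 1.00 to within about 0.0001 of the measurement, which is the "overall improvement of the calculation-benchmark agreement" the abstract describes.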
Thermal-Hydraulic Analysis of OECD Benchmark Problem for PBMR 400 Using MARS-GCR
Energy Technology Data Exchange (ETDEWEB)
Lee, Seung Wook; Jeong, Jae Jun; Lee, Won Jae [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
2006-07-01
The OECD benchmark problem for the PBMR 400 aims not only to test the existing methods for HTGRs but also to develop more accurate and efficient tools for analysing the neutronics and thermal-hydraulic behaviour in the design and safety evaluations of the PBMR. In addition, it includes defining appropriate benchmarks to verify and validate the new methods in computer codes. The benchmark procedure is divided into two parts: 1) phase I, which includes the stand-alone steady-state calculations (neutronics and thermal-hydraulics) and the coupled steady-state calculation, and 2) phase II, which includes various transient calculations. So far, stand-alone calculations for neutronics and thermal-hydraulics have been performed with given cross-section and power-density data, respectively. This paper presents the stand-alone thermal-hydraulic calculation results of MARS-GCR with a given power density. Although a preliminary steady-state calculation coupled with MASTER was also performed, those results will be released later.
Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners
Directory of Open Access Journals (Sweden)
Luštický Martin
2012-03-01
Full Text Available Tourism development has an irreplaceable role in the regional policy of almost all countries, owing to its undeniable benefits for the local population in the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and consequently enter a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and of the final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process: the search for suitable referencing partners. The partners are selected to meet general requirements that ensure the quality of strategies; following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies from regions in the Czech Republic, Slovakia and Great Britain, thereby validating the selected criteria in an international environment. This makes it possible to identify the strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.
New Improved Nuclear Data for Nuclear Criticality and Safety
International Nuclear Information System (INIS)
The Geel Electron Linear Accelerator (GELINA) was used to measure the neutron total and capture cross sections of 182,183,184,186W and 63,65Cu in the energy range from 100 eV to ∼200 keV using the time-of-flight method. GELINA is the only high-power white neutron source with excellent timing resolution and is ideally suited for these experiments. The measurements were initiated at the EC-JRC-IRMM in support of the Nuclear Criticality Safety Program. The prime motivation was concern about deficiencies in existing cross-section evaluations from libraries such as ENDF/B, JEFF, or JENDL when used in nuclear criticality calculations with Monte Carlo codes and benchmarks. Over the past years many problems with existing nuclear data have emerged, such as improper normalization, neutron-sensitivity backgrounds, poorly characterized samples, and the use of improper pulse-height weighting functions. These deficiencies may occur in the resolved- and unresolved-resonance regions and may lead to erroneous nuclear criticality calculations. An example is the evaluated neutron cross-section data for tungsten used in nuclear criticality safety calculations, which exhibit discrepancies in benchmark calculations and show the need for reliable covariance data. The new measurements will help to improve the representation of these cross sections, since most of the available evaluated data rely on old measurements, usually performed with poor experimental resolution or over a very limited energy range, which is insufficient for the current application.
The LDBC Social Network Benchmark: Interactive Workload
Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.
2015-01-01
The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks, and benchmarking practices, for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developing…
BFS, a Legacy to the International Reactor Physics, Criticality Safety, and Nuclear Data Communities
Energy Technology Data Exchange (ETDEWEB)
J. Blair Briggs; Anatoly Tsibulya; Yevgeniy Rozhikhin
2012-03-01
Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next-generation reactor and advanced fuel cycle concepts. Two Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA) activities, the International Criticality Safety Benchmark Evaluation Project (ICSBEP), initiated in 1992, and the International Reactor Physics Experiment Evaluation Project (IRPhEP), initiated in 2003, have been identifying existing integral experiment data, evaluating those data, and providing integral benchmark specifications for methods and data validation for nearly two decades. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. Data provided by these two projects will be of use to the international reactor physics, criticality safety, and nuclear data communities for decades to come. The Russian Federation has been a major contributor to both projects, with the Institute of Physics and Power Engineering (IPPE) as the major contributor from the Russian Federation. Included in the benchmark specifications from the BFS facilities are 34 critical configurations from BFS-49, 61, 62, 73, 79, 81, 97, 99, and 101; spectral characteristics measurements from BFS-31, 42, 57, 59, 61, 62, 73, 97, 99, and 101; reactivity effects measurements from BFS-62-3A; reactivity coefficients and kinetics measurements from BFS-73; and reaction rate measurements from BFS-42, 61, 62, 73, 97, 99, and 101.
International Nuclear Information System (INIS)
Reviewed is the effect of different system parameters on critical heat flux density, in order to give an initial view of the value of several parameters. A thorough analysis of different equations for calculating burnout in steam-water flows in uniformly heated tubes, annular and rectangular channels, and rod bundles is carried out. The effects of heat flux density distribution and flux twisting on burnout, and the determination of the margin to burnout, are recommended for consideration
Development of solutions to benchmark piping problems
Energy Technology Data Exchange (ETDEWEB)
Reich, M; Chang, T Y; Prachuktam, S; Hartzman, M
1977-12-01
Benchmark problems and their solutions are presented. The problems consist in calculating the static and dynamic response of selected piping structures subjected to a variety of loading conditions. The structures range from simple pipe geometries to a representative full scale primary nuclear piping system, which includes the various components and their supports. These structures are assumed to behave in a linear elastic fashion only, i.e., they experience small deformations and small displacements with no existing gaps, and remain elastic through their entire response. The solutions were obtained by using the program EPIPE, which is a modification of the widely available program SAP IV. A brief outline of the theoretical background of this program and its verification is also included.
Energy Technology Data Exchange (ETDEWEB)
Vargas E, S.; Esquivel E, J.; Ramirez S, J. R., E-mail: samuel.vargas@inin.gob.mx [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)
2013-10-15
The purpose of the burnup credit concept is to determine the capability of calculation codes, and of the associated nuclear data, to predict the isotopic composition and the corresponding effective neutron multiplication factor in a generic spent-fuel container over relevant storage times. The objective of the present work is to determine this capability of the MCNP calculation code in predicting the effective neutron multiplication factor for a PWR-type fuel assembly arrangement inside a generic storage container. The calculations are divided into two parts: first, decay calculations with nuclide concentrations specified by the reference for a pressurized water reactor (PWR) with fuel enriched to 4.5% and a discharge burnup of 50 GWd/MTU; second, criticality calculations with time-dependent isotopic compositions for actinides and important fission products, taking 30 time steps, for two groups of actinides and fission products. (Author)
Methodology for Benchmarking IPsec Gateways
Directory of Open Access Journals (Sweden)
Adam Tisovský
2012-08-01
Full Text Available The paper analyses the forwarding performance of an IPsec gateway over a range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway's performance peak and in the state of gateway overload. It explains the possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters: the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput may be the most universal parameter for benchmarking security gateways, as the others can depend on the duration of the test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of equilibrium throughput.
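The hybrid step/binary search the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' actual algorithm: the driver function `measure` (which offers a given load to the gateway and returns the measured forwarding rate) and all parameter values are assumptions for the example.

```python
def find_equilibrium_throughput(measure, lo=0.0, hi=1000.0,
                                step=100.0, tol=1.0):
    """Hybrid step/binary search for an equilibrium throughput (sketch).

    `measure(load)` is a hypothetical test driver: it offers `load`
    (e.g. in Mbit/s) to the gateway and returns the forwarding rate.
    """
    # Step phase: coarsely raise the offered load until the measured
    # forwarding rate no longer keeps up with the offered load,
    # which brackets the gateway's performance peak.
    load = lo
    while load + step <= hi and measure(load + step) >= load + step - tol:
        load += step
    lo, hi = load, min(load + step, hi)

    # Binary-search phase: narrow the bracket around the point where
    # the forwarding rate stops tracking the offered load.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if measure(mid) >= mid - tol:
            lo = mid   # gateway still forwards everything offered
        else:
            hi = mid   # gateway overloaded; equilibrium lies below mid
    return lo
```

The coarse steps keep the number of long test trials small, while the binary search refines the result to the chosen tolerance, which is consistent with the paper's claim that this approach shortens benchmarking time.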
Nupec BWR full-size fine-mesh bundle test (BFBT) benchmark
International Nuclear Information System (INIS)
Refined models for best-estimate calculations based on good-quality experimental data can improve the understanding of phenomena and the quantification of margins for operating nuclear power reactors. According to experts, refinements should not be limited to currently available macroscopic approaches but should be extended to next-generation approaches that focus on more microscopic processes. Multi-scale/multi-physics approaches are the way forward in this respect. This report describes the specification of an international benchmark based on high-quality fine mesh data, released through the government of Japan and the Nuclear Power Engineering Corporation (NUPEC), with the aim of advancing the insufficiently developed field of two-phase flow theory. It has been designed for systematically assessing and comparing different numerical models used for predicting detailed void distributions and critical powers. Additional volumes concerning this benchmark are planned and are intended to show to what extent the most recent approaches are capable of predicting two-phase flow phenomena. (author)
Restaurant Energy Use Benchmarking Guideline
Energy Technology Data Exchange (ETDEWEB)
Hedrick, R.; Smith, V.; Field, K.
2011-07-01
A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
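A portfolio-level metric of the kind the report describes can be illustrated with a short sketch. This is a hypothetical example, not the report's actual method: the store data, the use of floor area as the normalizer, and the outlier threshold are all assumptions.

```python
from statistics import mean, stdev

def flag_outlier_stores(annual_kwh, floor_area_sqft, threshold=1.0):
    """Illustrative sketch: derive an energy use intensity (EUI,
    kWh per square foot) for each store from utility data, then flag
    stores whose EUI deviates from the portfolio mean by more than
    `threshold` standard deviations."""
    eui = {s: annual_kwh[s] / floor_area_sqft[s] for s in annual_kwh}
    mu, sigma = mean(eui.values()), stdev(eui.values())
    return {s: v for s, v in eui.items() if abs(v - mu) > threshold * sigma}
```

Normalizing by floor area (or by sales or operating hours) is what makes stores of different sizes comparable; the flagged stores are the ones a multiunit operator would examine first.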
TRIGA Mark II benchmark experiment
International Nuclear Information System (INIS)
Experimental results of pulse parameter and control rod worth measurements at the TRIGA Mark II reactor in Ljubljana are presented. The measurements were performed with a completely fresh, uniform, and compact core. Only standard fuel elements with 12 wt% uranium were used. Special efforts were made to obtain reliable and accurate results under well-defined experimental conditions, and it is proposed to use the results as a benchmark test case for TRIGA reactors
Thermal Performance Benchmarking: Annual Report
Energy Technology Data Exchange (ETDEWEB)
Moreno, Gilbert
2016-04-08
The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.
SCALE-4 analysis of pressurized water reactor critical configurations. Volume 1: Summary
International Nuclear Information System (INIS)
The requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. If credit is to be taken for the reduced reactivity of burned or spent fuel relative to its original fresh composition, it is necessary to benchmark computational methods used in determining such reactivity worth against spent fuel reactivity measurements. This report summarizes a portion of the ongoing effort to benchmark away-from-reactor criticality analysis methods using critical configurations from commercial pressurized water reactors (PWR). The analysis methodology utilized for all calculations in this report is based on the modules and data associated with the SCALE-4 code system. Each of the five volumes comprising this report provides an overview of the methodology applied. Subsequent volumes also describe in detail the approach taken in performing criticality calculations for these PWR configurations: Volume 2 describes criticality calculations for the Tennessee Valley Authority's Sequoyah Unit 2 reactor for Cycle 3; Volume 3 documents the analysis of Virginia Power's Surry Unit 1 reactor for the Cycle 2 core; Volume 4 documents the calculations performed based on GPU Nuclear Corporation's Three Mile Island Unit 1 Cycle 5 core; and, lastly, Volume 5 describes the analysis of Virginia Power's North Anna Unit 1 Cycle 5 core. Each of the reactor-specific volumes provides the details of calculations performed to determine the effective multiplication factor for each reactor core for one or more critical configurations using the SCALE-4 system; these results are summarized in this volume. Differences between the core designs and their possible impact on the criticality calculations are also discussed. Finally, results are presented for additional analyses performed to verify that solutions were sufficiently converged
Emhjellen, Kjetil
1997-01-01
Since the first publication on benchmarking in 1989 by Robert C. Camp, "Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance", the improvement technique of benchmarking has been established as an important tool in the process-focused manufacturing or production environment. The use of benchmarking has since expanded to other types of industry. Benchmarking has now passed the doorstep and is in early trials in the project and construction environment....
HS06 Benchmark for an ARM Server
Kluth, Stefan
2014-06-01
We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.
KENO V.a Primer: A Primer for Criticality Calculations with SCALE/KENO V.a Using CSPAN for Input
Energy Technology Data Exchange (ETDEWEB)
Busch, R.D.
2003-01-17
The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer software system developed at Oak Ridge National Laboratory (ORNL) is widely used and accepted around the world for criticality safety analyses. The well-known KENO V.a three-dimensional Monte Carlo criticality computer code is the primary criticality safety analysis tool in SCALE. The KENO V.a primer is designed to help a new user understand and use the SCALE/KENO V.a Monte Carlo code for nuclear criticality safety analyses. It assumes that the user has a college education in a technical field. There is no assumption of familiarity with Monte Carlo codes in general or with SCALE/KENO V.a in particular. The primer is designed to teach by example, with each example illustrating two or three features of SCALE/KENO V.a that are useful in criticality analyses. The primer is based on SCALE 4.4a, which includes the Criticality Safety Processor for Analysis (CSPAN) input processor for Windows personal computers (PCs). A second edition of the primer, which uses the new KENO Visual Editor, is currently under development at ORNL and is planned for publication in late 2003. Each example in this first edition of the primer uses CSPAN to provide the framework for data input. Starting with a Quickstart section, the primer gives an overview of the basic requirements for SCALE/KENO V.a input and allows the user to quickly run a simple criticality problem with SCALE/KENO V.a. The sections that follow Quickstart include a list of basic objectives at the beginning that identifies the goal of the section and the individual SCALE/KENO V.a features which are covered in detail in the example problems in that section. Upon completion of the primer, a new user should be comfortable using CSPAN to set up criticality problems in SCALE/KENO V.a.
Argonne Code Center: Benchmark problem book.
Energy Technology Data Exchange (ETDEWEB)
None, None
1977-06-01
This book is an outgrowth of the activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. It is the second supplement to the original benchmark book, which was first published in February 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, published in December 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing those objectives are outlined in the original edition of the benchmark book (ANL-7416, February 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.
A proposed benchmark problem for cargo nuclear threat monitoring
Wesley Holmes, Thomas; Calderon, Adan; Peeples, Cody R.; Gardner, Robin P.
2011-10-01
There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring, with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991, [1]). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. The benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions in a system containing three thicknesses of three different shielding materials. A point source is placed inside three materials: lead, aluminum, and plywood. The first two materials are in right-circular-cylindrical form while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration is arranged such that a gamma ray moving outward from the source first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in.×4 in.×16 in. box-style NaI(Tl) detector was placed 1 m from the point source located at the center, with the 4 in.×16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.