Comparison between HELIOS calculations and a PWR cell benchmark for actinides transmutation
Energy Technology Data Exchange (ETDEWEB)
Guzman, Rafael [Facultad de Ingenieria, Universidad Nacional Autonoma de Mexico, Paseo Cuauhnahuac 8532, 62550 Jiutepec, Mor. (Mexico); Francois, Juan-Luis [Facultad de Ingenieria, Universidad Nacional Autonoma de Mexico, Paseo Cuauhnahuac 8532, 62550 Jiutepec, Mor. (Mexico)]. E-mail: jlfl@fi-b.unam.mx
2007-01-15
This paper presents a comparison between the results obtained with the HELIOS code and those of similar codes used in the international community, with respect to the transmutation of actinides. To this end, the international benchmark 'Calculations of Different Transmutation Concepts' of the Nuclear Energy Agency is analyzed. In this benchmark, two types of cells are analyzed: a small cell corresponding to a standard pressurized water reactor (PWR), and a wide cell corresponding to a highly moderated PWR. Two discharge burnups are considered: 33 GWd/tHM and 50 GWd/tHM. The following results are analyzed: the neutron multiplication factor as a function of burnup, the atomic densities of the principal actinide isotopes, the radioactivity of selected actinides at reactor shutdown and for cooling times from 7 to 50,000 years, the void reactivity, and the Doppler reactivity. The results are compared with the following codes: KAPROS/KARBUS (FZK, Germany), SRAC95 (JAERI, Japan), TRIFON (ITTEP, Russian Federation) and WIMS (IPPE, Russian Federation). For the neutron multiplication factor, the results obtained with HELIOS show a difference of around 1% Δk/k. For the isotopic concentrations of ²⁴¹Pu, ²⁴²Pu, and ²⁴²ᵐAm, the results of all the institutions show a difference that increases with burnup; in the case of ²³⁷Np, the FZK results diverge from the others as the burnup increases. Regarding the activity, the differences between the results are acceptable, except for ²⁴¹Pu. For the Doppler coefficient, the results are acceptable, except for the cells with high moderation. In the case of the void coefficient, the spread of the results increases at higher void fractions, being highest at 95%. In summary, for the PWR benchmark, the results obtained with HELIOS agree reasonably well within the limits of the multiple plutonium recycling established by the NEA working party on plutonium fuels and ...
International Nuclear Information System (INIS)
Key, S.W.
1985-01-01
The results of two calculations related to the impact response of spent nuclear fuel shipping casks are compared to the benchmark results reported in a recent study by the Japan Society of Mechanical Engineers Subcommittee on Structural Analysis of Nuclear Shipping Casks. Two idealized impacts are considered. The first calculation utilizes a right circular cylinder of lead subjected to a 9.0 m free fall onto a rigid target, while the second calculation utilizes a stainless steel clad cylinder of lead subjected to the same impact conditions. For the first problem, four calculations from graphical results presented in the original study have been singled out for comparison with HONDO III. The results from DYNA3D, STEALTH, PISCES, and ABAQUS are reproduced. In the second problem, the results from four separate computer programs in the original study, ABAQUS, ANSYS, MARC, and PISCES, are used and compared with HONDO III. The current version of HONDO III contains a fully automated implementation of the explicit-explicit partitioning procedure for the central difference method time integration which results in a reduction of computational effort by a factor in excess of 5. The results reported here further support the conclusion of the original study that the explicit time integration schemes with automated time incrementation are effective and efficient techniques for computing the transient dynamic response of nuclear fuel shipping casks subject to impact loading. (orig.)
Benchmark Comparison for a Multi-Processing Ion Mobility Calculator in the Free Molecular Regime
Shrivastav, Vaibhav; Nahin, Minal; Hogan, Christopher J.; Larriba-Andaluz, Carlos
2017-08-01
A benchmark comparison between two ion mobility and collision cross-section (CCS) calculators, MOBCAL and IMoS, is presented here as a standard to test the efficiency and performance of both programs. For 47 organic ions, results are in excellent agreement between IMoS and MOBCAL in He and N2 when both programs use identical input parameters. Owing to a more efficiently written algorithm and to its parallelization, IMoS is able to calculate the same CCS (within 1%) around two orders of magnitude faster than its MOBCAL counterpart when seven cores are used. Due to the high computational cost of MOBCAL in N2, reaching tens of thousands of seconds even for small ions, the comparison between IMoS and MOBCAL is stopped at 70 atoms. Large biomolecules (>10000 atoms) remain computationally expensive when IMoS is used in N2 (even when employing 16 cores). Approximations such as diffuse trajectory methods (DHSS, TDHSS), with and without partial charges, and projected-area approximation corrections can be used to reduce the total computational time severalfold without hurting the accuracy of the solution. These latter methods can in principle be used with coarse-grained model structures and should yield acceptable CCS results.
Calculational benchmark comparisons for a low sodium void worth actinide burner core design
International Nuclear Information System (INIS)
Hill, R.N.; Kawashima, M.; Arie, K.; Suzuki, M.
1992-01-01
Recently, a number of low void worth core designs with non-conventional core geometries have been proposed. Since these designs lack a good experimental and computational database, benchmark calculations are useful for identifying possible biases in predicted performance characteristics. In this paper, a simplified benchmark model of a metal-fueled, low void worth actinide burner design is detailed, and two independent neutronic performance evaluations are compared. Calculated performance characteristics are evaluated for three spatially uniform compositions (fresh uranium/plutonium, batch-averaged uranium/transuranic, and batch-averaged uranium/transuranic with fission products) and a regional depleted distribution obtained from a benchmark depletion calculation. For each core composition, the flooded and voided multiplication factors, power peaking factor, sodium void worth (and its components), flooded Doppler coefficient, and control rod worth predictions are compared. In addition, the burnup swing, average discharge burnup, peak linear power, and fresh fuel enrichment are calculated for the depletion case. In general, remarkably good agreement is observed between the evaluations. The most significant difference in predicted performance characteristics is a 0.3–0.5% Δk/(kk′) bias in the sodium void worth. Significant differences in the transmutation rate of higher actinides are also observed; however, these differences do not cause discrepancies in the performance predictions.
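The void-worth bias quoted above is expressed in Δk/(kk′) units, i.e. the reactivity difference between the voided and flooded states. A minimal sketch of that conversion; the eigenvalues below are illustrative, not the benchmark's results:

```python
def void_worth(k_flooded, k_voided):
    # Reactivity difference rho_voided - rho_flooded, with rho = (k - 1)/k,
    # which reduces to (k_v - k_f)/(k_f * k_v): the dk/(kk') unit.
    return (k_voided - k_flooded) / (k_flooded * k_voided)


# Hypothetical eigenvalues, NOT taken from the benchmark:
dk = void_worth(1.000, 1.005)
print(f"void worth = {100.0 * dk:.3f} %dk/kk'")
```

A 0.3–0.5% bias in this quantity thus corresponds to a difference of a few hundred pcm between the two evaluations' flooded and voided eigenvalue pairs.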
RISKIND verification and benchmark comparisons
Energy Technology Data Exchange (ETDEWEB)
Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.
1997-08-01
This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.
International Nuclear Information System (INIS)
Cho, Moon-Sung; Kim, Y. M.; Lee, Y. W.; Jeong, K. C.; Kim, Y. K.; Oh, S. C.
2006-01-01
The fundamental design for a gas-cooled reactor relies on an understanding of the behavior of coated particle fuel. KAERI, which has been carrying out the Korean VHTR (Very High Temperature modular gas cooled Reactor) Project since 2004, is developing a fuel performance analysis code for a VHTR named COPA (COated Particle fuel Analysis). COPA predicts temperatures, stresses, fission gas release and failure probabilities of coated particle fuel in normal operating conditions. Validation of COPA in the course of its development is realized partly by participating in the benchmark section of the international CRP-6 program led by the IAEA, which provides comprehensive benchmark problems and analysis results obtained from the CRP-6 member countries. Apart from the validation effort through the CRP-6, a validation of COPA was attempted by comparing its benchmark results with the visco-elastic solutions obtained from ABAQUS code calculations for the same CRP-6 TRISO coated particle benchmark problems involving creep, swelling, and pressure. The study presents the calculation results of the IAEA CRP-6 benchmark cases 5 through 7, obtained using the ABAQUS FE model, for comparison with the COPA results.
IAEA sodium void reactivity benchmark calculations
International Nuclear Information System (INIS)
Hill, R.N.; Finck, P.J.
1992-01-01
In this paper, the IAEA 1992 'Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core' problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated
Energy Technology Data Exchange (ETDEWEB)
Fang, Zongtang; Both, Johan; Li, Shenggang; Yue, Shuwen; Aprà, Edoardo; Keçeli, Murat; Wagner, Albert F.; Dixon, David A.
2016-08-09
The heats of formation and the normalized clustering energies (NCEs) for the group 4 and group 6 transition metal oxide (TMO) trimers and tetramers have been calculated by the Feller-Peterson-Dixon (FPD) method. The heats of formation predicted by the FPD method do not differ much from those previously derived from the NCEs at the CCSD(T)/aT level except for the CrO3 nanoclusters. New and improved heats of formation for Cr3O9 and Cr4O12 were obtained using PW91 orbitals instead of Hartree-Fock (HF) orbitals. Diffuse functions are necessary to predict accurate heats of formation. The fluoride affinities (FAs) are calculated with the CCSD(T) method. The relative energies (REs) of different isomers, NCEs, electron affinities (EAs), and FAs of (MO2)n (M = Ti, Zr, Hf; n = 1–4) and (MO3)n (M = Cr, Mo, W; n = 1–3) clusters have been benchmarked with 55 exchange-correlation DFT functionals including both pure and hybrid types. The absolute errors of the DFT results are mostly less than ±10 kcal/mol for the NCEs and the EAs, and less than ±15 kcal/mol for the FAs. Hybrid functionals usually perform better than the pure functionals for the REs and NCEs. The performance of the two types of functionals in predicting EAs and FAs is comparable. The B1B95 and PBE1PBE functionals provide reliable energetic properties for most isomers. Long range corrected pure functionals usually give poor FAs. The standard deviation of the absolute error is always close to the mean errors and the probability distributions of the DFT errors are often not Gaussian (normal). The breadth of the distribution of errors and the maximum probability are dependent on the energy property and the isomer.
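The error statistics discussed above (mean error, mean absolute error, and the standard deviation of the absolute error) are straightforward to reproduce for any functional-versus-reference data set. A minimal sketch with invented numbers, not data from the paper:

```python
import statistics


def summarize_errors(dft, reference):
    """Signed errors of DFT energies against reference values (kcal/mol);
    returns the mean error, mean absolute error, and the standard
    deviation of the absolute error, the quantities compared in the text."""
    errs = [d - r for d, r in zip(dft, reference)]
    abs_errs = [abs(e) for e in errs]
    return {
        "mean_error": statistics.mean(errs),
        "mean_abs_error": statistics.mean(abs_errs),
        "stdev_abs_error": statistics.stdev(abs_errs),
    }


# Illustrative energies only (kcal/mol):
stats = summarize_errors([101.2, 97.8, 103.5], [100.0, 99.0, 100.0])
print(stats)
```

When the error distribution is far from Gaussian, as the abstract notes, the standard deviation alone understates the risk of large outliers, which is why the breadth and maximum of the distribution are reported separately.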
BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS
Energy Technology Data Exchange (ETDEWEB)
Dewberry, R.; Sigg, R.; Casella, V.; Bhatt, N.
2008-09-29
In these cases the ISOTOPIC analysis program is especially valuable because it allows a rapid, defensible, reproducible analysis of radioactive content without tedious and repetitive experimental measurement of γ-ray transmission through the sample and container at multiple photon energies. The ISOTOPIC analysis technique is also especially valuable in facility holdup measurements, where the acquisition configuration does not fit the accepted generalized geometries for which detector efficiencies have been solved exactly in closed form. Generally in facility passive γ-ray holdup measurements the acquisition geometry is only approximately reproducible, and the sample (object) is an extensive glovebox or HEPA filter component. In these cases high accuracy is rarely achievable; however, demonstrating fissile Pu and U content within criticality safety guidelines yields valuable operating information. Demonstrating such content can be performed with broad assumptions and within broad factors (e.g. 2-8) of conservatism. The ISOTOPIC analysis program yields rapid, defensible analyses of content within acceptable uncertainty and conservatism without extensive repetitive experimental measurements. In addition to transmission-correction determinations based on the mass and composition of objects, the ISOTOPIC program performs finite-geometry corrections based on object shape and dimensions. These geometry corrections are based upon finite-element summation to approximate the exact closed-form integrals. In this report we provide several benchmark comparisons to the same technique provided by the Canberra In Situ Object Counting System (ISOCS) and to the finite-thickness calculations described by Russo in reference 10. This report describes the benchmark comparisons we have performed to demonstrate and document that the ISOTOPIC analysis program yields the results we claim to our customers.
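The finite-element-summation idea mentioned above, replacing a closed-form efficiency integral with a sum over small source elements, can be illustrated for a simple on-axis disk source. This is a generic point-kernel sketch, not the ISOTOPIC algorithm; the geometry and numbers are invented:

```python
import math


def disk_source_response(radius_cm, distance_cm, n_rings=200):
    """Per-unit-source response of a detector on the axis of a uniform
    disk source, built by summing 1/(4*pi*d^2) point kernels over ring
    elements, the finite-element-summation idea described in the text."""
    total, area = 0.0, 0.0
    dr = radius_cm / n_rings
    for i in range(n_rings):
        r = (i + 0.5) * dr                  # ring mid-radius
        a = 2.0 * math.pi * r * dr          # ring area element
        d2 = r * r + distance_cm ** 2       # squared source-to-detector distance
        total += a / (4.0 * math.pi * d2)   # point-kernel contribution
        area += a
    return total / area


# A wide source close to the detector responds less per unit activity
# than a point source at the same standoff:
point = 1.0 / (4.0 * math.pi * 30.0 ** 2)
disk = disk_source_response(25.0, 30.0)
print(disk < point)
```

For this geometry the sum converges to the closed-form result ln(1 + R²/d²)/(4πR²), which is exactly the kind of analytic check such summation schemes are benchmarked against.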
COVE 2A Benchmarking calculations using NORIA
International Nuclear Information System (INIS)
Carrigan, C.R.; Bixler, N.E.; Hopkins, P.L.; Eaton, R.R.
1991-10-01
Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs
Benchmark On Sensitivity Calculation (Phase III)
Energy Technology Data Exchange (ETDEWEB)
Ivanova, Tatiana [IRSN; Laville, Cedric [IRSN; Dyrda, James [Atomic Weapons Establishment; Mennerdahl, Dennis [E. Mennerdahl Systems; Golovko, Yury [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Raskach, Kirill [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Tsiboulia, Anatoly [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Lee, Gil Soo [Korea Institute of Nuclear Safety (KINS); Woo, Sweng-Woong [Korea Institute of Nuclear Safety (KINS); Bidaud, Adrien [Laboratoire de Physique Subatomique et de Cosmologie (LPSC); Patel, Amrit [NRC; Bledsoe, Keith C [ORNL; Rearden, Bradley T [ORNL; Gulliford, J. [OECD Nuclear Energy Agency
2012-01-01
The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
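A direct (brute-force) check of such sensitivity coefficients perturbs one cross section and recomputes keff; the coefficient is the relative change in k per relative change in σ. A minimal one-sided finite-difference sketch with invented eigenvalues, not results from the benchmark participants:

```python
def sensitivity(k_ref, k_perturbed, rel_xs_change):
    """One-sided finite-difference estimate of the keff sensitivity
    coefficient S = (dk/k) / (dsigma/sigma) for a cross section
    perturbed by the given relative amount."""
    return ((k_perturbed - k_ref) / k_ref) / rel_xs_change


# Hypothetical: a +1% change in a capture cross section lowers keff
# from 1.00000 to 0.99850 (illustrative numbers only):
S = sensitivity(1.00000, 0.99850, 0.01)
print(S)
```

Tools such as TSUNAMI instead compute these coefficients by adjoint-based perturbation theory, which yields all energy-dependent sensitivities from a single forward/adjoint solution; the finite-difference form above is only the definition they are verified against.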
Computational Chemistry Comparison and Benchmark Database
SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access) The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.
Benchmark calculation of subchannel analysis codes
International Nuclear Information System (INIS)
1996-02-01
In order to evaluate the analysis capabilities of various subchannel codes used in the thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and the major findings were as follows: (1) For single-phase flow mixing experiments between two channels, the calculated water temperature distributions along the flow direction agreed with the experimental results when the turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) For two-phase flow mixing experiments between two channels, in high water flow rate cases the calculated distributions of air and water flows in each channel agreed well with the experimental results. In low water flow cases, on the other hand, the air mixing rates were underestimated. (3) For two-phase flow mixing experiments among multiple channels, the calculated mass velocities at the channel exit under steady-state conditions agreed with experimental values within about 10%. However, the predictive errors of the exit qualities were as high as 30%. (4) For the critical heat flux (CHF) experiments, two different results were obtained: one code indicated that CHFs calculated with the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that the CHFs were well predicted by the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) For droplet entrainment and deposition experiments, the predictive capability was significantly improved by refining the correlations. On the other hand, a remarkable discrepancy between codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high-quality cases, while another overestimated the droplet flow rate and underestimated the liquid film flow rate in low-quality cases. (J.P.N.)
Benchmarking Ortec ISOTOPIC measurements and calculations
International Nuclear Information System (INIS)
This paper describes eight compiled benchmark tests conducted to probe and demonstrate the extensive utility of the Ortec ISOTOPIC gamma-ray analysis software program. The paper describes tests of the program's capability to compute finite-geometry correction factors and sample-matrix-container photon-absorption correction factors. Favorable results are obtained in all benchmark tests. (author)
Standard Guide for Benchmark Testing of Light Water Reactor Calculations
American Society for Testing and Materials. Philadelphia
2010-01-01
1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...
Danish calculations of the NEACRP pin-power benchmark
International Nuclear Information System (INIS)
Hoejerup, C.F.
1994-01-01
This report describes calculations performed for the NEACRP pin-power benchmark. The calculations are made with the code NEM2D, a diffusion theory code based on the nodal expansion method. (au) (15 tabs., 15 ills., 5 refs.)
Stationary PWR-calculations by means of LWRSIM at the NEACRP 3D-LWRCT benchmark
International Nuclear Information System (INIS)
Van de Wetering, T.F.H.
1993-01-01
Within the framework of participation in an international benchmark, calculations were performed with an adapted version of the computer code Light Water Reactor SIMulation (LWRSIM) for three-dimensional core calculations of pressurized water reactors. The 3-D LWR Core Transient Benchmark was set up to compare 3-D computer codes for transient calculations in LWRs. Participation in the benchmark provided more insight into the accuracy of the code when applied to pressurized water reactors other than the Borssele nuclear power plant in the Netherlands, for which the code was originally developed and used
International Nuclear Information System (INIS)
Grant, C.; Mollerach, R.; Leszczynski, F.; Serra, O.; Marconi, J.; Fink, J.
2006-01-01
In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the previous ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO₂ rods with an active length of 530 cm. For the reactor physics area, a revision and update of reactor physics calculation methods and models was recently carried out, covering cell, supercell (control rod) and core calculations. This paper presents benchmark comparisons of core parameters of a slightly idealized model of the Atucha-I core obtained with the PUMA reactor code and with MCNP5. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II, and has some experimental data available. To validate the new models, benchmark comparisons of k-effective, channel power and axial power distributions obtained with PUMA and MCNP5 have been performed. In addition, a simple cell heterogeneity correction recently introduced in PUMA is presented, which significantly improves the agreement of calculated channel powers with MCNP5. To complete the validation, the calculation of some of the critical configurations of the Atucha-I reactor measured during the experiments performed at first criticality is also presented. (authors)
WIPP Benchmark calculations with the large strain SPECTROM codes
Energy Technology Data Exchange (ETDEWEB)
Callahan, G.D.; DeVries, K.L. [RE/SPEC, Inc., Rapid City, SD (United States)
1995-08-01
This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers, including ten clay seams, of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. The simplified heated room problem does, however, provide a calculational check case in which the small-strain formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large-strain codes compare favorably with results from other codes used to solve the problems.
Compilation report of VHTRC temperature coefficient benchmark calculations
International Nuclear Information System (INIS)
Yasuda, Hideshi; Yamane, Tsuyoshi
1995-11-01
A calculational benchmark problem was proposed by JAERI to an IAEA Coordinated Research Program, 'Verification of Safety Related Neutronic Calculation for Low-enriched Gas-cooled Reactors', to investigate the accuracy of calculation results obtained with the codes of the participating countries. The benchmark is based on assembly heating experiments at a pin-in-block type critical assembly, VHTRC. The requested calculation items are the cell parameters, effective multiplication factor, temperature coefficient of reactivity, reaction rates, fission rate distribution, etc. Seven institutions from five countries joined the benchmark work. The calculation results are summarized in this report with some remarks by the authors. Each institute analyzed the problem with the calculation code system prepared for the HTGR development of its own country. The values of the most important parameter, keff, from all institutes agreed with each other and with the experimental ones within 1%. The temperature coefficients agreed within 13%. The values of several cell parameters calculated by some institutes did not agree with those of the others. It will be necessary to check the calculation conditions again to obtain better agreement. (J.P.N.)
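The temperature coefficient compared above is, in its simplest isothermal form, the reactivity change per unit temperature change between two critical calculations. A minimal sketch with illustrative eigenvalues, not VHTRC data:

```python
def temperature_coefficient(k1, t1, k2, t2):
    """Isothermal temperature coefficient of reactivity, d(rho)/dT,
    with rho = (k - 1)/k, estimated from eigenvalues at two temperatures."""
    rho1 = (k1 - 1.0) / k1
    rho2 = (k2 - 1.0) / k2
    return (rho2 - rho1) / (t2 - t1)


# Illustrative values only (keff at 300 K and 400 K):
alpha = temperature_coefficient(1.0100, 300.0, 1.0050, 400.0)
print(f"{alpha:.3e} per K")   # negative: reactivity falls as T rises
```

A 13% spread among institutions in this quantity is thus a spread in a small difference of two nearly equal eigenvalues, which is why it is far more sensitive to methods and data than keff itself.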
Benchmark Calculations of Noncovalent Interactions of Halogenated Molecules
Czech Academy of Sciences Publication Activity Database
Řezáč, Jan; Riley, Kevin Eugene; Hobza, Pavel
2012-01-01
Roč. 8, č. 11 (2012), s. 4285-4292 ISSN 1549-9618 R&D Projects: GA ČR GBP208/12/G016 Institutional support: RVO:61388963 Keywords : halogenated molecules * noncovalent interactions * benchmark calculations Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 5.389, year: 2012
Benchmark calculations of thermal reaction rates. I - Quantal scattering theory
Chatfield, David C.; Truhlar, Donald G.; Schwenke, David W.
1991-01-01
The thermal rate coefficient for the prototype reaction H + H2 yields H2 + H with zero total angular momentum is calculated by summing, averaging, and numerically integrating state-to-state reaction probabilities calculated by time-independent quantum-mechanical scattering theory. The results are very carefully converged with respect to all numerical parameters in order to provide high-precision benchmark results for confirming the accuracy of new methods and testing their efficiency.
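The procedure described, summing state-to-state probabilities into a cumulative reaction probability and Boltzmann-averaging over energy, can be written compactly in the standard cumulative-reaction-probability form (symbols here are generic, not notation taken from the paper):

```latex
k(T) \;=\; \frac{1}{2\pi\hbar\, Q_{\mathrm{r}}(T)} \int_{0}^{\infty} N(E)\, e^{-E/k_{\mathrm{B}}T}\, \mathrm{d}E,
\qquad
N(E) \;=\; \sum_{n,m} P_{n \leftarrow m}(E),
```

where $Q_{\mathrm{r}}(T)$ is the reactant partition function per unit volume and $N(E)$ is the cumulative reaction probability built from the state-to-state reaction probabilities $P_{n \leftarrow m}(E)$; for the zero-total-angular-momentum case considered in the abstract, only the $J=0$ partial wave contributes to $N(E)$.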
Los Alamos benchmarks: calculations based on ENDF/B-V data
International Nuclear Information System (INIS)
Kidman, R.B.
1981-11-01
The new and revised benchmark specifications for nine Los Alamos National Laboratory critical assemblies are used to compute the entire set of parameters that were measured in the experiments. A comparison between the computed and experimental values provides a measure of the adequacy of the specifications, cross sections, and physics codes used in the calculations
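Comparisons of this kind are conventionally reported as calculated-to-experimental (C/E) ratios, often with the deviation quoted in pcm. A minimal sketch; the eigenvalues below are invented for illustration, not the Los Alamos results:

```python
def ce_ratios(calc, expt):
    """C/E ratios and deviations in pcm (1 pcm = 1e-5) for a set of
    benchmark parameters, e.g. keff of each critical assembly."""
    return [(c / e, (c / e - 1.0) * 1e5) for c, e in zip(calc, expt)]


# Hypothetical eigenvalues for three assemblies (illustrative only):
for ratio, pcm in ce_ratios([0.9985, 1.0021, 0.9992], [1.0, 1.0, 1.0]):
    print(f"C/E = {ratio:.4f} ({pcm:+.0f} pcm)")
```

Systematic departures of C/E from unity across many measured parameters are what reveal biases in the specifications, cross sections, or physics codes.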
Benchmark density functional theory calculations for nanoscale conductance
DEFF Research Database (Denmark)
Strange, Mikkel; Bækgaard, Iben Sig Buur; Thygesen, Kristian Sommer
2008-01-01
We present a set of benchmark calculations for the Kohn-Sham elastic transmission function of five representative single-molecule junctions. The transmission functions are calculated using two different density functional theory methods, namely an ultrasoft pseudopotential plane-wave code in combination with maximally localized Wannier functions, and the norm-conserving pseudopotential code SIESTA, which applies an atomic orbital basis set. All calculations have been converged with respect to the supercell size and the number of k∥ points in the surface plane. For all systems we find ...
JNC results of BN-600 benchmark calculation (phase 4)
International Nuclear Information System (INIS)
Ishikawa, Makoto
2003-01-01
The present work gives the results of JNC, Japan, for Phase 4 of the BN-600 core benchmark problem (Hex-Z fully MOX-fuelled core model) organized by the IAEA. The benchmark specification is based on the RCM report of the IAEA CRP on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of LMFR Reactivity Effects, Action 3.12' (calculations for the BN-600 fully MOX-fuelled core for subsequent transient analyses). The JENDL-3.2 nuclear data library was used to calculate 70-group ABBN-type group constants. Two cell models were applied for the fuel assembly and control rod calculations: a homogeneous and a heterogeneous (cylindrical supercell) model. The basic diffusion calculation used a three-dimensional Hex-Z, 18-group model (CITATION code). Transport calculations were 18-group and three-dimensional (NSHEC code), based on the Sn-transport nodal method developed at JNC. The thermal power generated per fission was based on Sher's data corrected on the basis of the ENDF/B-IV data library. Calculation results are presented in tables for intercomparison
Reactor critical benchmark calculations for burnup credit applications
International Nuclear Information System (INIS)
Renier, J.P.; Parks, C.V.
1990-01-01
In the criticality safety analyses for the development and certification of spent fuel casks, the current approach requires the assumption of 'fresh fuel' isotopics. It has been shown that removing the 'fresh fuel' assumption and using spent fuel isotopics ('burnup credit') greatly increases the payload of spent fuel casks by reducing the reactivity of the fuel. Regulatory approval of burnup credit and the requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. Criticality analyses for low-enriched lattices of fuel pins using the 'fresh fuel isotopics' assumption have been widely benchmarked against applicable critical experiments. However, the same computational methods have not been benchmarked against criticals containing spent fuel, because no spent fuel critical experiments exist. Commercial reactors offer an excellent and inexhaustible source of critical configurations against which criticality analyses can be benchmarked for spent fuel configurations. This document provides brief descriptions of the benchmarks and the computational methods for the criticality analyses. 8 refs., 1 fig., 1 tab
Monte Carlo benchmark calculations for the 400 MWth PBMR core
International Nuclear Information System (INIS)
Kim, H. C.; Kim, J. K.; Kim, S. Y.; Noh, J. M.
2007-01-01
A large interest in high-temperature gas-cooled reactors (HTGR) has arisen in recent years in connection with hydrogen production. In this study, as part of the work to establish a Monte Carlo computation system for HTGR core analysis, benchmark calculations for a pebble-type HTGR were carried out using the MCNP5 code. The core of the 400 MWth Pebble-bed Modular Reactor (PBMR) was selected as the benchmark model. Recently, the IAEA CRP5 neutronics and thermal-hydraulics benchmark problem was proposed for testing existing methods for HTGRs to analyze the neutronic and thermal-hydraulic behavior for the design and safety evaluations of the PBMR. This study deals with the neutronic benchmark problems proposed for the PBMR: fresh fuel and cold conditions (Case F-1), and first core loading with given number densities (Case F-2). After detailed MCNP modeling of the whole facility, benchmark calculations were performed. The spherical fuel region of a fuel pebble is divided into cubic lattice elements in order to model a pebble that contains, on average, 15000 CFPs (coated fuel particles), with one CFP at the center of each element. In this study, the side length each cubic lattice element needs in order to hold the same amount of fuel was calculated to be 0.1635 cm. The remaining volume of each lattice element was filled with graphite. All five concentric shells of the CFP were modeled. The PBMR annular core consists of approximately 452000 pebbles in the benchmark problems. In Case F-1, where the core was filled only with fresh fuel pebbles, a BCC (body-centered cubic) lattice model was employed in order to achieve a randomly packed core with a packing fraction of 0.61. In Case F-2, the BCC lattice was also employed, with the size of the moderator pebble increased in a manner that reproduces the specified F/M ratio of 1:2 while preserving the packing fraction of 0.61. The calculations were pursued with the ENDF/B-VI cross-section library and used the sab2002 S(α,
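The quoted lattice-element side length follows from simple geometry. A quick check, assuming the standard PBMR pebble fuel-zone radius of 2.5 cm (a design value, not stated in this abstract):

```python
import math

# Sketch of the cubic-lattice sizing described in the abstract.
# Assumption: the fuelled inner zone of a pebble has radius 2.5 cm and holds
# ~15000 coated fuel particles (CFPs), one per cubic lattice element.
fuel_zone_radius_cm = 2.5
n_cfp = 15000

fuel_zone_volume = 4.0 / 3.0 * math.pi * fuel_zone_radius_cm ** 3  # ~65.45 cm^3
element_volume = fuel_zone_volume / n_cfp   # equal-fuel volume per element
side = element_volume ** (1.0 / 3.0)        # cube side length

print(f"side length = {side:.4f} cm")       # ~0.163 cm, consistent with 0.1635 cm
```

The cube root of the per-particle volume reproduces the 0.1635 cm figure to within rounding.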
Czech Academy of Sciences Publication Activity Database
Mládek, Arnošt; Krepl, Miroslav; Svozil, Daniel; Čech, P.; Otyepka, M.; Banáš, P.; Zgarbová, M.; Jurečka, P.; Šponer, Jiří
2013-01-01
Roč. 15, č. 19 (2013), s. 7295-7310 ISSN 1463-9076 R&D Projects: GA ČR(CZ) GAP208/11/1822 Grant - others:GA MŠk(CZ) ED1.1.00/02.0068 Program:ED Institutional research plan: CEZ:AV0Z50040702 Institutional support: RVO:68081707 Keywords : GAUSSIAN-BASIS SETS * GENERALIZED GRADIENT APPROXIMATION * CORRELATED MOLECULAR CALCULATIONS Subject RIV: BO - Biophysics Impact factor: 4.198, year: 2013
Actinides transmutation - a comparison of results for PWR benchmark
International Nuclear Information System (INIS)
Claro, Luiz H.
2009-01-01
The physical aspects involved in the partitioning and transmutation (P and T) of minor actinides (MA) and fission products (FP) generated by PWR reactors are of great interest to the nuclear industry. In addition, reducing the amount of radioactive waste in storage is tied to the public acceptability of nuclear electric power. Among the several partitioning and transmutation concepts suggested in the literature, one involves PWR reactors that burn fuel containing plutonium and minor actinides reprocessed from the UO2 used in previous stages. This work presents the results of a P and T benchmark calculation carried out with the WIMSD5B program, using its new cross-section library generated from ENDF/B-VII, and compares them with results published in the literature for other calculations. For the comparison, the benchmark transmutation concept based on a typical PWR cell was used, and the analyzed results were k∞ and the atomic densities of the isotopes Np-239, Pu-241, Pu-242 and Am-242m as a function of burnup, considering a discharge burnup of 50 GWd/tHM. (author)
Benchmark Kantorovich calculations for three particles on a line
Energy Technology Data Exchange (ETDEWEB)
Chuluunbaatar, O [Joint Institute for Nuclear Research, Dubna, Moscow region 141980 (Russian Federation); Gusev, A A [Joint Institute for Nuclear Research, Dubna, Moscow region 141980 (Russian Federation); Kaschiev, M S [Institute of Mathematics and Informatics, Sofia (Bulgaria); Kaschieva, V A [Department of Mathematics, Technical University, Sofia (Bulgaria); Amaya-Tapia, A [Centro de Ciencias Fisicas, UNAM, Cuernavaca, Morelos (Mexico); Larsen, S Y [Temple University, Philadelphia (United States); Vinitsky, S I [Joint Institute for Nuclear Research, Dubna, Moscow region 141980 (Russian Federation)
2006-01-28
A Kantorovich approach is used to solve for the eigenvalue and the scattering properties associated with a multi-dimensional Schroedinger equation. It is developed within the framework of a conventional finite element representation of solutions over a hyperspherical coordinate space. Convergence and efficiency of the proposed schemes are demonstrated in the case of an exactly solvable 'benchmark' model of three identical particles on a line, with zero-range attractive pair potentials and below the three-body threshold. In this model all the 'effective' potentials, and 'coupling matrix elements', of the set of resulting close-coupling radial equations, are calculated using analytical formulae. Variational formulations are developed for both the bound-state energy and the elastic scattering problem. The corresponding numerical schemes are devised using a finite element method of high order accuracy.
Proposal of a benchmark for core burnup calculations for a VVER-1000 reactor core
International Nuclear Information System (INIS)
Loetsch, T.; Khalimonchuk, V.; Kuchin, A.
2009-01-01
In the framework of a project supported by the German BMU, the code DYN3D is to be further validated and verified. During this work, the lack of a benchmark on core burnup calculations for VVER-1000 reactors was noticed. Such a benchmark is useful for validating and verifying the whole package of codes and data libraries for reactor physics calculations, including fuel assembly modelling, fuel assembly data preparation, few-group data parametrisation and reactor core modelling. The proposed benchmark specifies the core loading patterns of burnup cycles for a VVER-1000 reactor core, as well as a set of operational data such as load follow, boron concentration in the coolant, cycle length, measured reactivity coefficients and power density distributions. The reactor core characteristics chosen for comparison, and the first results obtained with the reactor physics code DYN3D, are presented. This work continues the efforts of the projects mentioned above to estimate the accuracy of calculated characteristics of VVER-1000 reactor cores. In addition, the codes used for reactor physics calculations of safety-related reactor core characteristics should be validated and verified for the cases in which they are to be used. This is significant for safety-related evaluations and assessments carried out in the framework of licensing and supervision procedures in the field of reactor physics. (authors)
International Nuclear Information System (INIS)
Carew, John F.; Finch, Stephen J.; Lois, Lambros
2003-01-01
this analysis are generic and may be applied in benchmarking applications where the M/C comparisons are used to determine an adjustment of the calculations
Subgroup Benchmark Calculations for the Intra-Pellet Nonuniform Temperature Cases
Energy Technology Data Exchange (ETDEWEB)
Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jung, Yeon Sang [Seoul National Univ. (Korea, Republic of); Liu, Yuxuan [Univ. of Michigan, Ann Arbor, MI (United States); Joo, Han Gyu [Seoul National Univ. (Korea, Republic of)
2016-08-01
A benchmark suite has been developed by Seoul National University (SNU) for intra-pellet nonuniform temperature distribution cases, based on practical temperature profiles according to thermal power levels. Although a new subgroup capability for nonuniform temperature distributions was implemented in MPACT, no validation calculation had been performed for the new capability. This study focuses on benchmarking the new capability through a code-to-code comparison. Two continuous-energy Monte Carlo codes, McCARD and CE-KENO, are used to obtain reference solutions, and the MPACT results are compared with those of SNU's nTRACER, which uses a similar cross-section library and subgroup method to obtain self-shielded cross sections.
OECD/NEA burnup credit calculational criticality benchmark Phase I-B results
International Nuclear Information System (INIS)
DeHart, M.D.; Parks, C.V.; Brady, M.C.
1996-06-01
In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is assumed for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Most deviations are less than 10%, and many are less than 5%. The exceptions are Sm-149, Sm-151, and Gd-155
OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results
Energy Technology Data Exchange (ETDEWEB)
DeHart, M.D.
1993-01-01
Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods are in agreement to within 10% in the ability to estimate the spent fuel concentrations of most actinides. All methods are within 11% agreement about the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are {sup 149}Sm, {sup 151}Sm, and {sup 155}Gd.
Calculation of WWER-440 nuclide benchmark (CB2)
International Nuclear Information System (INIS)
Prodanova, R
2005-01-01
The present paper shows the results obtained at the INRNE, Sofia, Bulgaria, for the benchmark task announced by L. Markova at the sixth Symposium of AE, Kirkkonummi, Finland, 1996. (Authors)
Energy Technology Data Exchange (ETDEWEB)
Renner, Franziska [Physikalisch-Technische Bundesanstalt (PTB), Braunschweig (Germany)
2016-11-01
Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide.
239Pu Resonance Evaluation for Thermal Benchmark System Calculations
Leal, L. C.; Noguere, G.; de Saint Jean, C.; Kahler, A. C.
2014-04-01
Analyses of thermal plutonium solution critical benchmark systems have indicated a deficiency in the 239Pu resonance evaluation. To investigate possible solutions to this issue, the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) Working Party for Evaluation Cooperation (WPEC) established Subgroup 34 to focus on the reevaluation of the 239Pu resolved resonance parameters. In addition, the impacts of the prompt neutron multiplicity (νbar) and the prompt neutron fission spectrum (PFNS) have been investigated. The objective of this paper is to present the results of the 239Pu resolved resonance evaluation effort.
International Nuclear Information System (INIS)
Chen Fubing; Dong Yujie; Zheng Yanhua; Shi Lei; Zhang Zuoyi
2009-01-01
Within the framework of a Coordinated Research Project on Evaluation of High Temperature Gas-Cooled Reactor Performance (CRP-5) initiated by the International Atomic Energy Agency (IAEA), the calculation of the steady-state temperature distribution of the 10 MW High Temperature Gas-Cooled Reactor-Test Module (HTR-10) under its initial full-power experimental operation has been defined as one of the benchmark problems. This paper gives the results obtained by the different countries participating in this benchmark problem. The validation work on the THERMIX code used by the Institute of Nuclear and New Energy Technology (INET) is also presented. For the benchmark items defined in this CRP, the various calculation results correspond well with each other and basically agree with the experimental results. Discrepancies among the various code results are preliminarily attributed to the different methods, models, material properties, and so on, used in the computations. Temperatures calculated by THERMIX for the measuring points in the reactor internals agree well with the experimental values. The maximum fuel center temperatures calculated by the participants are much lower than the limit of 1230 deg C. Based on the code-to-code and code-to-experiment comparisons, THERMIX is considered to reproduce relatively satisfactory results for the CRP-5 benchmark problem. (author)
Aagesta-BR3 Decommissioning Cost. Comparison and Benchmarking Analysis
International Nuclear Information System (INIS)
Varley, Geoff
2002-11-01
decontamination activity. A factor of 15 difference in ion exchange resin volume is puzzling, and no explanation has been found other than that the volumes were 'estimated' rather than calculated in accordance with a clearly defined method statement. The work analysed for comparison included preparation work before the actual dismantling of the primary pipes and the auxiliary circuits, plus the actual cutting of the primary pipes into small pieces, 0.8 m long, to fit in the chemical reactor of the BR3 decontamination process. The estimated grand total of resources required was 4,734 hours, for a unit requirement of 740 hours/MT. This is very close to the Westinghouse Test Reactor benchmark figure for this activity. The total resource requirements for primary pipework dismantling were dominated by the preparatory activities rather than by the cutting activity itself. A comparison with the Aagesta cost estimate would be possible only if a more detailed breakdown of projected manhour information could be provided for Aagesta. The basis for the preparatory work and actual removal of the Aagesta RPV is unclear and may not have been particularly rigorous. Comparison with BR3 benchmarking data suggests that the Aagesta estimate for the RPV could be significantly low. However, actual experience of steam generator (SG) removal at Aagesta provides evidence of very efficient execution of similar work, which might contradict the BR3 experience. The available data do not support a detailed conclusion. A clear possibility is that the nature of the two jobs (SG and RPV) is radically different, due to size, radiological conditions, physical access, etc., or a combination of all these factors, with the RPV work being more demanding. If this is correct, the Aagesta SG experience may not be particularly relevant, whilst the BR3 experience would indicate the need for further scrutiny of the Aagesta RPV estimate
Benchmark calculations on residue production within the EURISOL DS project; Part I: thin targets
David, J.C; Boudard, A; Doré, D; Leray, S; Rapp, B; Ridikas, D; Thiollière, N
Report on benchmark calculations of residue production in thin targets. Calculations were performed using MCNPX 2.5.0 coupled to a selection of reaction models. The results were compared to nuclide production cross-sections measured at GSI in inverse kinematics
Benchmark calculations on residue production within the EURISOL DS project; Part II: thick targets
David, J.-C; Boudard, A; Doré, D; Leray, S; Rapp, B; Ridikas, D; Thiollière, N
Benchmark calculations of residue production using MCNPX 2.5.0. Calculations were compared to mass-distribution data for 5 different elements measured at ISOLDE, and to specific activities of 28 radionuclides at different positions along the thick target measured at Dubna.
Benchmark calculations for evaluation methods of gas volumetric leakage rate
International Nuclear Information System (INIS)
Asano, R.; Aritomi, M.; Matsuzaki, M.
1998-01-01
The containment function of radioactive material transport casks is essential for safe transportation, to prevent radioactive materials from being released into the environment. Regulations such as the IAEA standards determine the limit of radioactivity that may be released. Since it is not practical for leakage tests to measure directly the radioactivity released from a package, gas volumetric leakage rates are proposed instead in the ANSI N14.5 and ISO standards. In our previous work, gas volumetric leakage rates for several kinds of gas from various leaks were measured, and two evaluation methods, 'a simple evaluation method' and 'a strict evaluation method', were proposed based on the results. The simple evaluation method considers the friction loss of laminar flow with an expansion effect. The strict evaluation method considers an exit loss in addition to the friction loss. In this study, four worked examples were completed for an assumed large spent fuel transport cask (Type B package) with a wet or dry cavity and under three transport conditions: normal transport with intact fuel or failed fuel, and an accident in transport. The standard leakage rates and the criteria for two kinds of leak test were calculated for each example by each evaluation method. The following observations are made based upon the calculations and evaluations: the choked flow model of the ANSI method greatly overestimates the criteria for the tests; the laminar flow models of both the ANSI and ISO methods slightly overestimate the criteria for the tests; these two results are within the design margin for ordinary transport conditions, so all the methods are useful for the evaluation; for severe conditions such as failed fuel transportation, care should be taken when applying the choked flow model of the ANSI method. (authors)
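As a rough illustration of the kind of laminar-flow leak-rate estimate that such evaluation methods refine, the following is a textbook isothermal Hagen-Poiseuille sketch for gas flow through an idealized capillary leak. It is not the ANSI N14.5 or ISO formula discussed above (those add expansion and exit-loss corrections), and all numbers are hypothetical:

```python
import math

def laminar_leak_rate(d_cm, length_cm, mu_pa_s, p_up_pa, p_down_pa):
    """Volumetric flow (cm^3/s), referenced to the mean pressure Pm, for
    isothermal laminar flow of a gas through a straight capillary:
        Q(Pm) = pi * D^4 * (Pu^2 - Pd^2) / (256 * mu * L * Pm)
    """
    d_m = d_cm / 100.0
    l_m = length_cm / 100.0
    p_mean = 0.5 * (p_up_pa + p_down_pa)
    q_m3_s = math.pi * d_m ** 4 * (p_up_pa ** 2 - p_down_pa ** 2) / (
        256.0 * mu_pa_s * l_m * p_mean
    )
    return q_m3_s * 1.0e6  # m^3/s -> cm^3/s

# Hypothetical 10-micron-diameter leak, 1 cm long, helium-like viscosity,
# 2 atm upstream venting to 1 atm downstream.
q = laminar_leak_rate(d_cm=1.0e-3, length_cm=1.0, mu_pa_s=2.0e-5,
                      p_up_pa=202650.0, p_down_pa=101325.0)
print(f"leak rate ~ {q:.2e} cm^3/s")
```

The D^4 dependence is the point of interest: halving the leak diameter cuts the flow sixteen-fold, which is why measured leak rates discriminate sharply between leak sizes.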
BENCHMARKING UPGRADED HOTSPOT DOSE CALCULATIONS AGAINST MACCS2 RESULTS
Energy Technology Data Exchange (ETDEWEB)
Brotherton, Kevin
2009-04-30
The radiological consequence of interest for a documented safety analysis (DSA) is the centerline Total Effective Dose Equivalent (TEDE) incurred by the Maximally Exposed Offsite Individual (MOI) evaluated at the 95th percentile consequence level. An upgraded version of HotSpot (Version 2.07) has been developed with the capabilities to read site meteorological data and perform the necessary statistical calculations to determine the 95th percentile consequence result. These capabilities should allow HotSpot to join MACCS2 (Version 1.13.1) and GENII (Version 1.485) as radiological consequence toolbox codes in the Department of Energy (DOE) Safety Software Central Registry. Using the same meteorological data file, scenarios involving a one curie release of {sup 239}Pu were modeled in both HotSpot and MACCS2. Several sets of release conditions were modeled, and the results compared. In each case, input parameter specifications for each code were chosen to match one another as much as the codes would allow. The results from the two codes are in excellent agreement. Slight differences observed in results are explained by algorithm differences.
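The 95th-percentile step described above can be sketched as follows. This is a purely illustrative nearest-rank percentile over hypothetical per-record doses, not HotSpot's or MACCS2's actual implementation:

```python
import math

def percentile_95(doses):
    """95th percentile of a sequence of doses, nearest-rank method:
    the ceil(0.95 * n)-th smallest value (1-based rank)."""
    ordered = sorted(doses)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]

# 100 hypothetical dose results, one per meteorological record.
doses = [0.1 * i for i in range(1, 101)]
print(percentile_95(doses))  # -> 9.5
```

With one dose computed per hourly meteorological record, the reported consequence is the value exceeded by only 5% of the weather trials.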
Benchmark Calculations for Electron Collisions with Complex Atoms
International Nuclear Information System (INIS)
Zatsarinny, Oleg; Bartschat, Klaus
2014-01-01
The B-spline R-matrix (BSR) approach [1,2] is based on the non-perturbative close-coupling method. As such it is, in principle, based on an exact expansion of the solution of the time-independent Schrödinger equation as an infinite sum/integral of N-electron target states coupled to the wave function of the scattering projectile. The N-electron target states, in turn, can in principle be calculated with almost arbitrary accuracy using sufficiently large configuration-interaction expansions and the correct interaction Hamiltonian. In practice, of course, the infinite expansions have to be cut off in some way, and the exact Hamiltonian may not be available. In the collision part of the BSR method, the integral over the ionization continuum and the infinite sum over high-lying Rydberg states are replaced by a finite sum over square-integrable pseudo-states. Also, a number of inner shells are treated as (partially) inert, i.e., a minimum number of electrons is required in those subshells.
Benchmark calculations on fluid coupled co-axial cylinders typical to LMFBR structures
International Nuclear Information System (INIS)
Dostal, M.; Descleve, P.; Gantenbein, F.; Lazzeri, L.
1983-01-01
This paper describes a joint effort promoted and funded by the Commission of the European Communities, under the umbrella of the Fast Reactor Co-ordinating Committee and Working Group on Codes and Standards No. 2, with the purpose of testing several programs currently used for the dynamic analysis of fluid-coupled structures. The scope of the benchmark calculations is limited to beam-type modes of vibration, small displacements of the structures and small pressure variations, such as those encountered in seismic or flow-induced vibration problems. Five computer codes were used: ANSYS, AQUAMODE, NOVAX, MIAS/SAP4 and ZERO, each employing a different structural-fluid formulation. The calculations were performed for four different geometrical configurations of concentric cylinders, in which the effects of gap size, water level and support conditions were considered. The analytical work was accompanied by experiments carried out on a purpose-built rig. The test rig consisted of two concentric cylinders independently supported on flexible cantilevers. The geometrical simplicity, and the attention paid in the rig design to eliminating structural coupling between the cylinders, led to unambiguous test results. Only the beam natural frequencies, in phase and out of phase, were measured. The comparison of the different analytical methods and the experimental results is presented and discussed. The degree of agreement varied between very good and unacceptable. (orig./GL)
Benchmark calculations of the solution-fuel criticality experiments by SRAC code system
International Nuclear Information System (INIS)
Senuma, Ichiro; Miyoshi, Yoshinori; Suzaki, Takenori; Kobayashi, Iwao
1984-06-01
Benchmark calculations were performed using the newly developed SRAC (Standard Reactor Analysis Code) system and a nuclear data library based upon JENDL-2. The 34 benchmarks include a variety of compositions, concentrations and configurations of Pu homogeneous and U/Pu homogeneous systems (mainly nitrates), and also include UO2/PuO2 rods in fissile solution: a simplified model of the dissolver process of a fuel reprocessing plant. The calculation results show good agreement with the Monte Carlo method. This code-evaluation work has been done as part of the detailed design of CSEF (Critical Safety Experimental Facility), which is now in progress. (author)
International Nuclear Information System (INIS)
Gillan, M. J.; Alfè, D.; Manby, F. R.
2015-01-01
The quantum Monte Carlo (QMC) technique is used to generate accurate energy benchmarks for methane-water clusters containing a single methane monomer and up to 20 water monomers. The benchmarks for each type of cluster are computed for a set of geometries drawn from molecular dynamics simulations. The accuracy of QMC is expected to be comparable with that of coupled-cluster calculations, and this is confirmed by comparisons for the CH4-H2O dimer. The benchmarks are used to assess the accuracy of the second-order Møller-Plesset (MP2) approximation close to the complete basis-set limit. A recently developed embedded many-body technique is shown to give an efficient procedure for computing basis-set converged MP2 energies for the large clusters. It is found that MP2 values for the methane binding energies and the cohesive energies of the water clusters without methane are in close agreement with the QMC benchmarks, but the agreement is aided by partial cancelation between 2-body and beyond-2-body errors of MP2. The embedding approach allows MP2 to be applied without loss of accuracy to the methane hydrate crystal, and it is shown that the resulting methane binding energy and the cohesive energy of the water lattice agree almost exactly with recently reported QMC values
D.C. Blitz (David)
2011-01-01
Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.
DEFF Research Database (Denmark)
Grandjean, Philippe; Budtz-Joergensen, Esben
2013-01-01
follow-up of a Faroese birth cohort were used. Serum PFC concentrations were measured at age 5 years, and serum antibody concentrations against tetanus and diphtheria toxoids were obtained at age 7 years. Benchmark dose results were calculated in terms of serum concentrations for 431 children...
A benchmark for comparison of cell tracking algorithms.
Maška, Martin; Ulman, Vladimír; Svoboda, David; Matula, Pavel; Matula, Petr; Ederra, Cristina; Urbiola, Ainhoa; España, Tomás; Venkatesan, Subramanian; Balak, Deepak M W; Karas, Pavel; Bolcková, Tereza; Streitová, Markéta; Carthel, Craig; Coraluppi, Stefano; Harder, Nathalie; Rohr, Karl; Magnusson, Klas E G; Jaldén, Joakim; Blau, Helen M; Dzyubachyk, Oleh; Křížek, Pavel; Hagen, Guy M; Pastor-Escuredo, David; Jimenez-Carretero, Daniel; Ledesma-Carbayo, Maria J; Muñoz-Barrutia, Arrate; Meijering, Erik; Kozubek, Michal; Ortiz-de-Solorzano, Carlos
2014-06-01
Automatic tracking of cells in multidimensional time-lapse fluorescence microscopy is an important task in many biomedical applications. A novel framework for objective evaluation of cell tracking algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2013 Cell Tracking Challenge. In this article, we present the logistics, datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of a comprehensive video dataset repository and the definition of objective measures for comparison and ranking of the algorithms. With this benchmark, six algorithms covering a variety of segmentation and tracking paradigms have been compared and ranked based on their performance on both synthetic and real datasets. Given the diversity of the datasets, we do not declare a single winner of the challenge. Instead, we present and discuss the results for each individual dataset separately. The challenge Web site (http://www.codesolorzano.com/celltrackingchallenge) provides access to the training and competition datasets, along with the ground truth of the training videos. It also provides access to Windows and Linux executable files of the evaluation software and most of the algorithms that competed in the challenge. © The Author 2014. Published by Oxford University Press.
The fifth Atomic Energy Research dynamic benchmark calculation with HEXTRAN-SMABRE
International Nuclear Information System (INIS)
Haenaelaeinen, Anitta
1998-01-01
The fifth Atomic Energy Research dynamic benchmark is the first Atomic Energy Research benchmark for the coupling of thermohydraulic codes and three-dimensional reactor dynamics core models. At VTT, HEXTRAN 2.7 is used for the core dynamics and SMABRE 4.6 as the thermohydraulic model for the primary and secondary loops. The plant model for SMABRE is based mainly on two input models: the Loviisa model and the standard WWER-440/213 plant model. The primary circuit includes six separate loops, with a total of 505 nodes and 652 junctions. The reactor pressure vessel is divided into six parallel channels. In the HEXTRAN calculation, 1/6 symmetry is used in the core. In the main steam header break sequence at the hot standby state, the liquid temperature decreases symmetrically at the core inlet, which leads to a return to power. In the benchmark, no isolation of the steam generators is assumed, and in the HEXTRAN-SMABRE calculation the maximum core power is about 38% of nominal power four minutes after the break opening. Due to boric acid in the high-pressure safety injection water, the power finally starts to decrease. The break flow is pure steam in the HEXTRAN-SMABRE calculation during the whole transient, even though the swell levels in the steam generators are very high due to flashing. Because of sudden peaks in the preliminary results for the steam generator heat transfer, the SMABRE drift-flux model was modified. The new model is a simplified version of the EPRI correlation based on test data. The modified correlation behaves smoothly. In the calculations the nuclear data are based on the ENDF/B-IV library and have been evaluated with the CASMO-HEX code. The importance of the nuclear data was illustrated by repeating the benchmark calculation with three different data sets. Optimal extensive data valid from hot to cold conditions were not available for all types of fuel enrichments needed in this benchmark. (Author)
Validation of VHTRC calculation benchmark of critical experiment using the MCB code
Directory of Open Access Journals (Sweden)
Stanisz Przemysław
2016-01-01
Full Text Available The calculation benchmark problem Very High Temperature Reactor Critical (VHTRC), a pin-in-block type core critical assembly, has been investigated with the Monte Carlo Burnup (MCB) code in order to validate the latest version of the Nuclear Data Library based on the ENDF format. The benchmark was executed on the basis of the VHTRC benchmark available from the International Handbook of Evaluated Reactor Physics Benchmark Experiments. This benchmark is useful for verifying the discrepancies in keff values between various libraries and experimental values, which makes it possible to improve the accuracy of neutron transport calculations and may help in designing high-performance commercial VHTRs. Almost all safety parameters depend on the accuracy of the neutron transport calculation results, which in turn depend on the accuracy of the nuclear data libraries. Thus, evaluation of the libraries' applicability to VHTR modelling is one of the important subjects. We compared the numerical experiment results with experimental measurements using two versions of the available nuclear data (ENDF-B-VII.1 and JEFF-3.2) prepared for the required temperatures. Calculations have been performed with the MCB code, which allows a very precise representation of the complex VHTR geometry, including the double heterogeneity of a fuel element. In this paper, together with the impact of nuclear data, we also discuss the impact of different lattice modelling inside the fuel pins. The discrepancies of keff have been observed to show good agreement with each other and with the experimental data within the 1 σ range of the experimental uncertainty. Because some propagated discrepancies were observed, we proposed appropriate corrections to the experimental constants which can improve the reactivity coefficient dependency. The obtained results confirm the accuracy of the new Nuclear Data Libraries.
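The 1σ agreement check described above can be sketched in a few lines; the k-eff and uncertainty values below are illustrative placeholders, not the benchmark's actual results:

```python
def within_one_sigma(k_calc, k_exp, sigma_exp):
    """True if a calculated k-eff lies inside the 1-sigma band
    of the experimental value."""
    return abs(k_calc - k_exp) <= sigma_exp

# Hypothetical measured k-eff with its uncertainty, and results
# obtained with two nuclear data libraries.
k_exp, sigma = 1.00000, 0.00060
for lib, k in {"ENDF/B-VII.1": 1.00042, "JEFF-3.2": 0.99951}.items():
    print(lib, within_one_sigma(k, k_exp, sigma))
```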
OECD/NEA benchmark for time-dependent neutron transport calculations without spatial homogenization
Energy Technology Data Exchange (ETDEWEB)
Hou, Jason, E-mail: jason.hou@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Ivanov, Kostadin N. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Boyarinov, Victor F.; Fomichenko, Peter A. [National Research Centre “Kurchatov Institute”, Kurchatov Sq. 1, Moscow (Russian Federation)
2017-06-15
Highlights: • A time-dependent homogenization-free neutron transport benchmark was created. • The first phase, known as the kinetics phase, is described in this work. • Preliminary results for selected 2-D transient exercises are presented. - Abstract: A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for time-dependent neutron transport calculations without spatial homogenization has been established in order to facilitate the development and assessment of numerical methods for solving the space-time neutron kinetics equations. The benchmark has been named the OECD/NEA C5G7-TD benchmark, and was later extended with three consecutive phases, each corresponding to one modelling stage of the multi-physics transient analysis of the nuclear reactor core. This paper provides a detailed introduction of the benchmark specification of Phase I, known as the “kinetics phase”, including the geometry description, supporting neutron transport data, transient scenarios in both two-dimensional (2-D) and three-dimensional (3-D) configurations, as well as the expected output parameters from the participants. Also presented are the preliminary results for the initial state 2-D core and selected transient exercises that have been obtained using the Monte Carlo method and the Surface Harmonic Method (SHM), respectively.
Benchmark Problems of the Geothermal Technologies Office Code Comparison Study
Energy Technology Data Exchange (ETDEWEB)
White, Mark D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Podgorney, Robert [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kelkar, Sharad M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McClure, Mark W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Danko, George [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ghassemi, Ahmad [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fu, Pengcheng [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bahrami, Davood [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Barbier, Charlotte [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cheng, Qinglu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Chiu, Kit-Kwan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Detournay, Christine [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elsworth, Derek [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fang, Yi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Furtney, Jason K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gan, Quan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gao, Qian [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Guo, Bin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hao, Yue [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Horne, Roland N. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Huang, Kai [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Im, Kyungjae [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Norbeck, Jack [Pacific Northwest National Lab. 
(PNNL), Richland, WA (United States); Rutqvist, Jonny [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Safari, M. R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sesetty, Varahanaresh [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sonnenthal, Eric [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tao, Qingfeng [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); White, Signe K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wong, Yang [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xia, Yidong [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2016-12-02
A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas some others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems
EA-MC Neutronic Calculations on IAEA ADS Benchmark 3.2
Energy Technology Data Exchange (ETDEWEB)
Dahlfors, Marcus [Uppsala Univ. (Sweden). Dept. of Radiation Sciences; Kadi, Yacine [CERN, Geneva (Switzerland). Emerging Energy Technologies
2006-01-15
The neutronics and the transmutation properties of the IAEA ADS benchmark 3.2 setup, the 'Yalina' experiment or ISTC project B-70, have been studied through an extensive set of 3-D Monte Carlo calculations at CERN. The simulations were performed with the state-of-the-art computer code package EA-MC, developed at CERN. The calculational approach is outlined and the results are presented in accordance with the guidelines given in the benchmark description. A variety of experimental conditions and parameters are examined; three different fuel rod configurations and three types of neutron sources are applied to the system. Reactivity change effects introduced by removal of fuel rods in both central and peripheral positions are also computed. Irradiation samples located in a total of 8 geometrical positions are examined. Calculations of capture reaction rates in {sup 129}I, {sup 237}Np and {sup 243}Am samples and of fission reaction rates in {sup 235}U, {sup 237}Np and {sup 243}Am samples are presented. Simulated neutron flux densities and energy spectra as well as spectral indices inside experimental channels are also given according to the benchmark specifications. Two different nuclear data libraries, JAR-95 and JENDL-3.2, are applied for the calculations.
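A spectral index of the kind reported above is simply the ratio of two reaction rates evaluated in the same experimental channel; a minimal sketch with made-up rates (the isotope pair and numbers are illustrative, not benchmark values):

```python
def spectral_index(rate_sample, rate_ref):
    """Spectral index: reaction rate in a sample isotope divided by a
    reference reaction rate in the same channel (e.g. Np-237 fission
    rate over U-235 fission rate)."""
    return rate_sample / rate_ref

# Hypothetical per-atom reaction rates (arbitrary units).
f_np237, f_u235 = 2.1e-17, 8.4e-16
print(spectral_index(f_np237, f_u235))
```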
RTU Comparison Calculator Enhancement Plan
Energy Technology Data Exchange (ETDEWEB)
Miller, James D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wang, Weimin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Katipamula, Srinivas [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-07-01
Over the past two years, the Department of Energy’s Building Technologies Office (BTO) has been investigating ways to increase the operating efficiency of packaged rooftop units (RTUs) in the field: first, by issuing a challenge to RTU manufacturers to increase the integrated energy efficiency ratio (IEER) by 60% over the existing ASHRAE 90.1-2010 standard; and second, by evaluating the performance of an advanced RTU controller that reduces energy consumption by over 40%. BTO has also previously funded development of an RTU comparison calculator (RTUCC). RTUCC is a web-based tool that gives the user a way to compare energy and cost savings for two units with different efficiencies. However, the RTUCC currently cannot compare savings associated with either the RTU Challenge unit or the advanced RTU controls retrofit. Therefore, BTO has asked PNNL to enhance the tool so building owners can compare energy and cost savings associated with this new class of products. This document provides the details of the enhancements required to support estimating energy savings from the use of RTU Challenge units or advanced controls on existing RTUs.
RTU Comparison Calculator Enhancement Plan
Energy Technology Data Exchange (ETDEWEB)
Miller, James D.; Wang, Weimin; Katipamula, Srinivas
2014-03-31
Over the past two years, the Department of Energy’s Building Technologies Office (BTO) has been investigating ways to increase the operating efficiency of packaged rooftop units (RTUs) in the field: first, by issuing a challenge to RTU manufacturers to increase the integrated energy efficiency ratio (IEER) by 60% over the existing ASHRAE 90.1-2010 standard; and second, by evaluating the performance of an advanced RTU controller that reduces energy consumption by over 40%. BTO has also previously funded development of an RTU comparison calculator (RTUCC). RTUCC is a web-based tool that gives the user a way to compare energy and cost savings for two units with different efficiencies. However, the RTUCC currently cannot compare savings associated with either the RTU Challenge unit or the advanced RTU controls retrofit. Therefore, BTO has asked PNNL to enhance the tool so building owners can compare energy and cost savings associated with this new class of products. This document provides the details of the enhancements required to support estimating energy savings from the use of RTU Challenge units or advanced controls on existing RTUs.
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen
2016-01-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.
2016-12-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
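The time-oriented sharding described above, splitting a long time series into contiguous chunks so that each node processes its own slice in parallel, can be sketched generically (this is an illustration of the idea, not any specific system's partitioner):

```python
def chunk_time_steps(n_steps, n_shards):
    """Split n_steps time indices into n_shards contiguous chunks whose
    sizes differ by at most one step, for distribution across nodes."""
    base, extra = divmod(n_steps, n_shards)
    chunks, start = [], 0
    for i in range(n_shards):
        size = base + (1 if i < extra else 0)  # front-load the remainder
        chunks.append(range(start, start + size))
        start += size
    return chunks

# 10 time steps spread over 3 worker nodes.
print([list(c) for c in chunk_time_steps(10, 3)])
```

Chunking by time rather than by file is what makes long-time-series analyses tractable when the source datasets store only one or a few time steps per file.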
COVE 2A Benchmarking calculations using NORIA; Yucca Mountain Site Characterization Project
Energy Technology Data Exchange (ETDEWEB)
Carrigan, C.R.; Bixler, N.E.; Hopkins, P.L.; Eaton, R.R.
1991-10-01
Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs.
Numerical simulations of concrete flow: A benchmark comparison
DEFF Research Database (Denmark)
Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano
2016-01-01
First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we...
Hextran-Smabre calculation of the VVER-1000 coolant transient benchmark
Energy Technology Data Exchange (ETDEWEB)
Elina Syrjaelahti; Anitta Haemaelaeinen [VTT Processes, P.O.Box 1604, FIN-02044 VTT (Finland)
2005-07-01
Full text of publication follows: The VVER-1000 Coolant Transient benchmark is intended for validation of couplings of thermal hydraulic codes and three-dimensional neutron kinetics core models. It concerns the switching on of a main coolant pump while the other three main coolant pumps are in operation. The problem is based on an experiment performed at the Kozloduy NPP in Bulgaria. In addition to the real plant transient, two extreme scenarios concerning control rod ejection after switching on a main coolant pump were calculated. At VTT the three-dimensional advanced nodal code HEXTRAN is used for the core kinetics and dynamics, and the thermohydraulic system code SMABRE as the thermal hydraulic model for the primary and secondary loops. The parallel-coupled HEXTRAN-SMABRE code has been in production use since the early 1990s, and it has been extensively used for analysis of VVER NPPs. The SMABRE input model is based on the standard VVER-1000 input used at VTT. The last plant-specific modifications to the input model were made in EU projects. The whole-core calculation is performed with HEXTRAN, and the core model is also based on earlier VVER-1000 models. Nuclear data for the calculation were specified in the benchmark. The paper outlines the input models used for both codes. Calculated results are presented both for the coupled core system with inlet and outlet boundary conditions and for the whole plant model. Sensitivity studies have been performed for selected parameters. (authors)
Hextran-Smabre calculation of the VVER-1000 coolant transient benchmark
International Nuclear Information System (INIS)
Elina Syrjaelahti; Anitta Haemaelaeinen
2005-01-01
Full text of publication follows: The VVER-1000 Coolant Transient benchmark is intended for validation of couplings of thermal hydraulic codes and three-dimensional neutron kinetics core models. It concerns the switching on of a main coolant pump while the other three main coolant pumps are in operation. The problem is based on an experiment performed at the Kozloduy NPP in Bulgaria. In addition to the real plant transient, two extreme scenarios concerning control rod ejection after switching on a main coolant pump were calculated. At VTT the three-dimensional advanced nodal code HEXTRAN is used for the core kinetics and dynamics, and the thermohydraulic system code SMABRE as the thermal hydraulic model for the primary and secondary loops. The parallel-coupled HEXTRAN-SMABRE code has been in production use since the early 1990s, and it has been extensively used for analysis of VVER NPPs. The SMABRE input model is based on the standard VVER-1000 input used at VTT. The last plant-specific modifications to the input model were made in EU projects. The whole-core calculation is performed with HEXTRAN, and the core model is also based on earlier VVER-1000 models. Nuclear data for the calculation were specified in the benchmark. The paper outlines the input models used for both codes. Calculated results are presented both for the coupled core system with inlet and outlet boundary conditions and for the whole plant model. Sensitivity studies have been performed for selected parameters. (authors)
Energy Technology Data Exchange (ETDEWEB)
Leal, L.C.; Wright, R.Q.
1996-10-01
In this report we investigate the adequacy of the available {sup 233}U cross-section data for calculation of experimental critical systems. The {sup 233}U evaluations provided in two evaluated nuclear data libraries, the U.S. Data Bank [ENDF/B (Evaluated Nuclear Data Files)] and the Japanese Data Bank [JENDL (Japanese Evaluated Nuclear Data Library)] are examined. Calculations were performed for six thermal and ten fast experimental critical systems using the S{sub n} transport XSDRNPM code. To verify the performance of the {sup 233}U cross-section data for nuclear criticality safety application in which the neutron energy spectrum is predominantly in the epithermal energy range, calculations of four numerical benchmark systems with energy spectra in the intermediate energy range were done. These calculations serve only as an indication of the difference in calculated results that may be expected when the two {sup 233}U cross-section evaluations are used for problems with neutron spectra in the intermediate energy range. Additionally, comparisons of experimental and calculated central fission rate ratios were also made. The study has suggested that an ad hoc {sup 233}U evaluation based on the JENDL library provides better overall results for both fast and thermal experimental critical systems.
Energy Technology Data Exchange (ETDEWEB)
Leal, L.C.
1993-01-01
In this report we investigate the adequacy of the available {sup 233}U cross-section data for calculation of experimental critical systems. The {sup 233}U evaluations provided in two evaluated nuclear data libraries, the U.S. Data Bank [ENDF/B (Evaluated Nuclear Data Files)] and the Japanese Data Bank [JENDL (Japanese Evaluated Nuclear Data Library)], are examined. Calculations were performed for six thermal and ten fast experimental critical systems using the S{sub n} transport XSDRNPM code. To verify the performance of the {sup 233}U cross-section data for nuclear criticality safety applications in which the neutron energy spectrum is predominantly in the epithermal energy range, calculations of four numerical benchmark systems with energy spectra in the intermediate energy range were done. These calculations serve only as an indication of the difference in calculated results that may be expected when the two {sup 233}U cross-section evaluations are used for problems with neutron spectra in the intermediate energy range. Additionally, comparisons of experimental and calculated central fission rate ratios were also made. The study has suggested that an ad hoc {sup 233}U evaluation based on the JENDL library provides better overall results for both fast and thermal experimental critical systems.
PMLB: a large benchmark suite for machine learning evaluation and comparison.
Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H
2017-01-01
The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
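The meta-features used to characterize benchmark datasets are typically simple summary statistics of each dataset; a minimal sketch (the feature set here is illustrative, not PMLB's actual meta-feature list):

```python
from collections import Counter

def meta_features(X, y):
    """A few simple meta-features of a classification dataset:
    sample count, feature count, class count, and class imbalance."""
    counts = Counter(y)
    return {
        "n_samples": len(X),
        "n_features": len(X[0]) if X else 0,
        "n_classes": len(counts),
        "majority_class_fraction": max(counts.values()) / len(X),
    }

# A toy 4-sample, 2-feature, 2-class dataset.
X = [[0.1, 1.0], [0.3, 0.5], [0.9, 0.2], [0.8, 0.1]]
y = [0, 0, 1, 0]
print(meta_features(X, y))
```

Comparing such vectors across a suite is what reveals the diversity gaps the study reports, e.g. many datasets clustering at similar sizes and imbalance levels.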
Calculations to an IAHR-benchmark test using the CFD-code CFX-4
Energy Technology Data Exchange (ETDEWEB)
Krepper, E.
1998-10-01
The calculation concerns a test that was defined as a benchmark for 3-D codes by the working group on advanced nuclear reactor types of the IAHR (International Association for Hydraulic Research). The test is well documented and detailed measurement results are available. The test aims at the investigation of phenomena that are important for heat removal under natural circulation conditions in a nuclear reactor. The task for the calculation was the modelling of the forced flow field of a single-phase incompressible fluid, with consideration of heat transfer and the influence of gravity. These phenomena are also typical of other industrial processes; the importance of correctly modelling them for other applications is a further motivation for performing these calculations. (orig.)
Wignall, Jessica A.; Shapiro, Andrew J.; Wright, Fred A.; Woodruff, Tracey J.; Chiu, Weihsueh A.; Guyton, Kathryn Z.
2014-01-01
Background: Benchmark dose (BMD) modeling computes the dose associated with a prespecified response level. While offering advantages over traditional points of departure (PODs), such as no-observed-adverse-effect-levels (NOAELs), BMD methods have lacked consistency and transparency in application, interpretation, and reporting in human health assessments of chemicals. Objectives: We aimed to apply a standardized process for conducting BMD modeling to reduce inconsistencies in model fitting and selection. Methods: We evaluated 880 dose–response data sets for 352 environmental chemicals with existing human health assessments. We calculated benchmark doses and their lower limits [10% extra risk, or change in the mean equal to 1 SD (BMD/L10/1SD)] for each chemical in a standardized way with prespecified criteria for model fit acceptance. We identified study design features associated with acceptable model fits. Results: We derived values for 255 (72%) of the chemicals. Batch-calculated BMD/L10/1SD values were significantly and highly correlated (R2 of 0.95 and 0.83, respectively, n = 42) with PODs previously used in human health assessments, with values similar to reported NOAELs. Specifically, the median ratio of BMDs10/1SD:NOAELs was 1.96, and the median ratio of BMDLs10/1SD:NOAELs was 0.89. We also observed a significant trend of increasing model viability with increasing number of dose groups. Conclusions: BMD/L10/1SD values can be calculated in a standardized way for use in health assessments on a large number of chemicals and critical effects. This facilitates the exploration of health effects across multiple studies of a given chemical, or across chemicals that need to be compared, providing greater transparency and efficiency than current approaches. Citation: Wignall JA, Shapiro AJ, Wright FA, Woodruff TJ, Chiu WA, Guyton KZ, Rusyn I. 2014. Standardizing benchmark dose calculations to improve science-based decisions in human health assessments. Environ Health
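For simple dose-response models, the BMD at a given benchmark response has a closed form. A hedged sketch for a one-hit model with a 10% extra-risk benchmark response (the slope value is hypothetical, and real assessments fit the model to data rather than assume a slope):

```python
import math

def bmd_extra_risk(slope, bmr=0.10):
    """Benchmark dose under a one-hit model
    P(d) = p0 + (1 - p0) * (1 - exp(-slope * d)),
    whose extra risk is ER(d) = (P(d) - P(0)) / (1 - P(0))
                              = 1 - exp(-slope * d).
    Solving ER(BMD) = BMR gives BMD = -ln(1 - BMR) / slope."""
    return -math.log(1.0 - bmr) / slope

# Hypothetical fitted slope (per mg/kg-day); BMD at 10% extra risk.
print(bmd_extra_risk(slope=0.05))
```

The BMDL reported alongside the BMD is the corresponding statistical lower confidence limit, which requires the fitted model's uncertainty and is not shown in this sketch.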
Benchmark calculation of APOLLO2 and SLAROM-UF in a fast reactor lattice
International Nuclear Information System (INIS)
Hazama, Taira
2009-10-01
A lattice cell benchmark calculation is carried out for APOLLO2 and SLAROM-UF on the infinite lattice of a simple pin cell featuring a fast reactor. The accuracy in k-infinity and reaction rates is investigated in their reference and standard level calculations. In the 1st reference level calculation, APOLLO2 and SLAROM-UF agree with the reference value of k-infinity obtained by a continuous energy Monte Carlo calculation within 50 pcm. However, larger errors are observed in a particular reaction rate and energy range. A major problem common to both codes is in the cross section library of 239 Pu in the unresolved energy range. In the 2nd reference level calculation, which is based on the ECCO 1968 group structure, both results of k-infinity agree with the reference value within 100 pcm. A resonance overlap effect of several percent is observed in the cross sections of heavy nuclides. In the standard level calculation based on the APOLLO2 library creation methodology, a discrepancy of more than 300 pcm appears. A restriction is revealed in APOLLO2. Its standard cross section library does not have a sufficiently small background cross section to evaluate the self-shielding effect of 56 Fe cross sections. The restriction can be removed by introducing the mixture self-shielding treatment recently introduced to APOLLO2. The SLAROM-UF original standard level calculation based on the JFS-3 library creation methodology is the best among the standard level calculations. Improvement over the SLAROM-UF standard level calculation is achieved mainly by use of a proper weight function for light or intermediate nuclides. (author)
Benchmark calculation of APOLLO-2 and SLAROM-UF in a fast reactor lattice
International Nuclear Information System (INIS)
Hazama, T.
2009-07-01
A lattice cell benchmark calculation is carried out for APOLLO2 and SLAROM-UF on the infinite lattice of a simple pin cell featuring a fast reactor. The accuracy in k-infinity and reaction rates is investigated in their reference and standard level calculations. In the 1st reference level calculation, APOLLO2 and SLAROM-UF agree with the reference value of k-infinity obtained by a continuous energy Monte Carlo calculation within 50 pcm. However, larger errors are observed in a particular reaction rate and energy range. The major problem common to both codes is in the cross section library of 239 Pu in the unresolved energy range. In the 2nd reference level calculation, which is based on the ECCO 1968 group structure, both results of k-infinity agree with the reference value within 100 pcm. A resonance overlap effect of several percent is observed in the cross sections of heavy nuclides. In the standard level calculation based on the APOLLO2 library creation methodology, a discrepancy of more than 300 pcm appears. A restriction is revealed in APOLLO2. Its standard cross section library does not have a sufficiently small background cross section to evaluate the self-shielding effect of 56 Fe cross sections. The restriction can be removed by introducing the mixture self-shielding treatment recently introduced to APOLLO2. The SLAROM-UF original standard level calculation based on the JFS-3 library creation methodology is the best among the standard level calculations. Improvement over the SLAROM-UF standard level calculation is achieved mainly by use of a proper weight function for light or intermediate nuclides. (author)
Start-up of a cold loop in a VVER-440, the 7th AER benchmark calculation with HEXTRAN-SMABRE-PORFLO
Energy Technology Data Exchange (ETDEWEB)
Hovi, Ville; Taivassalo, Veikko; Haemaelaeinen, Anitta; Raety, Hanna; Syrjaelahti, Elina [VTT Technical Research Centre of Finland Ltd, VTT (Finland)
2017-09-15
The 7{sup th} dynamic AER benchmark is the first in which three-dimensional thermal hydraulics codes are supposed to be applied. The aim is to get a more precise core inlet temperature profile than the sector temperatures typically available with system codes. The benchmark consists of the start-up of the sixth, isolated loop in a VVER-440 plant. The isolated loop initially contains cold water without boric acid, and the start-up leads to a somewhat asymmetrical core power increase due to feedbacks in the core. In this study, the 7{sup th} AER benchmark is calculated with the three-dimensional nodal reactor dynamics code HEXTRAN-SMABRE coupled with the porous computational fluid dynamics code PORFLO. These three codes are developed at VTT. A novel two-way coupled simulation of the 7{sup th} AER benchmark was performed successfully, demonstrating the feasibility and advantages of the new reactor analysis framework. The modelling issues for this benchmark are reported and some evaluation against the previously reported comparisons between the system codes is provided.
Start-up of a cold loop in a VVER-440, the 7th AER benchmark calculation with HEXTRAN-SMABRE-PORFLO
International Nuclear Information System (INIS)
Hovi, Ville; Taivassalo, Veikko; Haemaelaeinen, Anitta; Raety, Hanna; Syrjaelahti, Elina
2017-01-01
The 7th dynamic AER benchmark is the first in which three-dimensional thermal hydraulics codes are supposed to be applied. The aim is to get a more precise core inlet temperature profile than the sector temperatures typically available with system codes. The benchmark consists of the start-up of the sixth, isolated loop in a VVER-440 plant. The isolated loop initially contains cold water without boric acid, and the start-up leads to a somewhat asymmetrical core power increase due to feedbacks in the core. In this study, the 7th AER benchmark is calculated with the three-dimensional nodal reactor dynamics code HEXTRAN-SMABRE coupled with the porous computational fluid dynamics code PORFLO. These three codes are developed at VTT. A novel two-way coupled simulation of the 7th AER benchmark was performed successfully, demonstrating the feasibility and advantages of the new reactor analysis framework. The modelling issues for this benchmark are reported and some evaluation against the previously reported comparisons between the system codes is provided.
Comparison of TRAC calculations with experimental data
International Nuclear Information System (INIS)
Jackson, J.F.; Vigil, J.C.
1980-01-01
TRAC is an advanced best-estimate computer code for analyzing postulated accidents in light water reactors. This paper gives a brief description of the code followed by comparisons of TRAC calculations with data from a variety of separate-effects, system-effects, and integral experiments. Based on these comparisons, the capabilities and limitations of the early versions of TRAC are evaluated
Comparison of methods for calculating water erosion
SVOBODOVÁ, Pavlína
2011-01-01
This bachelor thesis presents a comparison of methods for calculating water erosion. The aim is to summarize the available evidence concerning water erosion problems. Methods for calculating the average annual soil erosion are presented, along with selected models for calculating erosion from individual events. Possible erosion control measures are also listed, through which the effects of erosion can at least be slowed, if not stopped completely.
Benchmark calculations of present-day instantaneous radiative forcing in clear, aerosol-free skies
Pincus, Robert; Evans, K. Franklin; Manners, James; Paynter, David; Mlawer, Eli
2017-04-01
At the root of the effective radiative forcing driving climate change is the change in radiative flux at the top of the atmosphere due to changes in atmospheric composition - the so-called "instantaneous radiative forcing" (IRF). Estimates of global mean present-day instantaneous radiative forcing under cloud- and aerosol-free conditions show surprising diversity given the level of understanding of spectroscopy and radiative transfer. Much of this diversity, especially in estimates from climate models, is artificial, reflecting only differing errors and approximations in radiative transfer parameterizations. Calculations with more accurate line-by-line models have been considered far too expensive to be practical on a global scale. We report here on benchmark calculations of present-day instantaneous radiative forcing by greenhouse gases in the absence of clouds and aerosols made with very high spectral-resolution models. The problem is made computationally practical by defining a set of roughly 100 atmospheric profiles and associated weights, selected by simulated annealing from present-day atmospheric conditions as represented by reanalysis, so as to reproduce global- and regional-mean fluxes with sampling errors of less than 1% (verified by cross-validation with independent radiative transfer models). Cloud- and aerosol-free IRF is then computed from these profiles using present-day and pre-industrial greenhouse gas concentrations. We report on results from two line-by-line and one high-resolution k-distribution model.
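The reduced-profile idea can be illustrated with a toy calculation: approximate the mean over a large set of column fluxes by a small weighted subset and check the relative sampling error. The flux values and the uniform-weight random subset below are placeholders; the paper selects its roughly 100 profiles and their weights by simulated annealing against reanalysis data.

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for a reanalysis: many columns, each reduced to a
# scalar top-of-atmosphere "flux" as a radiative-transfer model would give.
full_fluxes = [240.0 + 30.0 * math.sin(i * 0.37) for i in range(10_000)]
true_mean = sum(full_fluxes) / len(full_fluxes)

# A small representative subset with weights (here a uniform-weight random
# sample; the paper optimizes both the subset and the weights).
subset = random.sample(range(len(full_fluxes)), 100)
weights = [1.0 / len(subset)] * len(subset)

approx_mean = sum(w * full_fluxes[i] for w, i in zip(weights, subset))
sampling_error = abs(approx_mean - true_mean) / true_mean
print(f"relative sampling error: {sampling_error:.2%}")
```

With optimized (rather than random) profiles and weights, the sampling error can be driven well below the 1% target quoted in the abstract.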
Comparison of RESRAD with hand calculations
International Nuclear Information System (INIS)
Rittmann, P.D.
1995-09-01
This report is a continuation of an earlier comparison done with two other computer programs, GENII and PATHRAE. The dose calculations by the two programs were compared with each other and with hand calculations. These hand calculations have now been compared with RESRAD Version 5.41 to examine the use of standard models and parameters in this computer program. The hand calculations disclosed a significant computational error in RESRAD. The Pu-241 ingestion doses are five orders of magnitude too small. In addition, the external doses from some nuclides differ greatly from expected values. Both of these deficiencies have been corrected in later versions of RESRAD
Attila calculations for the 3-D C5G7 benchmark extension
International Nuclear Information System (INIS)
Wareing, T.A.; McGhee, J.M.; Barnett, D.A.; Failla, G.A.
2005-01-01
The performance of the Attila radiation transport software was evaluated for the 3-D C5G7 MOX benchmark extension, a follow-on study to the MOX benchmark developed by the 'OECD/NEA Expert Group on 3-D Radiation Transport Benchmarks'. These benchmarks were designed to test the ability of modern deterministic transport methods to model reactor problems without spatial homogenization. Attila is a general purpose radiation transport software package with an integrated graphical user interface (GUI) for analysis, set-up and postprocessing. Attila provides solutions to the discrete-ordinates form of the linear Boltzmann transport equation on a fully unstructured, tetrahedral mesh using linear discontinuous finite-element spatial differencing in conjunction with diffusion synthetic acceleration of inner iterations. The results obtained indicate that Attila can accurately solve the benchmark problem without spatial homogenization. (authors)
JNC results of BFS-62-3A benchmark calculation (CRP: Phase 5)
International Nuclear Information System (INIS)
Ishikawa, M.
2004-01-01
The present work reports the results of JNC, Japan, for Phase 5 of the IAEA CRP benchmark problem (BFS-62-3A critical experiment). The analytical method of JNC is based on the nuclear data library JENDL-3.2, with the group constant set JFS-3-J3.2R (70-group, ABBN-type self-shielding factor table based on JENDL-3.2) and current-weighted multigroup transport cross-sections as effective cross-sections. The cell model for the BFS as-built tube and pellets was: (Case 1) a homogeneous model based on the IPPE definition; (Case 2) homogeneous atomic densities equivalent to JNC's heterogeneous calculation, used only to cross-check the adjusted correction factors; (Case 3) a heterogeneous model based on JNC's evaluation, a one-dimensional plate-stretch model with Tone's background cross-section method (CASUP code). The basic diffusion calculation was done in 18 groups with a three-dimensional Hex-Z model (CITATION code), using isotropic diffusion coefficients (Cases 1 and 2) and Benoist's anisotropic diffusion coefficients (Case 3). For sodium void reactivity, exact perturbation theory was applied to both the basic and the correction calculations, with an ultra-fine energy group correction (approx. 100,000 group constants below 50 keV, and ABBN-type 175 group constants with shielding factors above 50 keV). A transport theory and mesh size correction (18-group) was applied to the three-dimensional Hex-Z model (the MINIHEX code based on the S4-P0 transport method, developed by JNC). The effective delayed neutron fraction in the reactivity scale was fixed at 0.00623 per the IPPE evaluation. Analytical results for the criticality values and the sodium void reactivity coefficient obtained by JNC are presented. JNC cross-checked the homogeneous model and the adjusted correction factors submitted by IPPE, and confirmed they are consistent. The JNC standard system showed quite satisfactory analytical results for the criticality and the sodium void reactivity of the BFS-62-3A experiment. JNC calculated the cross-section sensitivity coefficients of BFS
Quantum computing applied to calculations of molecular energies: CH2 benchmark.
Veis, Libor; Pittner, Jiří
2010-11-21
Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that they, if available, would be able to perform the full configuration interaction (FCI) energy calculations with a polynomial scaling. This is in contrast to conventional computers where FCI scales exponentially. We have developed a code for simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest lying electronic states of CH(2) molecule. This molecule was chosen as a benchmark, since its two lowest lying (1)A(1) states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.
International Nuclear Information System (INIS)
Washby, V.
1987-01-01
This report is the subject of a study contract sponsored by the containment loading and response group (CONT), a sub-group of the safety working group of the fast reactor co-ordinating committee - CEC. The analyses provided here will form part of a final report on containment codes, sensitivity analysis, and benchmark comparisons performed by the group in recent years. The contribution of this study contract is to assess the six different containment codes used in the benchmark comparison with regard to their procedures and methods, and also to provide an assessment of their benchmark calculation results, so that an overall assessment of their effectiveness for use in containment problems can be made. Each code description, which has been provided by the relevant user, contains a large amount of detailed information and a large number of equations, which would be unwieldy to reproduce and probably unnecessary. For this reason the report concentrates on a fuller description of the SEURBNUK code, this being the code most familiar to the author; the other code descriptions concentrate on noting variations and differences. Also, the code SEURBNUK/EURDYN has been used for the sensitivity analysis, this code being an extension of the original SEURBNUK with the addition of axi-symmetric finite element capabilities. The six containment codes described and assessed in this report are those which were being actively used within the European community at the time
A thermo mechanical benchmark calculation of a hexagonal can in the BTI accident with INCA code
International Nuclear Information System (INIS)
Zucchini, A.
1988-01-01
The thermomechanical behaviour of a hexagonal can in a benchmark problem (simulating the conditions of a BTI accident in a fuel assembly) is examined by means of the INCA code, and the results are systematically compared with those of ADINA.
BRASERO: A Resource for Benchmarking RNA Secondary Structure Comparison Algorithms
Directory of Open Access Journals (Sweden)
Julien Allali
2012-01-01
Full Text Available The pairwise comparison of RNA secondary structures is a fundamental problem, with direct application in mining databases for annotating putative noncoding RNA candidates in newly sequenced genomes. An increasing number of software tools are available for comparing RNA secondary structures, based on different models (such as ordered trees or forests, arc annotated sequences, and multilevel trees) and computational principles (edit distance, alignment). We describe here the website BRASERO that offers tools for evaluating such software tools on real and synthetic datasets.
Comparison of cardiovascular disease risk calculators.
Allan, G Michael; Garrison, Scott; McCormack, James
2014-08-01
The cardiovascular benefit of many preventive interventions (like statins) is strongly dependent on the baseline cardiovascular risk of the patient. Many lipid and vascular primary prevention guidelines advocate for the use of cardiovascular risk calculators. There are over 100 cardiovascular risk prediction models, and some of these models have spawned scores of calculators. Only about 25 of these models/calculators have been externally validated. The ability to identify who will have events frequently varies little between calculators. Disagreement between calculators is common, with one in three paired comparisons disagreeing on risk category. In part, this disagreement is because calculators vary according to the database they are derived from, the choice of clinical endpoints, and the risk interval duration upon which the estimate is based. Additional risk factors do little to improve the performance of the basic risk predictions, except perhaps coronary artery calcium, which still requires further study before regular use. The estimates provided by cardiovascular risk calculators are ballpark approximations and have a margin of error. Physicians should use models derived from, or calibrated for, populations similar to theirs and understand the endpoints, duration, and special features of their selected calculator.
Energy Technology Data Exchange (ETDEWEB)
Le, T.T.
1991-09-01
This report concerns the verification and validation of GILDA, a static two dimensional infinite lattice diffusion theory code. The verification was performed to determine whether GILDA applies the correct theory and that all the subroutines function as required. The validation was performed to determine the accuracy of the code by comparing its results with the integral transport solutions (GLASS) of benchmark problems. Since GLASS uses multigroup integral transport theory, a more accurate method than few-group diffusion theory, using solutions from GLASS as reference solutions to benchmark GILDA is acceptable. The eight benchmark problems used in this process are infinite mixed lattice problems. The lattice is constructed by repeating an infinite number of identical super-cells (zones). Two types of super-cell have been used for these benchmark problems: one consists of six Mark22 assemblies surrounding one control assembly, and the other consists of three Mark16 fuel assemblies and three Mark31 target assemblies surrounding a control assembly.
Rooftop Unit Comparison Calculator User Manual
Energy Technology Data Exchange (ETDEWEB)
Miller, James D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-04-30
This document serves as a user manual for the Packaged rooftop air conditioners and heat pump units comparison calculator (RTUCC) and is an aggregation of the calculator’s website documentation. Content ranges from new-user guide material like the “Quick Start” to the more technical/algorithmic descriptions of the “Methods Pages.” There is also a section listing all the context-help topics that support the features on the “Controls” page. The appendix has a discussion of the EnergyPlus runs that supported the development of the building-response models.
Directory of Open Access Journals (Sweden)
Kulesza Joel A.
2016-01-01
Full Text Available This paper describes a comparison of contemporary and historical light water reactor shielding and pressure vessel dosimetry cross-section libraries for a boiling water reactor calculational benchmark problem. The calculational benchmark problem was developed at Brookhaven National Laboratory at the request of the U. S. Nuclear Regulatory Commission. The benchmark problem was originally evaluated by Brookhaven National Laboratory using the Oak Ridge National Laboratory discrete ordinates code DORT and the BUGLE-93 cross-section library. In this paper, the Westinghouse RAPTOR-M3G three-dimensional discrete ordinates code was used. A variety of cross-section libraries were used with RAPTOR-M3G, including the BUGLE-93, BUGLE-96, and BUGLE-B7 cross-section libraries developed at Oak Ridge National Laboratory and ALPAN-VII.0 developed at Westinghouse. In comparing the calculated fast reaction rates in the pressure vessel capsule using the four aforementioned cross-section libraries, for six dosimetry reaction rates, a maximum relative difference of 8% was observed. As such, it is concluded that the results calculated by RAPTOR-M3G are consistent with the benchmark, and further that the different vintage BUGLE cross-section libraries investigated are largely self-consistent.
Baum, Heinz-Georg; Schuch, Dieter
2017-12-01
Benchmarking is a proven and widely used business tool for identifying best practice. To produce robust results, the objects of comparison used in benchmarking analysis need to be structurally comparable and distorting factors need to be eliminated. We focus on a specific example - a benchmark study commissioned by the European Commission's Directorate-General for Environment on the implementation of Extended Producer Responsibility (EPR) for packaging at the national level - to discuss potential distorting factors and take them into account in the calculation. The cost of compliance per inhabitant and year, which is used as the key cost efficiency indicator in the study, is adjusted to take account of seven factors. The results clearly show that differences in performance may play a role, but the (legal) implementation of EPR - which is highly heterogeneous across countries - is the single most important cost determinant and must be taken into account to avoid misinterpretation and false conclusions.
Energy Technology Data Exchange (ETDEWEB)
Ivanova, T.; Laville, C. [Institut de Radioprotection et de Surete Nucleaire IRSN, BP 17, 92262 Fontenay aux Roses (France); Dyrda, J. [Atomic Weapons Establishment AWE, Aldermaston, Reading, RG7 4PR (United Kingdom); Mennerdahl, D. [E Mennerdahl Systems EMS, Starvaegen 12, 18357 Taeby (Sweden); Golovko, Y.; Raskach, K.; Tsiboulia, A. [Inst. for Physics and Power Engineering IPPE, 1, Bondarenko sq., 249033 Obninsk (Russian Federation); Lee, G. S.; Woo, S. W. [Korea Inst. of Nuclear Safety KINS, 62 Gwahak-ro, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Bidaud, A.; Sabouri, P. [Laboratoire de Physique Subatomique et de Cosmologie LPSC, CNRS-IN2P3/UJF/INPG, Grenoble (France); Patel, A. [U.S. Nuclear Regulatory Commission (NRC), Washington, DC 20555-0001 (United States); Bledsoe, K.; Rearden, B. [Oak Ridge National Laboratory ORNL, M.S. 6170, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Gulliford, J.; Michel-Sendis, F. [OECD/NEA, 12, Bd des Iles, 92130 Issy-les-Moulineaux (France)
2012-07-01
The sensitivities of the k{sub eff} eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
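A k-eff sensitivity coefficient of the kind exchanged in this benchmark, S = (σ/k)(∂k/∂σ), can be estimated by central differences between perturbed calculations. The sketch below uses a toy one-group k-infinity model with invented numbers, not a benchmark case; since k is directly proportional to the production cross section in this model, the sensitivity must come out as exactly 1.

```python
def k_inf(nu_sigma_f, sigma_a):
    """Toy one-group infinite-medium multiplication factor (illustrative)."""
    return nu_sigma_f / sigma_a

def sensitivity(f, x0, rel=1e-3):
    """Central-difference estimate of S = (x/k) dk/dx at x = x0."""
    h = x0 * rel
    dk = (f(x0 + h) - f(x0 - h)) / (2 * h)
    return x0 * dk / f(x0)

# Sensitivity of k-inf to the production cross section nu*sigma_f:
s = sensitivity(lambda x: k_inf(x, sigma_a=0.12), x0=0.05)
print(round(s, 6))  # → 1.0 (k is directly proportional to nu*sigma_f)
```

Production sensitivity tools such as TSUNAMI use adjoint-based perturbation theory rather than brute-force differences, and additionally account for the "implicit" self-shielding contributions mentioned in the abstract.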
DEFF Research Database (Denmark)
Murata, Katsuyuki; Budtz-Jørgensen, Esben; Grandjean, Philippe
2002-01-01
Methylmercury; benchmark dose; brainstem auditory evoked potentials; neurotoxicity; human health risk assessment.
International Nuclear Information System (INIS)
Preumont, A.; Shilab, S.; Cornaggia, L.; Reale, M.; Labbe, P.; Noe, H.
1992-01-01
This benchmark exercise is the continuation of the state-of-the-art review (EUR 11369 EN), which concluded that the random vibration approach could be an effective tool in seismic analysis of nuclear power plants, with potential advantages over time history and response spectrum techniques. As compared to the latter, the random vibration method provides an accurate treatment of multisupport excitations, non-classical damping, as well as the combination of high-frequency modal components. With respect to the former, the random vibration method offers direct information on statistical variability (probability distribution) and cheaper computations. The disadvantages of the random vibration method are that it is based on stationary results, and requires a power spectral density input instead of a response spectrum. A benchmark exercise to compare the three methods on the various aspects mentioned above, on one or several simple structures, has been made. The following aspects have been covered with the simplest possible models: (i) statistical variability, (ii) multisupport excitation, (iii) non-classical damping. The random vibration method is therefore concluded to be a reliable method of analysis. Its use is recommended, particularly for preliminary design, owing to its computational advantage over multiple time history analyses
Energy Technology Data Exchange (ETDEWEB)
Kaneko, Masashi [Japan Atomic Energy Agency, Nuclear Science and Engineering Center (Japan); Yasuhara, Hiroki; Miyashita, Sunao; Nakashima, Satoru, E-mail: snaka@hiroshima-u.ac.jp [Hiroshima University, Graduate School of Science (Japan)
2017-11-15
The present study applies all-electron relativistic DFT calculation with the Douglas-Kroll-Hess (DKH) Hamiltonian to ten sets each of Ru and Os compounds. We perform a benchmark investigation of three density functionals (BP86, B3LYP and B2PLYP) using the segmented all-electron relativistically contracted (SARC) basis set against the experimental Mössbauer isomer shifts for the {sup 99}Ru and {sup 189}Os nuclides. Geometry optimizations at the BP86 level of theory locate each structure in a local minimum. We calculate the contact density from the wavefunction obtained by a single point calculation. All functionals show a good linear correlation with the experimental isomer shifts for both {sup 99}Ru and {sup 189}Os. In particular, the B3LYP functional gives a stronger correlation than the BP86 and B2PLYP functionals. The comparison of contact density between SARC and the well-tempered basis set (WTBS) indicates that numerical convergence of the contact density cannot be obtained, but that the reproducibility is not very sensitive to the choice of basis set. We also estimate the values of ΔR/R, an important nuclear constant, for the {sup 99}Ru and {sup 189}Os nuclides by using the benchmark results. The sign of the calculated ΔR/R values is consistent with the predicted data for {sup 99}Ru and {sup 189}Os. We computationally obtain ΔR/R values of 2.35×10{sup −4} for {sup 99}Ru and −0.20×10{sup −4} for {sup 189}Os (36.2 keV), at the B3LYP level with the SARC basis set.
Results of a 3D-EM-Code Comparison on the TRISPAL Cavity Benchmark
Balleyguier, P
2004-01-01
Several 3D electromagnetic codes (MAFIA, CST MicroWave-Studio, Vector-Fields Soprano, Ansoft HFSS, SLAC Omega3P) have been tested on a 2-cell cavity benchmark. Computed frequencies and Q-factors were compared to experimental values measured on a mock-up, putting the emphasis on the effect of coupling slots. It turns out that the MAFIA limitations due to the staircase approximation are overcome by all the other codes, but some differences still remain for loss calculations in re-entrant corners
Jansky, Bohumil; Rejchrt, Jiri; Novak, Evzen; Losa, Evzen; Blokhin, Anatoly I.; Mitenkova, Elena
2017-09-01
The leakage neutron spectra measurements were done on benchmark spherical assemblies: iron spheres with diameters of 20, 30, 50 and 100 cm. The Cf-252 neutron source was placed at the centre of each iron sphere. The proton recoil method was used for the neutron spectra measurement, using spherical hydrogen proportional counters with a diameter of 4 cm and pressures of 400 and 1000 kPa. The neutron energy range of the spectrometer is from 0.1 to 1.3 MeV. This energy interval represents about 85% of all leakage neutrons from the Fe sphere of diameter 50 cm and about 74% for the Fe sphere of diameter 100 cm. The corresponding MCNP neutron spectra calculations, based on the data libraries CIELO, JEFF-3.2 and ENDF/B-VII.1, were done. Two calculations were done with the CIELO library: the first used data for all Fe isotopes from CIELO, and the second (CIELO-56) used only Fe-56 data from CIELO, with data for the other Fe isotopes from ENDF/B-VII.1. The energy structure used for calculations and measurements was 40 gpd (groups per decade) and 200 gpd. The 200 gpd structure corresponds to a lethargy step of about 1%. This relatively fine energy structure enables analysis of the Fe resonance neutron energy structure. The evaluated cross section data of Fe were validated by comparisons between the calculated and experimental spectra.
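The roughly 1% lethargy step quoted for the 200 groups-per-decade structure follows directly from the definition of lethargy, u = ln(E_ref/E): an equal-lethargy grid with n groups per decade has a constant step of ln(10)/n. A quick check:

```python
import math

# Equal-lethargy energy grid: 200 groups per decade.
groups_per_decade = 200
du = math.log(10.0) / groups_per_decade   # lethargy step per group
width = math.exp(du) - 1.0                # fractional energy width of one group

print(f"lethargy step: {du:.5f}, group width: {width:.2%}")
```

This gives du ≈ 0.0115 and a group width of about 1.2%, consistent with the "about 1%" statement in the abstract.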
Development of the Croatian HR Benchmarks List and its Comparison with the World-Approved Ones
Pološki Vokić, Nina; Vidović, Maja
2004-01-01
Human resource benchmarking has become increasingly important as organizations strive for better performance. Observing, adapting and reapplying best HR practices from others became the essential management tool. The article defines HR benchmarks appropriate and significant for the Croatian business environment, which were predominantly compensation indicators. In particular, the research revealed that Croatian HR benchmarks are different from HR benchmarks used in developed countries. Namely...
Directory of Open Access Journals (Sweden)
Tanaka Ken-ichi
2016-01-01
Full Text Available We performed a benchmark calculation for radioactivity activated in the Primary Containment Vessel (PCV) of a Boiling Water Reactor (BWR) by using the MAXS library, which was developed by collapsing with neutron energy spectra in the PCV of the BWR. Radioactivities due to neutron irradiation were measured by using activation foil detectors of gold (Au) and nickel (Ni) at thirty locations in the PCV. As the benchmark calculation, we performed activation calculations of the foils with the SCALE5.1/ORIGEN-S code under the irradiation conditions of each foil location. We compared calculations and measurements to estimate the effectiveness of the MAXS library.
Optical rotation calculated with time-dependent density functional theory: the OR45 benchmark.
Srebro, Monika; Govind, Niranjan; de Jong, Wibe A; Autschbach, Jochen
2011-10-13
Time-dependent density functional theory (TDDFT) computations are performed for 42 organic molecules and three transition metal complexes, with experimental molar optical rotations ranging from 2 to 2 × 10(4) deg cm(2) dmol(-1). The performances of the global hybrid functionals B3LYP, PBE0, and BHLYP, and of the range-separated functionals CAM-B3LYP and LC-PBE0 (the latter being fully long-range corrected), are investigated. The performance of different basis sets is studied. When compared to liquid-phase experimental data, the range-separated functionals do not, on average, perform better than B3LYP and PBE0. Median relative deviations between calculations and experiment range from 25 to 29%. A basis set recently proposed for optical rotation calculations (LPol-ds) on average does not give improved results compared to aug-cc-pVDZ in TDDFT calculations with B3LYP. Individual cases are discussed in some detail, among them norbornenone, for which the LC-PBE0 functional produced an optical rotation that is close to available data from coupled-cluster calculations, but significantly smaller in magnitude than the liquid-phase experimental value. Range-separated functionals and BHLYP perform well for helicenes and helicene derivatives. Metal complexes pose a challenge to first-principles calculations of optical rotation.
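The median-relative-deviation statistic used in benchmarks of this kind is straightforward to reproduce. The sketch below uses made-up optical rotation values, not the OR45 data:

```python
from statistics import median

def median_relative_deviation(calc, expt):
    """Median of |calc - expt| / |expt| over a set of molecules, in percent."""
    return 100.0 * median(abs(c - e) / abs(e) for c, e in zip(calc, expt))

# Hypothetical molar optical rotations in deg cm^2 dmol^-1 (placeholders):
expt = [100.0, -250.0, 40.0, 1200.0]
calc = [120.0, -200.0, 50.0, 1100.0]
print(round(median_relative_deviation(calc, expt), 1))  # → 20.0
```

The median is preferred over the mean here because optical rotations span orders of magnitude, so a single near-zero experimental value can make a relative mean deviation explode.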
Han, Jeong-Hwan; Oda, Takuji
2018-04-01
The performance of exchange-correlation functionals in density-functional theory (DFT) calculations for liquid metals has not been sufficiently examined. In the present study, benchmark tests of the Perdew-Burke-Ernzerhof (PBE), Armiento-Mattsson 2005 (AM05), PBE re-parameterized for solids, and local density approximation (LDA) functionals are conducted for liquid sodium. The pair correlation function, equilibrium atomic volume, bulk modulus, and relative enthalpy are evaluated at 600 K and 1000 K. Compared with the available experimental data, the errors range from -11.2% to 0.0% for the atomic volume, from -5.2% to 22.0% for the bulk modulus, and from -3.5% to 2.5% for the relative enthalpy, depending on the DFT functional. The generalized gradient approximation functionals are superior to the LDA functional, and the PBE and AM05 functionals exhibit the best performance. In addition, we assess whether the error tendency in liquid simulations is comparable to that in solid simulations; the results suggest that the atomic volume and relative enthalpy performances are comparable between solid and liquid states but that the bulk modulus performance is not. These benchmark test results indicate that the results of liquid simulations depend significantly on the exchange-correlation functional and that the DFT functional performance in solid simulations can be used to roughly estimate the performance in liquid simulations.
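The percentage errors quoted above are signed relative deviations of a calculated property from its experimental value. A minimal sketch of that metric (the numeric values in the example are invented, not data from the study):

```python
def relative_error_pct(calculated: float, experimental: float) -> float:
    """Signed relative deviation, in percent, of a calculation from experiment."""
    return (calculated - experimental) / experimental * 100.0

# Example with invented numbers: a calculated atomic volume 25% below
# the measured one gives a signed error of -25%.
assert relative_error_pct(30.0, 40.0) == -25.0
```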
Energy Technology Data Exchange (ETDEWEB)
Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2002-02-01
The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated BWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of ±10% relative to the average, although some results, esp. 155Eu and gadolinium isotopes, exceeded the band, which will require further investigation. Pin-wise burnup results agreed well among the participants. The results in the infinite neutron multiplication factor k∞ also accorded well with each other for void fractions of 0 and 40%; however, some results deviated noticeably from the averaged value for the void fraction of 70%. (author)
International Nuclear Information System (INIS)
Okuno, Hiroshi; Naito, Yoshitaka; Suyama, Kenya
2002-02-01
The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD). The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated BWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for five years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0, 40 and 70%, during burnup. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 fission product nuclides were found to be for the most part within a range of ±10% relative to the average, although some results, esp. 155Eu and gadolinium isotopes, exceeded the band, which will require further investigation. Pin-wise burnup results agreed well among the participants. The results in the infinite neutron multiplication factor k∞ also accorded well with each other for void fractions of 0 and 40%; however, some results deviated noticeably from the averaged value for the void fraction of 70%. (author)
International Nuclear Information System (INIS)
Li, M; Chetty, I; Zhong, H
2014-01-01
Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a “demons” algorithm. The demons DVFs were corrected by an FEM model to obtain realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVF formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans, with 3 and 5 mm margins, were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to obtain the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to obtain the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. The relative dose errors in the PTV were between 0.28% and 6.8% for the 3 mm margin plans, and between 0.29% and 6.3% for the 5 mm margin plans. As the PTV margin was reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean NTCP error decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients.
2011-01-01
The adsorption of Ag, Au, and Pd atoms on benzene, coronene, and graphene has been studied using post-Hartree–Fock wave function theory (CCSD(T), MP2) and density functional theory (M06-2X, DFT-D3, PBE, vdW-DF) methods. The CCSD(T) benchmark binding energies for benzene–M (M = Pd, Au, Ag) complexes are 19.7, 4.2, and 2.3 kcal/mol, respectively. We found that the nature of binding of the three metals is different: While silver binds predominantly through dispersion interactions, the binding of palladium has a covalent character, and the binding of gold involves a subtle combination of charge transfer and dispersion interactions as well as relativistic effects. We demonstrate that the CCSD(T) benchmark binding energies for benzene–M complexes can be reproduced in plane-wave density functional theory calculations by including a fraction of the exact exchange and a nonempirical van der Waals correction (EE+vdW). Applying the EE+vdW method, we obtained binding energies for the graphene–M (M = Pd, Au, Ag) complexes of 17.4, 5.6, and 4.3 kcal/mol, respectively. The trends in binding energies found for the benzene–M complexes correspond to those in coronene and graphene complexes. DFT methods that use empirical corrections to account for the effects of vdW interactions significantly overestimate binding energies in some of the studied systems. PMID:22076121
Comparison of Two Toric IOL Calculation Methods
Directory of Open Access Journals (Sweden)
C. Kern
2018-01-01
Purpose. To compare two calculators for toric intraocular lens (IOL) calculation and to evaluate the prediction of refractive outcome. Methods. Sixty-four eyes of forty-five patients underwent cataract surgery followed by implantation of a toric intraocular lens (Zeiss Torbi 709 M) calculated by a standard industry calculator using front keratometry values. Prediction error, median absolute error, and refractive astigmatism error were evaluated for the standard calculator. The predicted postoperative refraction and toric lens power values were evaluated and compared after postoperative recalculation using the Barrett calculator. Results. We observed a significant undercorrection of the spherical equivalent (0.19 D) when using the standard calculator (p≤0.05). Consistent with the Baylor nomogram and the refractive influence of posterior corneal astigmatism (PCA), undercorrection of the cylinder was lower for patients with with-the-rule (WTR) astigmatism, because of the tendency toward overcorrection in this group. When the calculated predicted postoperative refraction was compared retrospectively between calculators, the Barrett calculator showed the advantage of less residual postoperative spherical equivalent, sphere, and cylinder (p≤0.01). Conclusion. Considering only corneal front keratometric values for toric lens calculation may lead to postoperative undercorrection of astigmatism. The prediction of the postoperative refractive outcome can be improved by using appropriate methods of adjustment that take PCA into account.
HEXTRAN-SMABRE calculation of the 6th AER Benchmark, main steam line break in a WWER-440 NPP
International Nuclear Information System (INIS)
Haemaelaeinen, A.; Kyrki-Rajamaeki, R.
2003-01-01
The sixth AER benchmark is the second AER benchmark for couplings of thermal hydraulic codes and three-dimensional neutron kinetic core models. It concerns a double-ended break of one main steam line in a WWER-440 plant. The core is at the end of its first cycle in full power conditions. At VTT, HEXTRAN 2.9 is used for the core kinetics and dynamics, and SMABRE 4.8 as the thermal hydraulic model for the primary and secondary loops. The plant model for SMABRE consists mainly of two input models, the Loviisa model and a standard WWER-440/213 plant model. The primary loop includes six separate loops, the pressure vessel is divided into six parallel channels in SMABRE, and the whole-core calculation is performed with HEXTRAN. The horizontal steam generators are modelled with heat transfer tubes on five levels and vertically in two parts, riser and downcomer. With this kind of detailed modelling of the steam generators, strong flashing occurs after the break opens. As a consequence of the main steam line break at nominal power level, the reactor trip follows quite soon. The liquid temperature continues to decrease in one core inlet sector, which may lead to recriticality and a neutron power increase. The situation is very sensitive to small changes in the steam generator and break flow modelling, and therefore several sensitivity calculations have been done. Two stuck control rods have also been assumed. Due to the boric acid concentration in the high pressure safety injection, subcriticality is finally guaranteed in the transient. (Authors)
Benchmark calculation of SCALE-PC 4.3 CSAS6 module and burnup credit criticality analysis
Energy Technology Data Exchange (ETDEWEB)
Shin, Hee Sung; Ro, Seong Gy; Shin, Young Joon; Kim, Ik Soo [Korea Atomic Energy Research Institute, Taejon (Korea)
1998-12-01
Calculation biases of the SCALE-PC CSAS6 module for PWR spent fuel, metallized spent fuel and solutions of nuclear materials have been determined on the basis of the benchmark to be 0.01100, 0.02650 and 0.00997, respectively. With the aid of the code system, a nuclear criticality safety analysis of the spent fuel storage pool has been carried out to determine the minimum burnup of spent fuel required for safe storage. The criticality safety analysis is performed using three types of isotopic composition of the spent fuel: ORIGEN2-calculated isotopic compositions; the conservative inventory obtained by multiplying the ORIGEN2-calculated isotopic compositions by isotopic correction factors; and the conservative inventory of only U, Pu and {sup 241}Am. The results show that the minimum burnups for the three cases are 990, 6190 and 7270 MWd/tU, respectively, for spent fuel with an initial enrichment of 5.0 wt%. (author). 74 refs., 68 figs., 35 tabs.
Beridze, George; Kowalski, Piotr M
2014-12-18
The ability to perform feasible and reliable computations of the thermochemical properties of chemically complex actinide-bearing materials would be of great importance for nuclear engineering. Unfortunately, density functional theory (DFT), which in many instances is the only affordable ab initio method, often fails for actinides. Among various shortcomings, it leads to wrong estimates of the enthalpies of reactions between actinide-bearing compounds, putting the applicability of the DFT approach to the modeling of thermochemical properties of actinide-bearing materials into question. Here we test the performance of the DFT+U method, a computationally affordable extension of DFT that explicitly accounts for the correlations between f-electrons, for the prediction of the thermochemical properties of simple uranium-bearing molecular compounds and solids. We demonstrate that the DFT+U approach significantly improves the description of reaction enthalpies for the uranium-bearing gas-phase molecular compounds and solids, and that the deviations from the experimental values are comparable to those obtained with much more computationally demanding methods. Good results are obtained with Hubbard U parameter values derived using the linear response method of Cococcioni and de Gironcoli. We found that the value of the Coulomb on-site repulsion, represented by the Hubbard U parameter, strongly depends on the oxidation state of the uranium atom. Last, but not least, we demonstrate that thermochemistry data can be successfully used to estimate the value of the Hubbard U parameter needed for DFT+U calculations.
Kosar, Naveen; Mahmood, Tariq; Ayub, Khurshid
2017-12-01
A benchmark study has been carried out to find a cost-effective and accurate method for the bond dissociation energy (BDE) of the carbon–halogen (C–X) bond. The BDE of the C–X bond plays a vital role in chemical reactions, particularly for kinetic barriers and thermochemistry. The compounds (1-16, Fig. 1) with C–X bonds used for the current benchmark study are important reactants in organic, inorganic and bioorganic chemistry. Experimental C–X bond dissociation energies are compared with theoretical results. Statistical analysis tools such as root mean square deviation (RMSD), standard deviation (SD), Pearson's correlation (R) and mean absolute error (MAE) are used for the comparison. Overall, thirty-one density functionals from eight different classes of density functional theory (DFT), along with Pople and Dunning basis sets, are evaluated. Among the different classes of DFT, the dispersion-corrected range-separated hybrid GGA class, along with the 6-31G(d), 6-311G(d), aug-cc-pVDZ and aug-cc-pVTZ basis sets, performed best for bond dissociation energy calculation of the C–X bond. ωB97XD shows the best performance, with the smallest deviations (RMSD, SD), the smallest mean absolute error (MAE) and a significant Pearson's correlation (R) when compared to experimental data. ωB97XD along with the Pople basis set 6-311G(d) has RMSD, SD, R and MAE of 3.14 kcal mol-1, 3.05 kcal mol-1, 0.97 and -1.07 kcal mol-1, respectively.
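The statistical metrics named above are standard and can be computed directly from paired calculated/experimental BDE lists. A minimal sketch (not the authors' code; note the abstract's MAE of -1.07 kcal/mol is negative, suggesting a signed mean error):

```python
import math

def rmsd(calc, expt):
    """Root-mean-square deviation between calculated and experimental values."""
    return math.sqrt(sum((c - e) ** 2 for c, e in zip(calc, expt)) / len(calc))

def mean_error(calc, expt):
    """Mean signed error (the abstract's 'MAE' appears to be signed)."""
    return sum(c - e for c, e in zip(calc, expt)) / len(calc)

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (math.sqrt(sum((a - mx) ** 2 for a in x))
           * math.sqrt(sum((b - my) ** 2 for b in y)))
    return num / den
```

A perfectly correlated method would give R = 1 and RMSD = 0 against experiment.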
Comparison of accelerometry stride time calculation methods.
Norris, Michelle; Kenny, Ian C; Anderson, Ross
2016-09-06
Inertial sensors such as accelerometers and gyroscopes can provide a multitude of information on running gait. Running parameters such as stride time and ground contact time can be identified within tibial accelerometry data. Stride time in particular is a popular parameter of interest, possibly due to its role in running economy. However, multiple methods are utilised to derive stride time from tibial accelerometry data, some of which may present complications when implemented on larger data files. Therefore, the purpose of this study was to compare previously utilised methods of stride time derivation with an original proposed method, utilising medio-lateral tibial acceleration data filtered at 2 Hz, allowing for greater efficiency in stride time output. Tibial accelerometry data from six participants training for a half marathon were utilised. One right-leg run was randomly selected for each participant, from which five consecutive running stride times were calculated. Four calculation methods were employed to derive stride time. A repeated measures analysis of variance (ANOVA) identified no significant difference in stride time between the calculation methods (p=1.00), whilst intra-class coefficient values (all >0.95) and coefficient of variance values indicated strong agreement between methods. The proposed method therefore possibly offers a simplified technique for stride time output during running gait analysis. This method may be less influenced by "double peak" error and minor fluctuations within the data, allowing for accurate and efficient automated data output in both real time and post processing.
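One common way to derive stride time, consistent with the approach described above but not taken from the paper, is to detect successive peaks in the low-pass-filtered medio-lateral signal and take the interval between them as one stride. A sketch with a synthetic 1 Hz trace standing in for real filtered data (sampling rate and signal are invented):

```python
import math

FS = 100  # assumed sampling rate in Hz (hypothetical)

# Synthetic "filtered" signal: one dominant oscillation per ~1 s stride.
signal = [math.sin(2 * math.pi * 1.0 * i / FS) for i in range(5 * FS)]

def peak_indices(x):
    """Indices of simple local maxima (greater than both neighbours)."""
    return [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]

peaks = peak_indices(signal)
# Stride time = time between successive peaks, in seconds.
stride_times = [(b - a) / FS for a, b in zip(peaks, peaks[1:])]
```

For this synthetic 1 Hz signal every derived stride time is 1.0 s; real data would additionally need the 2 Hz low-pass filtering step described in the abstract.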
Monte Carlo benchmark calculations of energy deposition by electron/photon showers up to 1 GeV
International Nuclear Information System (INIS)
Mehlhorn, T.A.; Halbleib, J.A.
1983-01-01
Over the past several years the TIGER series of coupled electron/photon Monte Carlo transport codes has been applied to a variety of problems involving nuclear and space radiations, electron accelerators, and radioactive sources. In particular, they have been used at Sandia to simulate the interaction of electron beams, generated by pulsed-power accelerators, with various target materials for weapons effects simulation and electron beam fusion. These codes are based on the ETRAN system, which was developed for an energy range from about 10 keV up to a few tens of MeV. In this paper we discuss the modifications that were made to the TIGER series of codes in order to extend their applicability to energies of interest to the high energy physics community (up to 1 GeV). We report the results of a series of benchmark calculations of the energy deposition by high energy electron beams in various materials using the modified codes. These results are then compared with the published results of various experimental measurements and other computational models.
Reactor fuel depletion benchmark of TINDER
International Nuclear Information System (INIS)
Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.
2014-01-01
Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work.
Comparison of Processor Performance of SPECint2006 Benchmarks of some Intel Xeon Processors
Directory of Open Access Journals (Sweden)
Abdul Kareem PARCHUR
2012-08-01
High performance is a critical requirement for all microprocessor manufacturers. The present paper compares the performance of two main Intel Xeon processor series (Type A: Intel Xeon X5260, X5460, E5450 and L5320; Type B: Intel Xeon X5140, 5130, 5120 and E5310). The microarchitecture of these processors is implemented on the basis of a new family of processors from Intel starting with the Pentium 4 processor. These processors can provide a performance boost for many key application areas in the modern generation. The scaling of performance across the two series has been analyzed using the performance numbers of 12 CPU2006 integer benchmarks, which exhibit significant differences in performance. The results and analysis can be used by performance engineers, scientists and developers to better understand performance scaling in modern-generation processors.
Energy Technology Data Exchange (ETDEWEB)
Kotsarev, Alexander; Lizorkin, Mikhail [National Research Centre ' Kurchatov Institute' , Moscow (Russian Federation); Bencik, Marek; Hadek, Jan [UJV Rez, a.s., Rez (Czech Republic); Kozmenkov, Yaroslav; Kliem, Soeren [Helmholtz-Zentrum Dresden-Rossendorf (HZDR) e.V., Dresden (Germany)
2016-09-15
The 7th AER dynamic benchmark is a continuation of the efforts to validate the codes systematically for the estimation of the transient behavior of VVER-type nuclear power plants. The main part of the benchmark is the simulation of the re-connection of an isolated circulation loop with low temperature in a VVER-440 plant. This benchmark was calculated by the National Research Centre "Kurchatov Institute" (with the code ATHLET/BIPR-VVER), UJV Rez (with the code RELAP5-3D©) and HZDR (with the code DYN3D/ATHLET). The paper gives an overview of the behavior of the main thermal hydraulic and neutron kinetic parameters in the provided solutions.
Screening Breast MRI Outcomes in Routine Clinical Practice: Comparison to BI-RADS Benchmarks
Strigel, Roberta M.; Rollenhagen, Jennifer; Burnside, Elizabeth S.; Elezaby, Mai; Fowler, Amy M.; Kelcz, Frederick; Salkowski, Lonie; DeMartini, Wendy B.
2017-01-01
Rationale and Objectives The BI-RADS Atlas 5th Edition includes screening breast magnetic resonance imaging (MRI) outcome benchmarks. However, the metrics are from expert practices and clinical trials of women with hereditary breast cancer predispositions, and it is unknown if they are appropriate for routine practice. We evaluated screening breast MRI audit outcomes in routine practice across a spectrum of elevated risk patients. Materials and Methods This Institutional Review Board-approved, Health Insurance Portability and Accountability Act-compliant retrospective study included all consecutive screening breast MRI examinations from July 1, 2010 to June 30, 2013. Examination indications were categorized as gene mutation carrier (GMC), personal history (PH) breast cancer, family history (FH) breast cancer, chest radiation, and atypia/lobular carcinoma in situ (LCIS). Outcomes were determined by pathology and/or ≥12 months clinical and/or imaging follow-up. We calculated abnormal interpretation rate (AIR), cancer detection rate (CDR), positive predictive value of recommendation for tissue diagnosis (PPV2) and biopsy performed (PPV3), and median size and percentage of node-negative invasive cancers. Results Eight hundred and sixty examinations were performed in 566 patients with a mean age of 47 years. Indications were 367 of 860 (42.7%) FH, 365 of 860 (42.4%) PH, 106 of 860 (12.3%) GMC, 14 of 860 (1.6%) chest radiation, and 8 of 860 (0.9%) atypia/LCIS. The AIR was 134 of 860 (15.6%). Nineteen cancers were identified (13 invasive, 4 DCIS, 2 in lymph nodes), resulting in a CDR of 19 of 860 (22.1 per 1000), PPV2 of 19 of 88 (21.6%), and PPV3 of 19 of 80 (23.8%). Of the 13 invasive breast cancers, the median size was 10 mm, and 8 of 13 (61.5%) were node negative. Conclusions Performance outcomes of screening breast MRI in routine clinical practice across a spectrum of elevated risk patients met the American College of Radiology Breast Imaging Reporting and Data System benchmarks.
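The audit metrics above follow directly from the reported counts. A small sketch (hypothetical code, using only numbers stated in the abstract) reproduces the arithmetic:

```python
# Counts reported in the abstract.
exams, abnormal = 860, 134
cancers, rec_biopsy, biopsied = 19, 88, 80

air = abnormal / exams        # abnormal interpretation rate
cdr = cancers / exams * 1000  # cancer detection rate per 1000 examinations
ppv2 = cancers / rec_biopsy   # PPV of recommendation for tissue diagnosis
ppv3 = cancers / biopsied     # PPV of biopsies actually performed

# Compare with the reported 15.6%, 22.1 per 1000, 21.6% and 23.8%.
```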
International Nuclear Information System (INIS)
Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.
1991-01-01
Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: these calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems.
Singlet extensions of the standard model at LHC Run 2: benchmarks and comparison with the NMSSM
Energy Technology Data Exchange (ETDEWEB)
Costa, Raul [Centro de Física Teórica e Computacional, Faculdade de Ciências,Universidade de Lisboa, Campo Grande, Edifício C8 1749-016 Lisboa (Portugal); Departamento de Física da Universidade de Aveiro,Campus de Santiago, 3810-183 Aveiro (Portugal); Mühlleitner, Margarete [Institute for Theoretical Physics, Karlsruhe Institute of Technology,76128 Karlsruhe (Germany); Sampaio, Marco O.P. [Departamento de Física da Universidade de Aveiro,Campus de Santiago, 3810-183 Aveiro (Portugal); CIDMA - Center for Research Development in Mathematics and Applications,Campus de Santiago, 3810-183 Aveiro (Portugal); Santos, Rui [Centro de Física Teórica e Computacional, Faculdade de Ciências,Universidade de Lisboa, Campo Grande, Edifício C8 1749-016 Lisboa (Portugal); ISEL - Instituto Superior de Engenharia de Lisboa,Instituto Politécnico de Lisboa, 1959-007 Lisboa (Portugal)
2016-06-07
The Complex singlet extension of the Standard Model (CxSM) is the simplest extension that provides scenarios for Higgs pair production with different masses. The model has two interesting phases: the dark matter phase, with a Standard Model-like Higgs boson, a new scalar and a dark matter candidate; and the broken phase, with all three neutral scalars mixing. In the latter phase Higgs decays into a pair of two different Higgs bosons are possible. In this study we analyse Higgs-to-Higgs decays in the framework of singlet extensions of the Standard Model (SM), with focus on the CxSM. After demonstrating that scenarios with large rates for such chain decays are possible we perform a comparison between the NMSSM and the CxSM. We find that, based on Higgs-to-Higgs decays, the only possibility to distinguish the two models at the LHC run 2 is through final states with two different scalars. This conclusion builds a strong case for searches for final states with two different scalars at the LHC run 2. Finally, we propose a set of benchmark points for the real and complex singlet extensions to be tested at the LHC run 2. They have been chosen such that the discovery prospects of the involved scalars are maximised and they fulfil the dark matter constraints. Furthermore, for some of the points the theory is stable up to high energy scales. For the computation of the decay widths and branching ratios we developed the Fortran code sHDECAY, which is based on the implementation of the real and complex singlet extensions of the SM in HDECAY.
Comparison of calculation and simulation of evacuation in real buildings
Szénay, Martin; Lopušniak, Martin
2018-03-01
Each building must meet requirements for safe evacuation in order to prevent casualties. Methods for evaluating evacuation are therefore used when designing buildings. In this paper, calculation methods were tested on three real buildings. The testing used evacuation time calculations pursuant to Slovak standards and evacuation time calculations using the buildingExodus simulation software. If the calculation methods are suitably selected, taking into account the nature of the evacuation, and correct parameter values are entered, the calculated evacuation times are almost identical to the results obtained from simulation. The difference can range from 1% to 27%.
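The reported 1% to 27% spread is a plain percentage difference between the standard-calculation time and the corresponding simulated time. A minimal sketch (the example values are invented, not from the paper):

```python
def pct_difference(calculated_s: float, simulated_s: float) -> float:
    """Absolute difference of the calculated evacuation time from the
    simulated time, expressed in percent of the simulated time."""
    return abs(calculated_s - simulated_s) / simulated_s * 100.0

# Invented example: a 250 s calculated time against a 200 s simulated
# time differs by 25%, within the 1-27% range reported in the paper.
assert pct_difference(250.0, 200.0) == 25.0
```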
A thermo-mechanical benchmark calculation of a hexagonal can in the BTI accident with the ABAQUS code
International Nuclear Information System (INIS)
Zucchini, A.
1988-07-01
The thermo-mechanical behaviour of a hexagonal can in a benchmark problem (simulating the conditions of a BTI accident in a fuel assembly) is examined by means of the ABAQUS code: the effects of geometric nonlinearity are shown, and the results are compared with those of a previous analysis performed with the INCA code. (author)
Sylvetsky, Nitai; Kesharwani, Manoj K; Martin, Jan M L
2017-10-07
We have developed a new basis set family, denoted as aug-cc-pVnZ-F12 (or aVnZ-F12 for short), for explicitly correlated calculations. The sets included in this family were constructed by supplementing the corresponding cc-pVnZ-F12 sets with additional diffuse functions on the higher angular momenta (i.e., additional d-h functions on non-hydrogen atoms and p-g on hydrogen atoms), optimized for the MP2-F12 energy of the relevant atomic anions. The new basis sets have been benchmarked against electron affinities of the first- and second-row atoms, the W4-17 dataset of total atomization energies, the S66 dataset of noncovalent interactions, the Benchmark Energy and Geometry Data Base water cluster subset, and the WATER23 subset of the GMTKN24 and GMTKN30 benchmark suites. The aVnZ-F12 basis sets displayed excellent performance, not just for electron affinities but also for noncovalent interaction energies of neutral and anionic species. Appropriate CABSs (complementary auxiliary basis sets) were explored for the S66 noncovalent interaction benchmark: between similar-sized basis sets, CABSs were found to be more transferable than generally assumed.
Energy Technology Data Exchange (ETDEWEB)
Gerhard Strydom
2014-04-01
The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR-350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark and presents selected results of the three steady-state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Gas-cooled Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D "ring" model approach with a much more detailed model that includes kinetics feedback at the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.
Cousineau, Sarah M
2005-01-01
Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools have been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.
Energy Technology Data Exchange (ETDEWEB)
Primm III, RT
2002-05-29
This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.
ENDF/B-V, LIB-V, and the CSEWG benchmarks
International Nuclear Information System (INIS)
Kidman, R.B.
1981-08-01
A 70-group library, LIB-V, generated with the NJOY processing code from ENDF/B-V, is tested on most of the Cross Section Evaluation Working Group (CSEWG) fast reactor benchmarks. Every experimental measurement reported in the benchmark specifications is compared to both diffusion theory and transport theory calculations. Several comparisons with prior benchmark calculations attempt to assess the effects of data and code improvements
Comparison of supercomputers and mini-supercomputers for computational fluid dynamics calculations
International Nuclear Information System (INIS)
Gentzsch, W.
1988-01-01
Computational fluid dynamics (CFD) is a powerful tool for the simulation of complex fluid dynamics problems. In the future, the progress in CFD will depend on efficient algorithms as well as on the power and storage capacity of the computers available. A careful study and comparison of these supercomputers, therefore, is necessary. The following paper presents a short description of the Engineering and Scientific Model Benchmark, the supercomputers and mini-supercomputers under consideration, and a discussion of the benchmark results
International Nuclear Information System (INIS)
Bencik, M.; Hadek, J.
2011-01-01
The paper gives a brief survey of the seventh three-dimensional AER dynamic benchmark calculation results obtained with the codes DYN3D and RELAP5-3D at Nuclear Research Institute Rez. This benchmark was defined at the twentieth AER Symposium in Hanasaari (Finland). It is focused on the investigation of transient behaviour in a WWER-440 nuclear power plant. Its initiating event is the opening of the main isolation valve and re-connection of the loop with its main circulation pump in operation. The WWER-440 plant is at the end of the first fuel cycle and in hot full power conditions. Stationary and burnup calculations were performed with the code DYN3D. The transient calculation was made with the system code RELAP5-3D. The two-group homogenized cross-section library HELGD05, created with the HELIOS code, was used for the generation of the reactor core neutronic parameters. A detailed six-loop model of the Dukovany NPP was adapted for the purposes of the seventh AER dynamic benchmark. The RELAP5-3D full core neutronic model was coupled with 49 core thermal-hydraulic channels and 8 reflector channels connected with the three-dimensional model of the reactor vessel. A detailed nodalization of the reactor downcomer and the lower and upper plenum was used, and mixing in the lower and upper plenum was simulated. The first part of the paper contains a brief characterization of the RELAP5-3D system code and a short description of the NPP input deck and reactor core model. The second part shows the time dependencies of important global and local parameters. (Authors)
R2/R0-WTR decommissioning cost. Comparison and benchmarking analysis
International Nuclear Information System (INIS)
Varley, Geoff; Rusch, Chris
2001-10-01
SKI charged NAC International with the task of determining whether or not the decommissioning cost estimates of R2/R0 (hereafter simply referred to as R2) and Aagesta research reactors are reasonable. The associated work was performed in two phases. The objective in Phase I was to make global comparisons of the R2 and Aagesta decommissioning estimates with the estimates/actual costs for the decommissioning of similar research reactors in other countries. This report presents the results of the Phase II investigations. Phase II focused on selected discrete work packages within the decommissioning program of the WTR reactor. To the extent possible a comparison of those tasks with estimates for the R2 reactor has been made, as a basis for providing an opinion on the reasonableness of the R2 estimate. The specific WTR packages include: reactor vessel and internals dismantling; biological shield dismantling; primary coolant piping dismantling; electrical equipment removal; waste packaging; transportation and disposal of radioactive concrete and reactor components; project management, licensing and engineering; and removal of ancillary facilities. The specific tasks were characterised and analysed in terms of fundamental parameters including: task definition; labour hours expended; labour cost; labour productivity; length of work week; working efficiency; working environment and impact on job execution; external costs (contract labour, materials and equipment); total cost; waste volumes; and waste packaging and transport costs. Based on such detailed raw data, normalised unit resources have been derived for selected parts of the decommissioning program, as a first step towards developing benchmarking data for D and D activities at research reactors. Several general conclusions emerged from the WTR decommissioning project. Site characterisation can confirm or negate major assumptions, quantify waste volumes, delineate obstacles to completing work, provide an understanding
Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey
2016-04-01
Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. The interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference method to solve the 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) in long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and the 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley, located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) annual meeting in Portland, USA. It is a field dataset recording the 2011 Japan tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and the 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT
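As a self-contained illustration of the depth-averaged approach used by codes such as NAMI DANCE (reduced here to the linearized 1D equations on a flat bottom, with placeholder grid and wave parameters, not the code's actual numerics), a staggered-grid finite difference step can be sketched as:

```python
import numpy as np

def step_swe_1d(eta, u, h, dx, dt, g=9.81):
    """One staggered-grid step of the linearized 1D shallow water equations
    with periodic boundaries: du/dt = -g d(eta)/dx, d(eta)/dt = -h du/dx.
    Velocity points sit half a cell left of surface points; np.roll wraps."""
    u_new = u - g * dt / dx * (eta - np.roll(eta, 1))
    eta_new = eta - h * dt / dx * (np.roll(u_new, -1) - u_new)
    return eta_new, u_new

def run(nx=200, L=1000.0, h=10.0, steps=150):
    """Propagate an initial Gaussian hump of unit amplitude."""
    g = 9.81
    dx = L / nx
    c = np.sqrt(g * h)          # long-wave speed
    dt = 0.5 * dx / c           # CFL number 0.5
    x = np.arange(nx) * dx
    eta = np.exp(-((x - L / 2) / 30.0) ** 2)
    u = np.zeros(nx)
    for _ in range(steps):
        eta, u = step_swe_1d(eta, u, h, dx, dt)
    return eta
```

With a CFL number of 0.5 the scheme is stable, and the hump splits into two long waves of half the amplitude travelling at c = sqrt(g*h), the behaviour the full 2D NSWE codes reproduce on real bathymetry.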
Comparison of three-dimensional ocean general circulation models on a benchmark problem
International Nuclear Information System (INIS)
Chartier, M.
1990-12-01
A French and an American Ocean General Circulation Model for deep-sea disposal of radioactive wastes are compared on a benchmark test problem. Both models are three-dimensional. They solve the hydrostatic primitive equations of the ocean with two different finite difference techniques. Results show that the dynamics simulated by both models are consistent. Several methods for starting a model from a known state are tested in the French model: the diagnostic method, the prognostic method, the acceleration of convergence and the robust-diagnostic method.
The ERCP quality network benchmarking project: a preliminary comparison of practice in UK and USA.
Oppong, Kofi W; Romagnuolo, Joseph; Cotton, Peter B
2012-07-01
The Endoscopic retrograde cholangiopancreatography (ERCP) Quality Network is a voluntary system for submission of data to generate individual report cards and benchmarking. The aim of this study was to compare aspects of ERCP practice between USA and UK participants. Analysis was limited to USA and UK based endoscopists who had each entered more than 30 cases. A number of practice and performance measures were studied, including rates of deep biliary cannulation, sedation use and success in bile duct stone removal. Patients attended for routine and emergency ERCP in participating tertiary and secondary care units in the UK and USA. 61 US endoscopists performed 18 182 procedures and 16 UK endoscopists performed 3172, respectively. The UK participants performed less complex procedures as judged by the accepted complexity grading system (8% versus 35% at grade 3). For small bile duct stones, the removal success rate was 96% compared with 99%, with high success in both cohorts, albeit significantly better in the USA. The inexorable drive to greater accountability and transparency of outcomes in endoscopic practice is likely to lead to increased participation in subsequent benchmarking projects.
International Nuclear Information System (INIS)
Lee, Kyung-Hoon; Kim, Kang-Seog; Cho, Jin-Young; Song, Jae-Seung; Noh, Jae-Man; Lee, Chung-Chan
2008-01-01
The IAEA's gas-cooled reactor program has coordinated international cooperation for an evaluation of high temperature gas-cooled reactor performance, which includes a validation of the physics analysis codes and the performance models for the proposed GT-MHR. This benchmark problem consists of the pin and block calculations and the reactor physics of the control rod worth for the GT-MHR with weapon-grade plutonium fuel. The benchmark analysis has been performed by using the HELIOS/MASTER deterministic code package and the MCNP Monte Carlo code. The deterministic code package adopts a conventional 2-step procedure in which few-group constants are generated by a transport lattice calculation, and the reactor physics analysis is performed by a 3-dimensional diffusion calculation. In order to solve particular modeling issues in the GT-MHR, recently developed technologies were utilized and a new analysis procedure was devised. The double heterogeneity effect was covered by using the reactivity-equivalent physical transformation (RPT) method. The strong core-reflector interaction was resolved by applying an equivalence theory to the generation of the reflector cross sections. In order to accurately handle the very large control rods, which are asymmetrically located in the fuel and reflector blocks, surface dependent discontinuity factors (SDFs) were considered in applying the equivalence theory. A new method has been devised to consider SDFs without any modification of the nodal solver in MASTER. All computational results of the HELIOS/MASTER code package were compared with those of MCNP. The multiplication factors of HELIOS for the pin cells are in very good agreement with those of MCNP, to within a maximum error of 693 pcm Δρ. The maximum differences of the multiplication factors for the fuel blocks are about 457 pcm Δρ, and the control rod worths of HELIOS are consistent with those of MCNP to within a maximum error of 3.09%. On considering a SDF in the core
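The code-to-code differences above are quoted in pcm of reactivity. For reference, a minimal sketch of that conversion (conventions vary slightly between institutions; this is the common 1e5*(1/k1 - 1/k2) form):

```python
def reactivity_pcm(k):
    """Reactivity rho = (k - 1)/k expressed in pcm (1 pcm = 1e-5)."""
    return (k - 1.0) / k * 1e5

def delta_rho_pcm(k1, k2):
    """Reactivity difference between two eigenvalue results, in pcm:
    delta_rho = (1/k1 - 1/k2) * 1e5."""
    return (1.0 / k1 - 1.0 / k2) * 1e5
```

A Δk of 0.7% between two codes at k near unity thus corresponds to roughly 695 pcm, i.e. the order of the maximum pin-cell discrepancy quoted above.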
International Nuclear Information System (INIS)
Khan, M.J.H.; Sarker, M.M.; Islam, S.M.A.
2013-01-01
Highlights: ► To validate the SRAC2006 code system for TRIGA neutronics calculations. ► TRX and BAPL lattices are treated as standard benchmarks for this purpose. ► To compare the calculated results with experimental as well as MCNP values. ► The study demonstrates good agreement with the experimental and MCNP results. ► Thus, this analysis constitutes a validation study of the SRAC2006 code system. - Abstract: The goal of this study is to present a validation of the SRAC2006 code system, based on the evaluated nuclear data libraries ENDF/B-VII.0 and JENDL-3.3, for neutronics analysis of the TRIGA Mark-II Research Reactor at AERE, Bangladesh. This is achieved through the analysis of the integral parameters of the TRX and BAPL benchmark lattices of thermal reactors. In integral measurements, the thermal reactor lattices TRX-1, TRX-2, BAPL-UO2-1, BAPL-UO2-2 and BAPL-UO2-3 are treated as standard benchmarks for validating the SRAC2006 code system as well as the nuclear data libraries. The integral parameters of these lattices are calculated using the collision probability transport code PIJ of the SRAC2006 code system at room temperature (20 °C), based on the above libraries. The calculated integral parameters are compared to the measured values as well as to MCNP values based on the Chinese evaluated nuclear data library CENDL-3.0. In most cases, the integral parameters show good agreement with the experimental and MCNP results. In addition, the group constants in SRAC format for the TRX and BAPL lattices, in the fast and thermal energy ranges respectively, are compared between the above libraries and found to be nearly identical. Therefore, this analysis constitutes a validation of the SRAC2006 code system based on the evaluated nuclear data libraries JENDL-3.3 and ENDF/B-VII.0 and can serve as a basis for further neutronics calculations
International Nuclear Information System (INIS)
Daavittila, Antti; Haemaelaeinen, Anitta; Kyrki-Rajamaeki, Riitta
2003-01-01
All of the three exercises of the Organization for Economic Cooperation and Development/Nuclear Regulatory Commission pressurized water reactor main steam line break (PWR MSLB) benchmark were calculated at VTT, the Technical Research Centre of Finland. For the first exercise, the plant simulation with point-kinetic neutronics, the thermal-hydraulics code SMABRE was used. The second exercise was calculated with the three-dimensional reactor dynamics code TRAB-3D, and the third exercise with the combination TRAB-3D/SMABRE. VTT has over ten years' experience of coupling neutronic and thermal-hydraulic codes, but this benchmark was the first time these two codes, both developed at VTT, were coupled together. The coupled code system is fast and efficient; the total computation time of the 100-s transient in the third exercise was 16 min on a modern UNIX workstation. The results of all the exercises are similar to those of the other participants. In order to demonstrate the effect of secondary circuit modeling on the results, three different cases were calculated. In case 1 there is no phase separation in the steam lines and no flow reversal in the aspirator. In case 2 the flow reversal in the aspirator is allowed, but there is no phase separation in the steam lines. Finally, in case 3 the drift-flux model is used for the phase separation in the steam lines, but the aspirator flow reversal is not allowed. With these two modeling variations, it is possible to cover a remarkably broad range of results. The maximum power level reached after the reactor trip varies from 534 to 904 MW, the range of the time of the power maximum being close to 30 s. Compared to the total calculated transient time of 100 s, the effect of the secondary side modeling is extremely important
Energy Technology Data Exchange (ETDEWEB)
Newton, T. H.; Wilson, E. H; Bergeron, A.; Horelik, N.; Stevens, J. (Nuclear Engineering Division); (MIT Nuclear Reactor Lab.)
2011-03-02
The Massachusetts Institute of Technology Reactor (MITR-II) is a research reactor in Cambridge, Massachusetts designed primarily for experiments using neutron beam and in-core irradiation facilities. It delivers a neutron flux comparable to current LWR power reactors in a compact 6 MW core using Highly Enriched Uranium (HEU) fuel. In the framework of its non-proliferation policies, the international community presently aims to minimize the amount of nuclear material available that could be used for nuclear weapons. In this geopolitical context, most research and test reactors both domestic and international have started a program of conversion to the use of Low Enriched Uranium (LEU) fuel. A new type of LEU fuel based on an alloy of uranium and molybdenum (UMo) is expected to allow the conversion of U.S. domestic high performance reactors like the MITR-II reactor. Towards this goal, comparisons of MCNP5 Monte Carlo neutronic modeling results for HEU and LEU cores have been performed. Validation of the model has been based upon comparison to HEU experimental benchmark data for the MITR-II. The objective of this work was to demonstrate a model which could represent the experimental HEU data, and therefore could provide a basis to demonstrate LEU core performance. This report presents an overview of MITR-II model geometry and material definitions which have been verified, and updated as required during the course of validation to represent the specifications of the MITR-II reactor. Results of calculations are presented for comparisons to historical HEU start-up data from 1975-1976, and to other experimental benchmark data available for the MITR-II Reactor through 2009. This report also presents results of steady state neutronic analysis of an all-fresh LEU fueled core. Where possible, HEU and LEU calculations were performed for conditions equivalent to HEU experiments, which serves as a starting point for safety analyses for conversion of MITR-II from the use of HEU
Lorenz, Marco; Civalleri, Bartolomeo; Maschio, Lorenzo; Sgroi, Mauro; Pullini, Daniele
2014-09-15
The physisorption of water on graphene is investigated with the hybrid density functional theory (DFT) functional B3LYP combined with empirical corrections, using moderate-sized basis sets such as 6-31G(d). This setup allows the interaction of water with graphene to be modeled beyond the quality of classical or semiclassical simulations, while still keeping the computational costs under control. Good agreement with respect to Coupled Cluster with singles and doubles excitations and perturbative triples (CCSD(T)) results is achieved for the adsorption of a single water molecule in a benchmark with two DFT functionals (Perdew/Burke/Ernzerhof (PBE), B3LYP) and Grimme's empirical dispersion and counterpoise corrections. We apply the same setting to graphene supported by epitaxial hexagonal boron nitride (h-BN), leading to an increased interaction energy. To further demonstrate the capability of the empirical corrections, we model, entirely from first principles, the electronic properties of graphene and of graphene supported by h-BN covered with different amounts of water (one and ten water molecules per cell, and full coverage). The effect of h-BN on these properties turns out to be negligibly small, making it a good candidate for a substrate on which to grow graphene. Copyright © 2014 Wiley Periodicals, Inc.
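The counterpoise correction mentioned here (Boys-Bernardi) combines three separate total-energy calculations; a minimal bookkeeping sketch, with the input energies as hypothetical hartree values rather than numbers from the paper:

```python
def interaction_energy_cp(e_ab, e_a_in_ab_basis, e_b_in_ab_basis):
    """Boys-Bernardi counterpoise-corrected interaction energy (hartree):
    E_int = E(AB) - E(A in the full AB basis) - E(B in the full AB basis).
    The monomer energies are evaluated with the partner's ghost functions
    present, which removes basis set superposition error."""
    return e_ab - e_a_in_ab_basis - e_b_in_ab_basis

HARTREE_TO_KJ = 2625.4996  # kJ/mol per hartree

def to_kj_per_mol(e_hartree):
    """Convert an interaction energy from hartree to kJ/mol."""
    return e_hartree * HARTREE_TO_KJ
```

Negative values of the corrected energy indicate binding; the three inputs come from three independent electronic structure runs sharing one geometry.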
Comparison of typical inelastic analysis predictions with benchmark problem experimental results
International Nuclear Information System (INIS)
Clinard, J.A.; Corum, J.M.; Sartory, W.K.
1975-01-01
The results of exemplary inelastic analyses are presented for a series of experimental benchmark problems. Consistent analytical procedures and constitutive relations were used in each of the analyses, and published material behavior data were used in all cases. Two finite-element inelastic computer programs were employed. These programs implement the analysis procedures and constitutive equations for Type 304 stainless steel that are currently used in many analyses of elevated-temperature nuclear reactor system components. The analysis procedures and constitutive relations are briefly discussed, and representative analytical results are presented and compared to the test data. The results that are presented demonstrate the feasibility of performing inelastic analyses, and they are indicative of the general level of agreement that the analyst might expect when using conventional inelastic analysis procedures. (U.S.)
ROEED, K; BRUGGER, M; CALVIANI, M; CERUTTI, F; CHIN, P W; CHRISTOV, A; FERRARI, A; KRAMER, D; KWEE, R E; LEBBOS, E; LECHNER, A; LOSITO, R; MALA, P; MEREGHETTI, A; NOWAK, E M; SINUELA PASTOR, D; SPIEZIA, G; THORNTON, A; VERSACI, R; VLACHOUDIS, V; WEISS, C; CERN. Geneva. ATS Department
2011-01-01
At the LHC, various underground areas are partly equipped with commercial electronic devices not specifically designed to be radiation tolerant. A major concern is therefore radiation-induced failures, in particular due to Single Event Upsets (SEU). To ensure safe and acceptable operation of the LHC electronics, a combination of FLUKA Monte Carlo simulations and dedicated online monitoring is applied to determine the expected radiation levels in critical areas. The LHC Radiation Monitor (RadMon), which is used for this purpose, has already been extensively calibrated for its radiation response in various irradiation facilities. It is nevertheless of high importance to also provide a real LHC application benchmark to validate the approach of combined simulations and monitoring to correctly measure and predict radiation levels. This report therefore presents a comparison between FLUKA Monte Carlo simulations and measurement results using the RadMon in the LHC collimation region IR7. The work is carried out with...
Grenier, Christophe; Roux, Nicolas; Anbergen, Hauke; Collier, Nathaniel; Costard, Francois; Ferrry, Michel; Frampton, Andrew; Frederick, Jennifer; Holmen, Johan; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Orgogozo, Laurent; Rivière, Agnès; Rühaak, Wolfram; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik
2015-04-01
The impacts of climate change in boreal regions have received considerable attention recently due to the warming trends experienced in recent decades, which are expected to intensify in the future. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) that interact with the surrounding permafrost. For example, the thermal state of the surrounding soil influences the energy and water budget of the surface water bodies. Also, these water bodies generate taliks (unfrozen zones below) that disturb the thermal regimes of permafrost and may play a key role in the context of climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D thermo-hydraulic (TH) approach is required to understand and model the past and future evolution of landscapes, rivers, lakes and associated groundwater systems in a changing climate. However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, which can be partly attributed to the difficulty of verifying multi-dimensional results produced by numerical models. Numerical approaches can only be validated against analytical solutions for a purely thermal 1D equation with phase change (e.g. Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare the results from different codes on provided test cases and/or to use controlled experiments for validation. Such inter-code comparisons can drive discussions on how to improve code performance. A benchmark exercise was launched in 2014 with a kick-off meeting in Paris in November. Participants from the USA, Canada, Germany, Sweden and France convened, representing altogether 13 simulation codes. The benchmark exercises consist of several test cases inspired by the existing literature (e.g. McKenzie et al., 2007) as well as new ones. They
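The 1D analytical solution with phase change mentioned above makes a compact verification target. A sketch of the one-phase Stefan (Neumann-type) solution, with the transcendental equation solved by bisection; the thermal diffusivity and Stefan number used in the usage note are placeholders, not benchmark values:

```python
import math

def stefan_lambda(ste, tol=1e-12):
    """Solve lambda * exp(lambda^2) * erf(lambda) = Ste / sqrt(pi)
    for the one-phase Stefan problem (melting from a wall held at a fixed
    temperature above the melting point). f is monotone, so bisection works."""
    f = lambda lam: lam * math.exp(lam**2) * math.erf(lam) - ste / math.sqrt(math.pi)
    lo, hi = 1e-9, 5.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def front_position(t, alpha, ste):
    """Analytical thaw/melt front depth X(t) = 2 * lambda * sqrt(alpha * t),
    with alpha the thermal diffusivity and Ste the Stefan number."""
    return 2.0 * stefan_lambda(ste) * math.sqrt(alpha * t)
```

Against such a solution a TH code's 1D thaw-front position can be checked before moving on to the coupled, multi-dimensional benchmark cases; for small Ste the root approaches sqrt(Ste/2), a convenient sanity check.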
Perera, Ajith; Gauss, Jürgen; Verma, Prakash; Morales, Jorge A.
2017-04-01
We present a parallel implementation to compute electron spin resonance g-tensors at the coupled-cluster singles and doubles (CCSD) level which employs the ACES III domain-specific software tools for scalable parallel programming, i.e., the super instruction architecture language and processor (SIAL and SIP), respectively. A unique feature of the present implementation is the exact (not approximated) inclusion of the five one- and two-particle contributions to the g-tensor [i.e., the mass correction, one- and two-particle paramagnetic spin-orbit, and one- and two-particle diamagnetic spin-orbit terms]. As in a previous implementation with effective one-electron operators [J. Gauss et al., J. Phys. Chem. A 113, 11541-11549 (2009)], our implementation utilizes analytic CC second derivatives and therefore classifies as a true CC linear-response treatment. It can thus unambiguously appraise the accuracy of less costly effective one-particle schemes and provide a rationale for their widespread use. We have considered a large selection of radicals used previously for benchmarking purposes, including those studied in earlier work, and conclude that at the CCSD level the effective one-particle scheme satisfactorily captures the two-particle effects at lower cost than the rigorous two-particle scheme. With respect to the performance of density functional theory (DFT), we note that results obtained with the B3LYP functional exhibit the best agreement with our CCSD results. However, in general, the CCSD results agree better with the experimental data than the best DFT/B3LYP results, although in most cases within the rather large experimental error bars.
Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.
1998-01-01
We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems, and its memory and timing requirements are compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640, and the linear-scaling memory and CPU requirements of the CEM are demonstrated. We show that the CPU requirements of the CEM and CG-DMS are similar for calculations with comparable accuracy.
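The core of the CEM is that a matrix function (for tight binding, the density matrix as a smoothed step function of the Hamiltonian) is built from the two-term Chebyshev recurrence, so only matrix-matrix products are needed. A small dense-matrix sketch of that machinery (illustrative only; the linear scaling of the actual method comes from keeping the matrices sparse):

```python
import numpy as np

def chebyshev_coeffs(g, order):
    """Chebyshev coefficients of a scalar function g on [-1, 1],
    computed at the Gauss-Chebyshev nodes; c[0] is already halved."""
    k = np.arange(order)
    x = np.cos(np.pi * (k + 0.5) / order)
    gx = g(x)
    c = np.array([2.0 / order * np.sum(gx * np.cos(np.pi * j * (k + 0.5) / order))
                  for j in range(order)])
    c[0] *= 0.5
    return c

def matrix_function_cheb(H, f, order, emin, emax):
    """Approximate f(H) for symmetric H with spectrum inside [emin, emax]
    using the recurrence T_{k+1} = 2*Hs*T_k - T_{k-1}; only matrix products
    are required, the basis of linear-scaling schemes."""
    n = H.shape[0]
    a = (emax - emin) / 2.0
    b = (emax + emin) / 2.0
    Hs = (H - b * np.eye(n)) / a                  # spectrum mapped into [-1, 1]
    c = chebyshev_coeffs(lambda x: f(a * x + b), order)
    T_prev, T_curr = np.eye(n), Hs.copy()
    F = c[0] * T_prev + c[1] * T_curr
    for ck in c[2:]:
        T_prev, T_curr = T_curr, 2.0 * Hs @ T_curr - T_prev
        F = F + ck * T_curr
    return F
```

Replacing f with a smoothed Fermi step at the chemical potential yields the density matrix used in the CEM; any smooth f with known spectral bounds works the same way.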
Comparison of analytical methods for calculation of wind loads
Minderman, Donald J.; Schultz, Larry L.
1989-01-01
The following analysis is a comparison of analytical methods for the calculation of wind load pressures. The analytical methods specified in ASCE Paper No. 3269, ANSI A58.1-1982, the Standard Building Code, and the Uniform Building Code were analyzed using various hurricane speeds to determine the differences in the calculated results. The winds used for the analysis ranged from 100 mph to 125 mph and were applied inland from the shoreline of a large open body of water (i.e., an enormous lake or the ocean) at a distance of 1500 feet or ten times the height of the building or structure considered. For a building or structure less than or equal to 250 feet in height acted upon by a wind greater than or equal to 115 mph, it was determined that the method specified in ANSI A58.1-1982 calculates a larger wind load pressure than the other methods. For a building or structure between 250 feet and 500 feet tall acted upon by a wind ranging from 100 mph to 110 mph, there is no clear choice of which method to use; for these cases, the factors that must be considered are the steady-state or peak wind velocity, the geographic location, the distance from a large open body of water, and the expected design life and its risk factor.
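For reference, the ANSI A58.1-1982 result cited here is driven by its velocity-pressure expression q = 0.00256*Kz*(I*V)^2 (q in psf, V in mph); a minimal sketch, with the exposure coefficient Kz and importance factor I left as caller-supplied placeholders rather than looked up from the standard's tables:

```python
def velocity_pressure_psf(v_mph, kz=1.0, importance=1.0):
    """Velocity pressure q = 0.00256 * Kz * (I * V)^2 in pounds per square
    foot, with V the basic wind speed in mph, in the ANSI A58.1-1982 form.
    Kz (exposure/height) and I (importance) default to 1.0 here; the real
    standard tabulates them by exposure category, height and building use."""
    return 0.00256 * kz * (importance * v_mph) ** 2
```

At 115 mph with Kz = I = 1 this gives about 33.9 psf, and the quadratic dependence on V is why the method comparison above is so sensitive to the assumed hurricane speed.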
Benchmark CCSD-SAPT study of rare gas dimers with comparison to MP-SAPT and DFT-SAPT
Shirkov, Leonid; Sladek, Vladimir
2017-11-01
Symmetry-adapted perturbation theory (SAPT) based on a coupled cluster with single and double excitations (CCSD) treatment of intramonomer electron correlation effects was applied to study rare gas homodimers from He2 to Kr2. The obtained benchmark CCSD-SAPT energies, including cumulant contributions to the first-order exchange and second-order exchange-induction terms, were then compared to their counterparts found using other methods: MP-SAPT, based on many-body Møller-Plesset perturbation theory, and DFT-SAPT, based on density functional theory. The SAPT terms up to the second order were calculated with basis sets close to the complete basis set over a large range of interatomic distances R. It was shown that the overestimation of the binding energies De found with DFT-SAPT, reported in the work of Shirkov and Makarewicz [J. Chem. Phys. 142, 064102 (2015)] for Ar2 and Kr2, is mostly due to underestimation of the first-order exchange energy Eexch(1) when compared to the CCSD-SAPT benchmark. The CCSD-SAPT potentials were found to give the following values of the dissociation energies D0: 0.0006 cm-1 for He2, 16.71 cm-1 for Ne2, 85.03 cm-1 for Ar2, and 129.81 cm-1 for Kr2, which agree well with the values found from previously reported highly accurate ab initio supermolecular potentials and experimental data. The long-range dispersion coefficients C2n up to n = 6 that give the dispersion energy asymptotically equivalent to its SAPT counterpart were calculated from dynamic multipole polarizabilities at different levels of theory.
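The long-range coefficients C2n reported here define the asymptotic dispersion series, which is assembled in a few lines. A sketch (the C6 value in the usage note is only a rough literature-scale number for Ar2, not one of the paper's results):

```python
def dispersion_energy(r, c2n):
    """Asymptotic dispersion energy E(R) = -sum_n C_{2n} / R^(2n), in atomic
    units. `c2n` maps the half-order n (n=3 -> C6, ..., n=6 -> C12) to its
    coefficient. Valid only at large R, where exchange contributions vanish."""
    return -sum(c / r ** (2 * n) for n, c in c2n.items())
```

For example, dispersion_energy(10.0, {3: 64.3}) evaluates a C6-only tail at R = 10 bohr; adding the C8-C12 terms steepens the attraction at shorter separations.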
Knöös, Tommy; Wieslander, Elinore; Cozzi, Luca; Brink, Carsten; Fogliata, Antonella; Albers, Dirk; Nyström, Håkan; Lassen, Søren
2006-11-01
The performance of five commercial radiotherapy treatment planning systems (TPSs) was studied for common treatment sites with regard to their ability to model heterogeneities and scattered photons. The comparison was based on CT information for prostate, head and neck, breast and lung cancer cases. The TPSs were installed locally at different institutions and commissioned for clinical use based on local procedures. For the evaluation, beam qualities as identical as possible were used: low-energy (6 MV) and high-energy (15 or 18 MV) x-rays. All relevant anatomical structures were outlined and simple treatment plans were set up. Images, structures and plans were exported, anonymized and distributed to the participating institutions using the DICOM protocol. The plans were then re-calculated locally and exported back for evaluation. The TPSs cover dose calculation techniques from correction-based equivalent path length algorithms to model-based algorithms. These were divided into two groups based on how changes in electron transport are accounted for ((a) not considered and (b) considered). Increasing the complexity from the relatively homogeneous pelvic region to the very inhomogeneous lung region resulted in less accurate dose distributions. Improvements in the calculated dose have been shown when models consider volume scatter and changes in electron transport, especially when the extent of the irradiated volume was limited and when low densities were present in or adjacent to the fields. A Monte Carlo calculated algorithm input data set and a benchmark set for a virtual linear accelerator have been produced, which have facilitated the analysis and interpretation of the results. The more sophisticated models in the type (b) group exhibit changes in both the absorbed dose and its distribution which are congruent with the simulations performed by the Monte Carlo-based virtual accelerator.
International Nuclear Information System (INIS)
Hoffman, E.L.; Ammerman, D.J.
1995-01-01
A series of tests investigating dynamic pulse buckling of a cylindrical shell under axial impact is compared to several 2D and 3D finite element simulations of the event. The purpose of the work is to investigate the performance of various analysis codes and element types on a problem which is applicable to radioactive material transport packages, and ultimately to develop a benchmark problem to qualify finite element analysis codes for the transport package design industry. During the pulse buckling tests, a buckle formed at each end of the cylinder, and one of the two buckles became unstable and collapsed. Numerical simulations of the test were performed using PRONTO, a Sandia-developed transient dynamics analysis code, and ABAQUS/Explicit with both shell and continuum elements. The calculations are compared to the tests with respect to deformed shape and impact load history.
Jansky, B; Turzik, Z; Kyncl, J; Cvachovec, F; Trykov, L A; Volkov, V S
2002-01-01
Neutron and gamma spectra measurements have been made for benchmark iron spherical assemblies with diameters of 30, 50 and 100 cm. ²⁵²Cf neutron sources with different emissions were placed in the centre of the iron spheres. In the first stage of the project, independent laboratories took part in the leakage spectra measurements. The proton recoil method was used with stilbene crystals and hydrogen proportional counters. The working range of the spectrometers is 0.01 to 16 MeV for neutrons and 0.40 to 12 MeV for gamma rays. Corresponding calculations have been carried out. It is proposed to carefully analyse the mixed neutron and gamma leakage spectrum from the iron sphere of 50 cm diameter and then to adopt that field as a standard.
International Nuclear Information System (INIS)
Schenter, R.E.; Oliver, B.M.; Farrar, H. IV
1987-01-01
Spectrum-integrated cross sections for ⁶Li and ¹⁰B from five benchmark fast reactor neutron fields are compared with calculated values obtained using the ENDF/B-V Cross Section Files. The benchmark fields include the Coupled Fast Reactivity Measurements Facility (CFRMF) at the Idaho National Engineering Laboratory, the 10% Enriched U-235 Critical Assembly (BIG-10) at Los Alamos National Laboratory, the Sigma Sigma and Fission Cavity fields of the BR-1 reactor at CEN/SCK, and the Intermediate-Energy Standard Neutron Field (ISNF) at the National Bureau of Standards. Results from least-squares analyses using the FERRET computer code to obtain adjusted cross section values and their uncertainties are presented. Inputs to these calculations include the above five benchmark data sets. These analyses indicate a need for revision of the ENDF/B-V files for the ¹⁰B cross section at energies above 50 keV
International Nuclear Information System (INIS)
Schenter, R.E.; Oliver, B.M.; Farrar, H. IV.
1986-06-01
Spectrum-integrated cross sections for ⁶Li and ¹⁰B from five benchmark fast reactor neutron fields are compared with calculated values obtained using the ENDF/B-V Cross Section Files. The benchmark fields include the Coupled Fast Reactivity Measurements Facility (CFRMF) at the Idaho National Engineering Laboratory, the 10% Enriched U-235 Critical Assembly (BIG-10) at Los Alamos National Laboratory, the Sigma-Sigma and Fission Cavity fields of the BR-1 reactor at CEN/SCK, and the Intermediate Energy Standard Neutron Field (ISNF) at the National Bureau of Standards. Results from least-squares analyses using the FERRET computer code to obtain adjusted cross section values and their uncertainties are presented. Inputs to these calculations include the above five benchmark data sets. These analyses indicate a need for revision of the ENDF/B-V files for the ¹⁰B and ⁶Li cross sections at energies above 50 keV
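The two records above compare spectrum-integrated cross sections against values computed from ENDF/B-V. The quantity being compared is the flux-weighted one-group collapse, sigma_avg = ∫σ(E)φ(E)dE / ∫φ(E)dE. A minimal multigroup sketch of that collapse follows; the group cross sections and fluxes are illustrative placeholders, not benchmark data:

```python
# Hedged sketch: spectrum-averaged cross section on a coarse multigroup grid.
# All numerical values below are hypothetical, chosen only to illustrate the
# flux-weighted collapse used in benchmark-field comparisons.

def spectrum_averaged_xs(sigma_g, flux_g):
    """One-group cross section: sum(sigma_g * flux_g) / sum(flux_g)."""
    total_flux = sum(flux_g)
    return sum(s * f for s, f in zip(sigma_g, flux_g)) / total_flux

sigma_g = [940.0, 150.0, 3.0, 0.5]   # group cross sections (barns), hypothetical
flux_g  = [0.01, 0.09, 0.40, 0.50]   # normalized group fluxes, hypothetical
sig_avg = spectrum_averaged_xs(sigma_g, flux_g)
```

Comparing such a collapsed value, computed with a library-derived σ(E), against the measured spectrum-integrated value is what drives the adjustment analyses mentioned above.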
A benchmark comparison of East and West German industrial labour productivity in 1954
Sleifer, Jaap
2002-01-01
This paper presents a level comparison of the East and West German industrial labour productivity in 1954. According to this estimate, which is based on the quantity approach, the East German industrial labour productivity in 1954 amounted to between 61.6 and 64.7 percent of the West German level. For the
Leckey, Cara A C; Wheeler, Kevin R; Hafiychuk, Vasyl N; Hafiychuk, Halyna; Timuçin, Doğan A
2018-03-01
Ultrasonic wave methods constitute the leading physical mechanism for nondestructive evaluation (NDE) and structural health monitoring (SHM) of solid composite materials, such as carbon fiber reinforced polymer (CFRP) laminates. Computational models of ultrasonic wave excitation, propagation, and scattering in CFRP composites can be extremely valuable in designing practicable NDE and SHM hardware, software, and methodologies that accomplish the desired accuracy, reliability, efficiency, and coverage. The development and application of ultrasonic simulation approaches for composite materials is an active area of research in the field of NDE. This paper presents comparisons of guided wave simulations for CFRP composites implemented using four different simulation codes: the commercial finite element modeling (FEM) packages ABAQUS, ANSYS, and COMSOL, and a custom code executing the Elastodynamic Finite Integration Technique (EFIT). Benchmark comparisons are made between the simulation tools and both experimental laser Doppler vibrometry data and theoretical dispersion curves. A pristine case and a delamination-type case (Teflon insert in the experimental specimen) are studied. A summary is given of the accuracy of simulation results and the respective computational performance of the four different simulation tools. Published by Elsevier B.V.
International Nuclear Information System (INIS)
Bitter, M.; Gu, M.F.; Vainshtein, L.A.; Beiersdorfer, P.; Bertschinger, G.; Marchuk, O.; Bell, R.; LeBlanc, B.; Hill, K.W.; Johnson, D.; Roquemore, L.
2003-01-01
Dielectronic satellite spectra of helium-like argon, recorded with a high-resolution X-ray crystal spectrometer at the National Spherical Torus Experiment, were found to be inconsistent with existing predictions, resulting in unacceptable values for the power balance and suggesting the unlikely existence of non-Maxwellian electron energy distributions. These problems were resolved with calculations from a new atomic code. It is now possible to perform reliable electron temperature measurements and to eliminate the uncertainties associated with determinations of non-Maxwellian distributions.
Energy Technology Data Exchange (ETDEWEB)
Elsawi, Mohamed A., E-mail: Mohamed.elsawi@kustar.ac.ae; Hraiz, Amal S. Bin, E-mail: Amal.Hraiz@kustar.ac.ae
2015-11-15
Highlights: • AP1000 core configuration is challenging due to its high degree of heterogeneity. • The proposed code was used to model neutronics/TH behavior of the AP1000 reactor. • Enhanced modeling features in WIMS9 facilitated neutronics modeling of the reactor. • PARCS/TRACE coupled code system was used to model the temperature feedback effects. • Final results showed reasonable agreement with publicly available reactor data. - Abstract: The objective of this paper is to assess the accuracy of the WIMS9/PARCS/TRACE code system for power density calculations of the Westinghouse AP1000™ nuclear reactor, as a representative of modern pressurized water reactors (Gen III+). The cross section libraries were generated using the lattice physics code WIMS9 (the commercial version of the legacy lattice code WIMSD). Nine different fuel assembly types were analyzed in WIMS9 to generate the two-group cross sections required by the PARCS core simulator. The nine fuel assembly types were identified based on the distribution of Pyrex discrete burnable absorber (borosilicate glass) and integral fuel burnable absorber (IFBA) rods in each fuel assembly. The generated cross sections were passed to the coupled core simulator PARCS/TRACE, which performed 3-D, full-core diffusion calculations from within the US NRC Symbolic Nuclear Analysis Package (SNAP) interface. The results, which included assembly power distribution, effective multiplication factor (k{sub eff}), radial and axial power density, and whole-core depletion, were compared to reference Monte Carlo results and to published reactor data available in the AP1000 Design Control Document (DCD). The results of the study show acceptable accuracy of the WIMS9/PARCS/TRACE code in predicting the power density of the AP1000 core and, hence, establish its adequacy in the evaluation of the neutronics parameters of modern PWRs of similar designs. The work reported here is new in that it uses, for the first time, the
Directory of Open Access Journals (Sweden)
Wiji Suwarno
2017-02-01
The term benchmarking is encountered in the implementation of total quality management (TQM), termed holistic quality management in Indonesian, because benchmarking is a tool to look for ideas or to learn from other libraries. Benchmarking is a systematic and continuous process of measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.
Analysis of DOE's Roof Savings Calculator with Comparison to other Simulation Engines
Energy Technology Data Exchange (ETDEWEB)
New, Joshua Ryan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Huang, Yu [White Box Technologies, Salt Lake City, UT (United States); Levinson, Ronnen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mellot, Joe [The Garland Company, Cleveland, OH (United States); Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Childs, Kenneth W [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2014-01-01
A web-based Roof Savings Calculator (RSC) has been deployed for the Department of Energy as an industry-consensus tool to help building owners, manufacturers, distributors, contractors and researchers easily run complex roof and attic simulations. This tool employs the latest web technologies and usability design to provide an easy input interface to an annual simulation of hour-by-hour, whole-building performance using the world-class simulation tools DOE-2.1E and AtticSim. Building defaults were assigned based on national averages and can provide estimated annual energy and cost savings after the user selects nothing more than building location. In addition to cool reflective roofs, the RSC tool can simulate multiple roof and attic configurations including different roof slopes, above-sheathing ventilation, radiant barriers, low-emittance surfaces, HVAC duct location, duct leakage rates, multiple layers of building materials, ceiling and deck insulation levels, and other parameters. A base case and an energy-efficient alternative can be compared side-by-side to generate an energy/cost savings estimate between two buildings. The RSC tool was benchmarked against field data for demonstration homes in Ft. Irwin, CA. However, RSC gives different energy savings estimates than previous cool roof simulation tools, so more thorough software and empirical validation proved necessary. This report consolidates much of the preliminary analysis for comparison of RSC's projected energy savings to those from other simulation engines.
Neale, Chris; Camfield, David; Reay, Jonathon; Stough, Con; Scholey, Andrew
2013-01-01
Over recent years there has been increasing research into both pharmaceutical and nutraceutical cognition enhancers. Here we aimed to calculate the effect sizes of the positive cognitive effects of the pharmaceutical modafinil in order to benchmark the effects of two widely used nutraceuticals, Ginseng and Bacopa (which have consistent acute and chronic cognitive effects, respectively). A search strategy was implemented to capture clinical studies into the neurocognitive effects of modafinil, Ginseng and Bacopa. Studies undertaken on healthy human subjects using a double-blind, placebo-controlled design were included. For each study where appropriate data were included, effect sizes (Cohen's d) were calculated for measures showing significant positive and negative effects of treatment over placebo. The highest effect sizes for cognitive outcomes were 0.77 for modafinil (visuospatial memory accuracy), 0.86 for Ginseng (simple reaction time) and 0.95 for Bacopa (delayed word recall). These data confirm that neurocognitive enhancement from well-characterized nutraceuticals can produce cognition-enhancing effects of similar magnitude to those from pharmaceutical interventions. Future research should compare these effects directly in clinical trials. PMID:23043278
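The benchmarking metric in the record above is Cohen's d: the treatment-minus-placebo mean difference divided by the pooled standard deviation. A minimal sketch with hypothetical group statistics (not values from the cited trials):

```python
# Hedged sketch: Cohen's d for a placebo-controlled comparison. The group
# means, standard deviations and sample sizes below are illustrative only.

def cohens_d(mean_treat, mean_placebo, sd_treat, sd_placebo, n_treat, n_placebo):
    """Effect size: mean difference over the pooled standard deviation."""
    pooled_var = (((n_treat - 1) * sd_treat ** 2 +
                   (n_placebo - 1) * sd_placebo ** 2)
                  / (n_treat + n_placebo - 2))
    return (mean_treat - mean_placebo) / pooled_var ** 0.5

# Hypothetical delayed-recall scores for two groups of 30 subjects each
d = cohens_d(mean_treat=12.4, mean_placebo=10.1,
             sd_treat=2.5, sd_placebo=2.3,
             n_treat=30, n_placebo=30)
```

By the usual convention, d around 0.8 or above (as for the Ginseng and Bacopa outcomes quoted) counts as a large effect.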
Shielding benchmark problems, (2)
International Nuclear Information System (INIS)
Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.
1980-02-01
Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)
Burn-up TRIGA Mark II benchmark experiment
International Nuclear Information System (INIS)
Persic, A.; Ravnik, M.; Zagar, T.
1998-01-01
Different reactor codes are used for calculations of reactor parameters. The accuracy of the programs is tested through comparison of the calculated values with experimental results. Well-defined and accurately measured benchmarks are required. The experimental results of reactivity measurements, fuel element reactivity worth distribution and burn-up measurements are presented in this paper. The experiments were performed with a partly burnt reactor core. The experimental conditions were well defined, so that the results can be used as a burn-up benchmark test case for TRIGA Mark II reactor calculations. (author)
A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison
Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.
2014-06-01
Physical analyses of the LWR potential performances with regard to fuel utilization require an important part of the work to be dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. As a consequence, a 3D pin-by-pin model with a considerable number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at the equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile zones (depleted uranium). Furthermore, a tight-pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. In these conditions two main subjects will be discussed: the Monte Carlo variance calculation and the assessment of the two-energy-group diffusion operator for the core calculation.
DEFF Research Database (Denmark)
Hougaard, Jens Leth; Tvede, Mich
2002-01-01
Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection.
Comparison of the AMDAHL 470V/6 and the IBM 370/195 using benchmarks
Energy Technology Data Exchange (ETDEWEB)
Snider, D.R.; Midlock, J.L.; Hinds, A.R.; Engert, D.E.
1976-03-01
Six groups of jobs were run on the IBM 370/195 at the Applied Mathematics Division (AMD) of Argonne National Laboratory using the current production versions of OS/MVT 21.7 and ASP 3.1. The same jobs were then run on an AMDAHL 470V/6 at the AMDAHL manufacturing facilities in Sunnyvale, California, using the identical operating systems. Performances of the two machines are compared. Differences in the configurations were minimized. The memory size on each machine was the same, all software which had an impact on run times was the same, and the I/O configurations were as similar as possible. This allowed the comparison to be based on the relative performance of the two CPU's. As part of the studies preliminary to the acquisition of the IBM 195 in 1972, two of the groups of jobs had been run on a CDC 7600 by CDC personnel in Arden Hills, Minnesota, on an IBM 360/195 by IBM personnel in Poughkeepsie, New York, and on the AMD 360/50/75 production system in June, 1971. 6 figures, 9 tables.
A comparison of estimated and calculated effective porosity
Stephens, Daniel B.; Hsu, Kuo-Chin; Prieksat, Mark A.; Ankeny, Mark D.; Blandford, Neil; Roth, Tracy L.; Kelsey, James A.; Whitworth, Julia R.
Effective porosity in solute-transport analyses is usually estimated rather than calculated from tracer tests in the field or laboratory. Calculated values of effective porosity in the laboratory on three different textured samples were compared to estimates derived from particle-size distributions and soil-water characteristic curves. The agreement was poor and it seems that no clear relationships exist between effective porosity calculated from laboratory tracer tests and effective porosity estimated from particle-size distributions and soil-water characteristic curves. A field tracer test in a sand-and-gravel aquifer produced a calculated effective porosity of approximately 0.17. By comparison, estimates of effective porosity from textural data, moisture retention, and published values were approximately 50-90% greater than the field calibrated value. Thus, estimation of effective porosity for chemical transport is highly dependent on the chosen transport model and is best obtained by laboratory or field tracer tests.
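The tracer-test calculation referenced above back-calculates effective porosity as the ratio of the Darcy flux to the average linear (seepage) velocity observed for the tracer, n_e = q / v with q = K·i and v = L / t. A minimal sketch; the hydraulic conductivity, gradient, travel distance and breakthrough time below are illustrative, not the study's field data:

```python
# Hedged sketch: effective porosity from a field tracer test.
# n_e = Darcy flux / average linear velocity; all inputs are hypothetical.

def effective_porosity(K, gradient, distance, travel_time):
    """Return n_e = q / v for a tracer travelling `distance` in `travel_time`."""
    q = K * gradient            # Darcy flux (m/day)
    v = distance / travel_time  # tracer (seepage) velocity (m/day)
    return q / v

n_e = effective_porosity(K=20.0,        # hydraulic conductivity, m/day
                         gradient=0.005,  # hydraulic gradient
                         distance=10.0,   # well spacing, m
                         travel_time=17.0)  # breakthrough time, days
```

The illustrative inputs were chosen so the sketch reproduces an n_e near the 0.17 value quoted in the abstract; with real data the sensitivity of n_e to the chosen transport model is the point of the comparison.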
Buiter, S.; Schreurs, G.; Geomod2008 Team
2010-12-01
When numerical and analogue models are used to investigate the evolution of deformation processes in crust and lithosphere, they face specific challenges related to, among others, large contrasts in material properties, the heterogeneous character of continental lithosphere, the presence of a free surface, the occurrence of large deformations including viscous flow and offset on shear zones, and the observation that several deformation mechanisms may be active simultaneously. These pose specific demands on numerical software and laboratory models. By combining the two techniques, we can utilize the strengths of each individual method and test the model-independence of our results. We can perhaps even consider our findings to be more robust if we find similar-to-same results irrespective of the modeling method that was used. To assess the role of modeling method and to quantify the variability among models with identical setups, we have performed a direct comparison of results of 11 numerical codes and 15 analogue experiments. We present three experiments that describe shortening of brittle wedges and that resemble setups frequently used especially by analogue modelers. Our first experiment translates a non-accreting wedge with a stable surface slope. In agreement with critical wedge theory, all models maintain their surface slope and do not show internal deformation. This experiment serves as a reference that allows for testing against analytical solutions for taper angle, root-mean-square velocity and gravitational rate of work. The next two experiments investigate an unstable wedge, which deforms by inward translation of a mobile wall. The models accommodate shortening by formation of forward and backward shear zones. We compare surface slope, rate of dissipation of energy, root-mean-square velocity, and the location, dip angle and spacing of shear zones. All models show similar cross-sectional evolutions that demonstrate reproducibility to first order. However
Kim, Do Heon; Gil, Choong-Sup; Chang, Jonghwa; Lee, Yong-Deok
2005-05-01
The neutron absorption cross sections for 18 fission products evaluated within the framework of the KAERI (Korea Atomic Energy Research Institute)-BNL (Brookhaven National Laboratory) international collaboration have been compared with ENDF/B-VI.7. Also, the influence of the new evaluations on the isotopic composition calculations of the fission products has been estimated through the OECD/NEA burnup credit criticality benchmarks (Phase 1B) and the LWR/Pu recycling benchmarks. These calculations were performed by WIMSD-5B with the 69-group libraries prepared from three evaluated nuclear data libraries: ENDF/B-VI.7, ENDF/B-VI.8 including the new evaluations in the resonance region covering the thermal region, and the expected ENDF/B-VII including those in the upper resonance region up to 20 MeV. For Xe-131, the composition calculated with ENDF/B-VI.8 shows a maximum difference of 5.02% compared to ENDF/B-VI.7. However, the isotopic compositions of all the fission products calculated with the expected ENDF/B-VII show no differences when compared to ENDF/B-VI.7 for the thermal reactor benchmark cases.
Soil structure interaction calculations: a comparison of methods
International Nuclear Information System (INIS)
Wight, L.; Zaslawsky, M.
1976-01-01
Two approaches for calculating soil structure interaction (SSI) are compared: finite element and lumped mass. Results indicate that the calculations with the lumped mass method are generally conservative compared to those obtained by the finite element method. They also suggest that a closer agreement between the two sets of calculations is possible, depending on the use of frequency-dependent soil springs and dashpots in the lumped mass calculations. There is a total lack of suitable guidelines for implementing the lumped mass method of calculating SSI, which leads to the conclusion that the finite element method is generally superior for calculative purposes
DEFF Research Database (Denmark)
Agrell, Per J.; Bogetoft, Peter
2017-01-01
Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators...
DEFF Research Database (Denmark)
Lawson, Lartey; Nielsen, Kurt
2005-01-01
We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...
Comparisons of the MCNP criticality benchmark suite with ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0
International Nuclear Information System (INIS)
Kim, Do Heon; Gil, Choong-Sup; Kim, Jung-Do; Chang, Jonghwa
2003-01-01
A comparative study has been performed with the latest evaluated nuclear data libraries ENDF/B-VI.8, JENDL-3.3, and JEFF-3.0. The study has been conducted through benchmark calculations for 91 criticality problems with the libraries processed for MCNP4C. The calculation results have been compared with those of the ENDF60 library. The self-shielding effects of the unresolved-resonance (UR) probability tables have also been estimated for each library. The χ² differences between the MCNP results and experimental data were calculated for the libraries. (author)
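The χ² figure of merit used in such library comparisons is the sum of squared calculated-minus-experimental deviations, each weighted by the combined uncertainty. A minimal sketch for a handful of criticality cases; the k-eff values and uncertainties below are illustrative placeholders, not benchmark data:

```python
# Hedged sketch: chi-square between calculated and experimental k-eff values
# over a set of criticality benchmark cases. All numbers are hypothetical.

def chi_square(calc, expt, sigma):
    """Sum over cases of ((calculated - experimental) / sigma)**2."""
    return sum(((c - e) / s) ** 2 for c, e, s in zip(calc, expt, sigma))

calc  = [1.0012, 0.9987, 1.0005]   # code-calculated k-eff, hypothetical
expt  = [1.0000, 1.0000, 1.0000]   # experimental benchmark k-eff
sigma = [0.0010, 0.0010, 0.0010]   # combined standard deviations
chi2 = chi_square(calc, expt, sigma)
```

A χ² near the number of cases indicates the library reproduces the benchmarks within their uncertainties; a much larger value flags systematic bias.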
International Nuclear Information System (INIS)
LeSage, L.G.; McKnight, R.D.; Wade, D.C.; Freese, K.E.; Collins, P.J.
1980-08-01
The results of an international comparison calculation of a large (1250 MWe) LMFBR benchmark model are presented and discussed. Eight reactor configurations were calculated. Parameters included in the comparison were: eigenvalue, k∞, neutron balance data, breeding reaction rate ratios, reactivity worths, central control rod worth, regional sodium void reactivity, core Doppler coefficient and effective delayed neutron fraction. Ten countries participated in the comparison, and sixteen solutions were contributed. The discussion focuses on the variation in parameter values, the degree of consistency among the various parameters and solutions, and the identification of unexpected results. The results are displayed and discussed both by individual participants and by groupings of participants
International Nuclear Information System (INIS)
Kuosmanen, Timo; Saastamoinen, Antti; Sipiläinen, Timo
2013-01-01
Electricity distribution is a natural local monopoly. In many countries, the regulators of this sector apply frontier methods such as data envelopment analysis (DEA) or stochastic frontier analysis (SFA) to estimate the efficient cost of operation. In Finland, a new StoNED method was adopted in 2012. This paper compares DEA, SFA and StoNED in the context of regulating electricity distribution. Using data from Finland, we compare the impacts of methodological choices on cost efficiency estimates and acceptable cost. While the efficiency estimates are highly correlated, the cost targets reveal major differences. In addition, we examine performance of the methods by Monte Carlo simulations. We calibrate the data generation process (DGP) to closely match the empirical data and the model specification of the regulator. We find that the StoNED estimator yields a root mean squared error (RMSE) of 4% with the sample size 100. Precision improves as the sample size increases. The DEA estimator yields an RMSE of approximately 10%, but performance deteriorates as the sample size increases. The SFA estimator has an RMSE of 144%. The poor performance of SFA is due to the wrong functional form and multicollinearity. - Highlights: • We compare DEA, SFA and StoNED methods in the context of regulation of electricity distribution. • Both empirical comparisons and Monte Carlo simulations are presented. • Choice of benchmarking method has a significant economic impact on the regulatory outcomes. • StoNED yields the most precise results in the Monte Carlo simulations. • Five lessons concerning heterogeneity, noise, frontier, simulations, and implementation
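The RMSE criterion used above to rank DEA, SFA and StoNED in the Monte Carlo experiments is simply the root of the mean squared deviation between estimated and true efficiency scores. A minimal sketch with an illustrative sample (not the Finnish data or the paper's DGP):

```python
# Hedged sketch: root mean squared error between estimated and true
# efficiency scores, as used to compare frontier estimators. The score
# vectors below are illustrative only.

def rmse(estimates, true_values):
    """RMSE over paired estimated/true values."""
    n = len(estimates)
    return (sum((e - t) ** 2 for e, t in zip(estimates, true_values)) / n) ** 0.5

est  = [0.95, 0.88, 0.91, 0.99]  # estimated efficiencies, hypothetical
true = [0.93, 0.90, 0.94, 0.97]  # DGP "true" efficiencies, hypothetical
err = rmse(est, true)
```

In the study's simulations this statistic, computed over many replications, is what separates the 4% (StoNED) from the 10% (DEA) and 144% (SFA) results.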
Full sphere hydrodynamic and dynamo benchmarks
Marti, P.
2014-01-26
Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no-slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.
WWER-1000 Burnup Credit Benchmark (CB5)
International Nuclear Information System (INIS)
Manolova, M.A.
2002-01-01
In the paper the specification of the WWER-1000 Burnup Credit Benchmark's first phase (depletion calculations) is given. The second phase, criticality calculations for the WWER-1000 fuel pin cell, will be given after the evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field (Author)
Energy Technology Data Exchange (ETDEWEB)
Gerhard Strydom; Su-Jong Yoon
2014-04-01
Computational Fluid Dynamics (CFD) evaluation of homogeneous and heterogeneous fuel models was performed as part of the Phase I calculations of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on High Temperature Reactor (HTR) Uncertainties in Modeling (UAM). This study was focused on the nominal localized stand-alone fuel thermal response, as defined in Ex. I-3 and I-4 of the HTR UAM. The aim of the stand-alone thermal unit-cell simulation is to isolate the effect of material and boundary input uncertainties on a very simplified problem, before propagation of these uncertainties is performed in the subsequent coupled neutronics/thermal fluids phases of the benchmark. In many of the previous studies for high temperature gas cooled reactors, the volume-averaged homogeneous mixture model of a single fuel compact has been applied. In the homogeneous model, the Tristructural Isotropic (TRISO) fuel particles in the fuel compact were not modeled directly and an effective thermal conductivity was employed for the thermo-physical properties of the fuel compact. In contrast, in the heterogeneous model, the uranium carbide (UCO), inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers of the TRISO fuel particles are explicitly modeled. The fuel compact is modeled as a heterogeneous mixture of TRISO fuel kernels embedded in H-451 matrix graphite. In this study, steady-state and transient CFD simulations were performed with both homogeneous and heterogeneous models to compare the thermal characteristics. The nominal values of the input parameters are used for this CFD analysis. In a future study, the effects of input uncertainties in the material properties and boundary parameters will be investigated and reported.
A PWR Thorium Pin Cell Burnup Benchmark
Energy Technology Data Exchange (ETDEWEB)
Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.
2000-05-01
As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium-fueled benchmark comparison was made in this study between the state-of-the-art codes MOCUP (MCNP4B + ORIGEN2) and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalues and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code-to-code differences are analyzed and discussed.
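The two comparison metrics quoted above (maximum difference and average absolute difference of the eigenvalue over burnup) can be sketched as follows. The k-eff histories below are invented placeholders, not the benchmark's actual MOCUP or CASMO-4 values.

```python
import numpy as np

# Invented k-eff histories for two codes over burnup (GWd/tHM); the real
# benchmark values are in the paper and are not reproduced here.
burnup  = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
k_code1 = np.array([1.20, 1.15, 1.10, 1.05, 1.00, 0.95])
k_code2 = np.array([1.21, 1.16, 1.10, 1.04, 1.00, 0.96])

# Relative eigenvalue difference dk/k in percent at each burnup step
rel_diff_pct = 100.0 * (k_code1 - k_code2) / k_code2

print(f"max |dk/k|  = {np.max(np.abs(rel_diff_pct)):.2f} %")
print(f"mean |dk/k| = {np.mean(np.abs(rel_diff_pct)):.2f} %")
```

With these placeholder values the maximum difference is about 1% and the mean about 0.6%, the same kind of summary the study reports.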
Comparison of linear microinstability calculations of varying input realism
International Nuclear Information System (INIS)
Rewoldt, G.; Kinsey, J.E.
2004-01-01
The effect of varying 'input realism' or varying completeness of the input data for linear microinstability calculations, in particular on the critical value of the ion temperature gradient for the ion temperature gradient mode, is investigated using gyrokinetic and gyrofluid approaches. The calculations show that varying input realism can have a substantial quantitative effect on the results
Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui
2004-01-01
A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, Intel Math Kernel Library (MKL), GOTO numerical library, and AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as the Intel Fortran compiler (ifc/efc) 7.1 and the PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about a 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when the original unmodified optimization options enclosed in the software are utilized. Nevertheless, if extensive compiler tuning options are used, the speed can be further accelerated by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction set (SSE2) is also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler performs the better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and execution in the CL2 mode is a further 2.6% faster than in the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resulting performance impact suggests that the IA64 and AMD64 architectures can deliver significantly higher throughput than IA32, which is consistent with the SpecFPrate2000 benchmarks.
Comparison of MCNPX and Albedo method in criticality calculation
International Nuclear Information System (INIS)
Cunha, Victor L. Lassance; Rebello, Wilson F.; Cabral, Ronaldo G.; Melo, Fernando da S.; Silva, Ademir X. da
2009-01-01
This study aims to conduct a computer simulation to calculate the reactivity of a homogeneous reactor and compare the results with calculations made by the albedo method. The simulation was developed using MCNPX. The study compared the results calculated for a hypothetical reactor by the albedo method with four energy groups against those obtained by the MCNPX simulation. The reactor design is spherical and homogeneous with a reflector of finite thickness. The values obtained for the effective neutron multiplication factor (k eff) were compared. Different situations were simulated in order to obtain results closer to the compared method and to reality. Good consistency was observed between the calculated results. (author)
Comparison of different dose calculation methods for irregular photon fields
International Nuclear Information System (INIS)
Zakaria, G.A.; Schuette, W.
2000-01-01
In this work, 4 calculation methods (the Wrede method, the Clarkson method of sector integration, the beam-zone method of Quast and the pencil-beam method of Ahnesjoe) are introduced to calculate point doses in different irregular photon fields. The calculations cover a typical mantle field, an inverted Y-field and different blocked fields for 4 and 10 MV photon energies. The results are compared to those of measurements in a water phantom. The Clarkson and the pencil-beam methods proved to be of comparable accuracy; both are distinguished by minimal deviations and are applied in our clinical routine work. The Wrede and beam-zone methods deliver useful results on the central beam axis but show larger deviations for points away from the central axis. (orig.) [de
International Nuclear Information System (INIS)
Sekimoto, H.
1987-01-01
The kerma heat production density, tritium production density, and dose in a lithium-fluoride pile with a deuterium-tritium neutron source were calculated with a data processing code, UFO, from the pulse height distribution of a miniature NE213 neutron spectrometer, and compared with the values calculated with a Monte Carlo code, MORSE-CV. The UFO and MORSE-CV values agreed within the statistical error (less than 6%) of the MORSE-CV calculations, except for the outermost point in the pile. The MORSE-CV values were slightly smaller than the UFO values in almost all cases, and this tendency increased with increasing distance from the neutron source
Comparison between ASHRAE and ISO thermal transmittance calculation methods
DEFF Research Database (Denmark)
Blanusa, Petar; Goss, William P.; Roth, Hartwig
2007-01-01
The intent of this paper is to describe and compare the two different two-dimensional frame/spacer heat transfer calculation methodologies used in North America (FRAME [EEL. The FRAMEplus Toolkit for Heat Transfer Assessment of Building Components, Version 3.0, Enermodal Engineering, Kichener...). The fenestration thermal transmittance calculation analyses presented in this paper show that small differences exist between the calculated thermal transmittance values produced by the ISO and ASHRAE methods. The results also show that the overall thermal transmittance difference between the two methodologies decreases as the total window area (glazing plus frame) increases. Thus, the resulting difference in thermal transmittance values for the two methods is negligible for larger windows. This paper also shows algebraically that the differences between the ISO and ASHRAE methods turn out to be due to the way...
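The total window transmittance both methodologies aim at can be sketched with the generic ISO-style area-weighted combination of glazing, frame, and edge (spacer) terms. All numbers below are invented for illustration; they are not taken from the paper, but they show why the frame/edge contribution, and hence any method difference, matters relatively less for larger windows.

```python
# ISO-style total window U-value (a sketch, assumed inputs):
# U_w = (A_g*U_g + A_f*U_f + l_g*Psi) / (A_g + A_f)

def window_u_value(a_glass, u_glass, a_frame, u_frame, edge_length, psi):
    """Area-weighted thermal transmittance with an edge (spacer) term, W/(m^2 K)."""
    return (a_glass * u_glass + a_frame * u_frame + edge_length * psi) / (a_glass + a_frame)

# Same glazing, frame, and spacer properties; only the window size changes.
small = window_u_value(a_glass=0.8, u_glass=1.1, a_frame=0.4, u_frame=1.6,
                       edge_length=3.8, psi=0.06)
large = window_u_value(a_glass=3.2, u_glass=1.1, a_frame=0.8, u_frame=1.6,
                       edge_length=7.6, psi=0.06)
print(f"small window: {small:.3f} W/m2K, large window: {large:.3f} W/m2K")
```

The larger window's U-value sits closer to the glazing-only value, so a small disagreement in the frame/edge treatment between two methods shrinks with window area, as the abstract reports.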
International Nuclear Information System (INIS)
Dos Santos, Adimir; Siqueira, Paulo de Tarso D.; Andrade e Silva, Graciete Simões; Grant, Carlos; Tarazaga, Ariel E.; Barberis, Claudia
2013-01-01
In 2008 the Atomic Energy National Commission (CNEA) of Argentina and the Brazilian Institute of Energy and Nuclear Research (IPEN), under the frame of the Nuclear Energy Argentine Brazilian Agreement (COBEN), among many others, included the project “Validation and Verification of Calculation Methods used for Research and Experimental Reactors”. At that time, it was established that the validation was to be performed with models implemented in the deterministic codes HUEMUL and PUMA (cell and reactor codes) developed by CNEA and those implemented in MCNP by CNEA and IPEN. The necessary data for these validations correspond to theoretical-experimental reference cases in the research reactor IPEN/MB-01 located in São Paulo, Brazil. The staff of the Reactor and Nuclear Power Studies group (SERC) of CNEA, on the Argentine side, performed calculations with deterministic models (HUEMUL-PUMA) and probabilistic methods (MCNP), modeling a great number of physical situations of the reactor which had previously been studied and modeled by members of the Center of Nuclear Engineering of IPEN, whose results were extensively provided to CNEA. In this paper, comparisons of calculated and experimental results are shown for critical configurations, temperature coefficients, kinetic parameters and spatial distributions of fission rates evaluated with probabilistic models. (author)
Comparison of an exact and moments calculation of reliability
International Nuclear Information System (INIS)
Hockenbury, R.W.; Yeater, M.L.; Hawkins, J.M.; Wilkinson, J.W.
1976-01-01
Present methods for calculating the reliability of reactor systems usually assume constant failure rates for individual components of the system. In principle, if the uncertainty in component failure rates can be expressed in terms of a probability density function, then the probability density function for the overall system reliability can be obtained. The system reliability distribution can then be used to calculate confidence bounds, for example. The probability distribution for the system reliability can also be found by an approximate method, namely the method of moments. In order to compare the exact and approximate results, a simple two element series system is modeled
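The two-element series system described above can be sketched in a few lines. The lognormal uncertainty model, rates, and mission time below are assumptions for illustration, not the paper's data: the "exact" route samples the failure-rate distributions and evaluates the reliability for each sample, while the simple moment-based route propagates only the mean rates.

```python
import numpy as np

rng = np.random.default_rng(0)
t = 1000.0  # mission time in hours (assumed)

# Uncertain constant failure rates, modeled here as lognormal (an assumption)
lam1 = rng.lognormal(mean=np.log(1e-4), sigma=0.3, size=100_000)
lam2 = rng.lognormal(mean=np.log(2e-4), sigma=0.3, size=100_000)

# Series system: R(t) = exp(-(lam1 + lam2) * t), evaluated per sampled rate pair
R = np.exp(-(lam1 + lam2) * t)

# Moment-style approximation: propagate only the mean failure rates
R_moments = np.exp(-(lam1.mean() + lam2.mean()) * t)

print(f"sampled mean R(t)  = {R.mean():.4f}")
print(f"moment approx R(t) = {R_moments:.4f}")
```

Because exp(-x) is convex, the sampled mean reliability is slightly above the value obtained from the mean rates alone, which is exactly the kind of discrepancy a moments-versus-exact comparison quantifies.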
Large break LOCA uncertainty evaluation and comparison with conservative calculation
International Nuclear Information System (INIS)
Glaeser, H.G.
2004-01-01
The first formulation of the USA Code of Federal Regulations (CFR) 10CFR50 with applicable sections specific to NPP licensing requirements was released in 1976. Over a decade later, 10CFR 50.46 allowed the use of best-estimate (BE) codes instead of conservative code models, but uncertainties have to be identified and quantified. Guidelines were released that described the applicable interpretations developed over the intervening years. Other countries established similar conservative procedures and acceptance criteria. Because conservative methods were used to calculate the peak values of key parameters, such as peak clad temperature (PCT), it was always acknowledged that a large margin existed between the 'conservative' calculated value and the 'true' value. Besides the USA, regulation in other countries, Germany for example, allowed the state of science and technology to be applied in licensing, i.e. increasing experimental evidence and progress in code development over time could be used. There was no requirement to apply a pure evaluation methodology with licensed assumptions and frozen codes. The thermal-hydraulic system codes became more and more best-estimate codes based on comprehensive validation. This development was and is possible because the rules and guidelines provide the necessary latitude to consider further development of safety technology. Best-estimate codes are allowed to be used in licensing in combination with conservative initial and boundary conditions; however, uncertainty quantification is not required. Since some of the initial and boundary conditions are more conservative than those used internationally (e.g. 106% reactor power instead of 102%, a single failure plus a non-availability due to preventive maintenance is assumed, etc.), it is claimed that the uncertainties of code models are covered. Since many utilities apply for power increases, calculation results come closer to some licensing criteria. The situation in German licensing
Comparison of electrical conductivity calculation methods for natural waters
McCleskey, R. Blaine; Nordstrom, D. Kirk; Ryan, Joseph N.
2012-01-01
The capability of eleven methods to calculate the electrical conductivity of a wide range of natural waters from their chemical composition was investigated. A brief summary of each method is presented including equations to calculate the conductivities of individual ions, the ions incorporated, and the method's limitations. The ability of each method to reliably predict the conductivity depends on the ions included, effective accounting of ion pairing, and the accuracy of the equation used to estimate the ionic conductivities. The performances of the methods were evaluated by calculating the conductivity of 33 environmentally important electrolyte solutions, 41 U.S. Geological Survey standard reference water samples, and 1593 natural water samples. The natural waters tested include acid mine waters, geothermal waters, seawater, dilute mountain waters, and river water impacted by municipal waste water. The three most recent conductivity methods predict the conductivity of natural waters better than other methods. Two of the recent methods can be used to reliably calculate the conductivity for samples with pH values greater than about 3 and temperatures between 0 and 40°C. One method is applicable to a variety of natural water types with a range of pH from 1 to 10, temperature from 0 to 95°C, and ionic strength up to 1 m.
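All of the compared methods share the same basic structure: the specific conductance is a sum of individual ionic conductivities weighted by concentration, with corrections for ionic strength and ion pairing. The minimal uncorrected form can be sketched as follows; the limiting conductivity values are approximate 25 °C literature values and the example water is invented.

```python
# Approximate limiting molar ionic conductivities at 25 C, S cm^2/mol
# (illustrative values; real methods correct these for ionic strength
# and ion pairing, which this sketch deliberately omits).
LAMBDA_25C = {"Na+": 50.1, "Cl-": 76.3, "K+": 73.5, "SO4--": 160.0}

def conductivity_uS_cm(molarities):
    """Estimate specific conductance in microsiemens/cm from mol/L values.
    1 S cm^2/mol * 1 mol/L = 1e3 uS/cm, since 1 mol/L = 1e-3 mol/cm^3."""
    return sum(LAMBDA_25C[ion] * c for ion, c in molarities.items()) * 1e3

# Dilute 0.001 mol/L NaCl; the limiting-conductivity estimate is a few
# percent above the measured ~124 uS/cm because interactions are ignored.
nacl = conductivity_uS_cm({"Na+": 0.001, "Cl-": 0.001})
print(f"0.001 M NaCl: {nacl:.1f} uS/cm")
```

The differences among the eleven methods amount largely to which ions are tabulated and how the concentration dependence of the ionic conductivities is handled.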
Comparison of Calculation Models for Bucket Foundation in Sand
DEFF Research Database (Denmark)
Vaitkunaite, Evelina; Molina, Salvador Devant; Ibsen, Lars Bo
The possibility of fast and rather precise preliminary offshore foundation design is desirable. The ultimate limit state of the bucket foundation is investigated using three different geotechnical calculation tools: an analytical method [Ibsen 2001], LimitState:GEO and Plaxis 3D. The study has focused...
Comparison for some measured and calculated nuclear Parameters ...
African Journals Online (AJOL)
The lattice code Winfrith Improved Multigroup Scheme (WIMS) was used to generate group constants for four energy groups and the diffusion code CITATION was used for a three-dimensional core calculations in order to compute neutron fluxes associated with various regions of the core. The computed thermal neutron flux ...
International Nuclear Information System (INIS)
Fischer, K.; Schall, M.; Wolf, L.
1993-01-01
The present final report comprises the major results of Phase II of the CEC thermal-hydraulic benchmark exercise on the Fiploc verification experiment F2 in the Battelle model containment, experimental phases 2, 3 and 4, which was organized and sponsored by the Commission of the European Communities for the purpose of furthering the understanding and analysis of long-term thermal-hydraulic phenomena inside containments during and after severe core accidents. This benchmark exercise received high European attention, with eight organizations from six countries participating with eight computer codes during phase 2. Altogether, 18 results from computer code runs were supplied by the participants and constitute the basis for the comparisons with the experimental data contained in this publication. This reflects both the high technical interest in, as well as the complexity of, this CEC exercise. Major comparison results between computations and data are reported for all important quantities relevant for containment analyses during long-term transients. These comparisons comprise pressure, steam and air content, velocities and their directions, heat transfer coefficients and saturation ratios. Agreements and disagreements are discussed for each participating code/institution, conclusions drawn and recommendations provided. The phase 2 CEC benchmark exercise provided an up-to-date state-of-the-art status review of the thermal-hydraulic capabilities of present computer codes for containment analyses. This exercise has shown that all of the participating codes can simulate the important global features of the experiment correctly, such as temperature stratification, pressure and leakage, heat transfer to structures, relative humidity, and collection of sump water. Several weaknesses of individual codes were identified, and this may help to promote their development. As a general conclusion it may be said that while there is still a wide area of necessary extensions and improvements, the
Determination of hydrogen cluster velocities and comparison with numerical calculations
International Nuclear Information System (INIS)
Täschner, A.; Köhler, E.; Ortjohann, H.-W.; Khoukaz, A.
2013-01-01
The use of powerful hydrogen cluster jet targets in storage ring experiments led to the need for precise data on the mean cluster velocity as a function of the stagnation temperature and pressure for the determination of the volume density of the target beams. For this purpose a large data set of hydrogen cluster velocity distributions and mean velocities was measured at a high density hydrogen cluster jet target using a trumpet-shaped nozzle. The measurements were performed at pressures above and below the critical pressure and for a broad range of temperatures relevant for target operation, e.g., at storage ring experiments. The experimental method used, which allows for the velocity measurement of single clusters using a time-of-flight technique, is described. Since this method is rather time-consuming and these measurements typically interfere negatively with storage ring experiments, a method for a precise calculation of these mean velocities was needed. For this, the determined mean cluster velocities are compared with model calculations based on an isentropic one-dimensional van der Waals gas. Based on the obtained data and the presented numerical calculations, a new method has been developed which allows the mean cluster velocities to be predicted with an accuracy of about 5%. For this, two cut-off parameters defining positions inside the nozzle are introduced, which can be determined for a given nozzle by only two velocity measurements
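The paper's model is an isentropic one-dimensional van der Waals gas; as a much simpler orientation, the ideal-gas limit of such an expansion gives a closed-form maximum velocity, v_max = sqrt(2·cp·T0). The sketch below uses this ideal-gas stand-in with an approximate room-temperature cp for hydrogen (both are assumptions; at cryogenic stagnation temperatures cp and real-gas effects differ appreciably, which is why the van der Waals treatment is needed).

```python
import math

def v_max_ideal(T0_kelvin, cp=14_300.0):
    """Ideal-gas maximum speed of an isentropic 1-D expansion from
    stagnation temperature T0; cp is an approximate value for H2 in J/(kg K)."""
    return math.sqrt(2.0 * cp * T0_kelvin)

# Illustrative stagnation temperatures in the range typical for cluster targets
for T0 in (25.0, 35.0, 50.0):
    print(f"T0 = {T0:5.1f} K  ->  v_max ~ {v_max_ideal(T0):6.0f} m/s")
```

This reproduces the right order of magnitude (several hundred m/s); the cut-off parameters introduced in the paper are what turn such a model into a 5%-accurate prediction for a real nozzle.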
Comparison of matrix exponential methods for fuel burnup calculations
International Nuclear Information System (INIS)
Oh, Hyung Suk; Yang, Won Sik
1999-01-01
Series expansion methods to compute the exponential of a matrix have been compared by applying them to fuel depletion calculations. Specifically, Taylor, Pade, Chebyshev, and rational Chebyshev approximations have been investigated by approximating the exponentials of burnup matrices by truncated series of each method with the scaling and squaring algorithm. The accuracy and efficiency of these methods have been tested by performing various numerical tests using one thermal reactor and two fast reactor depletion problems. The results indicate that all four series methods are accurate enough to be used for fuel depletion calculations, although the rational Chebyshev approximation is relatively less accurate. They also show that the rational approximations are more efficient than the polynomial approximations. Considering the computational accuracy and efficiency, the Pade approximation appears to be better than the other methods. Its accuracy is better than the rational Chebyshev approximation, while being comparable to the polynomial approximations. On the other hand, its efficiency is better than the polynomial approximations and is similar to the rational Chebyshev approximation. In particular, for fast reactor depletion calculations, it is faster than the polynomial approximations by a factor of ∼ 1.7. (author). 11 refs., 4 figs., 2 tabs
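The scaling-and-squaring idea common to all the compared series methods can be sketched with the simplest of them, a truncated Taylor series: scale the matrix down by a power of two so the series converges quickly, sum a few terms, then square repeatedly. The 2x2 "depletion-style" matrix below is invented for illustration, and the reference is computed by eigendecomposition rather than a production routine (libraries such as SciPy's expm use the Padé variant the paper favors).

```python
import numpy as np

def expm_taylor_ss(A, terms=15):
    """exp(A) via truncated Taylor series with scaling and squaring:
    exp(A) = (exp(A / 2**s)) ** (2**s), so the series converges fast."""
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1), 1.0)))))
    B = A / (2.0 ** s)
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms + 1):      # partial sum of the series for exp(B)
        term = term @ B / k
        result = result + term
    for _ in range(s):                 # undo the scaling by repeated squaring
        result = result @ result
    return result

# Invented 2x2 matrix with decay and transfer terms, loosely burnup-like
A = np.array([[-1.0, 0.0],
              [0.5, -2.0]])

# Reference exponential via eigendecomposition (A is diagonalizable here)
w, V = np.linalg.eig(A)
ref = V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

print("max abs error:", np.max(np.abs(expm_taylor_ss(A) - ref)))
```

Real burnup matrices are large, sparse, and stiff, which is where the efficiency differences between the polynomial and rational approximations quoted in the abstract come from.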
Benchmarking clinical photography services in the NHS.
Arbon, Giles
2015-01-01
Benchmarking is used by services across the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.
Comparison between UMSICHT water hammer experiments and calculations using RELAP5/MOD3.3
International Nuclear Information System (INIS)
Messing, Ralf
2008-01-01
water hammer benchmark using RELAP5. In this experiment the water has a temperature T = 147 °C and a pressure p = 10 bar; the steady flow velocity is around v ≈ 4 m/s before valve closure. In contrast to comparisons at lower temperatures, where good agreement could be achieved between the UMSICHT experiments and RELAP5 simulations, the amplitude of the first pressure peak was underestimated by RELAP5 for the hot-water experiment. Barten et al. attributed the deviations to deficiencies of the condensation and flashing models in RELAP5. Fluid-structure interactions, degassing of possibly dissolved air and unsteady friction had not been considered. The scope of this paper is to reevaluate the above-mentioned experiment 329 of the UMSICHT water hammer benchmark, including degassing of dissolved air. For this purpose the UMSICHT test facility and the boundary conditions of experiment 329 are briefly described, results of the simulations using RELAP5/Mod3.3 are presented and conclusions are given in the following sections. (orig.)
Comparison of optimization methods for electronic-structure calculations
International Nuclear Information System (INIS)
Garner, J.; Das, S.G.; Min, B.I.; Woodward, C.; Benedek, R.
1989-01-01
The performance of several local-optimization methods for calculating electronic structure is compared. The fictitious first-order equation of motion proposed by Williams and Soler is integrated numerically by three procedures: simple finite-difference integration, approximate analytical integration (the Williams-Soler algorithm), and the Born perturbation series. These techniques are applied to a model problem for which exact solutions are known, the Mathieu equation. The Williams-Soler algorithm and the second Born approximation converge equally rapidly, but the former involves considerably less computational effort and gives a more accurate converged solution. Application of the method of conjugate gradients to the Mathieu equation is discussed
Comparison between calculation methods of dose rates in gynecologic brachytherapy
International Nuclear Information System (INIS)
Vianello, E.A.; Biaggio, M.F.; D R, M.F.; Almeida, C.E. de
1998-01-01
In radiation treatments of gynecologic tumors it is necessary to evaluate the quality of the results obtained by different calculation methods for the dose rates at the points of clinical interest (point A, rectal and vesical points). The present work compares the results obtained by two methods: the three-dimensional Manual Calibration Method (MCM) (Vianello E., et al. 1998), using orthogonal radiographs for each patient under treatment, and the Theraplan/T P-11 planning system (Theratronics International Limited 1990), the latter verified experimentally (Vianello et al. 1996). The results show that MCM can be used in physical-clinical practice with a percentage difference comparable to that of the computerized programs. (Author)
A control absorber rod in Phenix. Comparison of calculated and measured worths
International Nuclear Information System (INIS)
Humbert, G.; Petiot, R.; Coulon, P.
1982-09-01
For SPX1 design purposes, a so-called ''SAC'' (complementary shutdown system) rod has been set up at the center of the Phenix core. This gave the opportunity to check design rod worth calculations on this simple configuration. The calculation methods are briefly described and an experiment-to-calculation comparison is made. The type of agreement obtained is coherent with the uncertainty now applied to design rod worth calculations
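A static rod worth of the kind compared above is simply the reactivity difference between the rod-withdrawn and rod-inserted states. The k-eff values and effective delayed-neutron fraction below are invented for illustration, not Phenix data.

```python
def rod_worth_pcm(k_out, k_in):
    """Static rod worth as the difference of rho = (k - 1)/k
    between rod-withdrawn and rod-inserted states, in pcm."""
    rho_out = (k_out - 1.0) / k_out
    rho_in = (k_in - 1.0) / k_in
    return (rho_out - rho_in) * 1e5

# Invented eigenvalues for the two core states
worth = rod_worth_pcm(k_out=1.00350, k_in=0.99120)

beta_eff = 350e-5   # assumed effective delayed-neutron fraction for a fast core
print(f"rod worth = {worth:.0f} pcm = {worth / (beta_eff * 1e5):.2f} $")
```

Experiment-to-calculation comparisons of this quantity are what set the design uncertainty margin mentioned in the abstract.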
Calculation and Comparison of Turbulence Attenuation by Different Methods
Directory of Open Access Journals (Sweden)
L. Dordova
2010-04-01
This paper is focused on the effect of turbulence in atmospheric transmission media. A short introduction to the origin of the turbulent atmosphere is given. Two classical methods for the calculation of turbulence attenuation are mentioned and briefly described in the paper, the Rytov approximation and Andrews's method, and a new technique designed by the authors of this paper, the method of available power, is also presented in short. We have run simulations in Matlab to compare the results of turbulence attenuation given by the classical methods and by the new technique, the method of available power. These calculations were performed for communication wavelengths of 850 nm and 1550 nm. The distance between the optical transmitter and receiver of the horizontal links was set to values ranging from 0 m to 2500 m. We have taken into account homogeneous turbulence with low (10^-16 m^-2/3), medium (10^-15 m^-2/3) and high (10^-14 m^-2/3) values of the refractive index structure parameter Cn2.
Fridkin, Scott K.; Lawton, Rachel; Edwards, Jonathan R.; Tenover, Fred C.; McGowan, John E.; Gaynes, Robert P.
2002-01-01
To determine if local monitoring data on vancomycin use directed quality improvement and decreased vancomycin use or vancomycin-resistant enterococci (VRE), we analyzed data from 50 intensive-care units (ICUs) at 20 U.S. hospitals reporting data on antimicrobial-resistant organisms and antimicrobial agent use. We compared local data with national benchmark data (aggregated from all study hospitals). After data were adjusted for changes in prevalence of methicillin-resistant Staphylococcus aur...
Comparison of two optical biometers in intraocular lens power calculation
Directory of Open Access Journals (Sweden)
Sheng Hui
2014-01-01
Aims: To compare the consistency and accuracy of ocular biometric measurements and intraocular lens (IOL) power calculations using new optical low-coherence reflectometry and partial coherence interferometry. Subjects and Methods: The clinical data of 122 eyes of 72 cataract patients were analyzed retrospectively. All patients were measured with a new optical low-coherence reflectometry system, the LENSTAR LS 900 (Haag-Streit AG)/ALLEGRO BioGraph biometer (WaveLight AG), and with partial coherence interferometry (IOLMaster V.5.4, Carl Zeiss Meditec AG) before phacoemulsification and IOL implantation. Repeated measurements, as recommended by the manufacturers, were performed by the same examiner with both devices. Using the parameters of axial length (AL), corneal refractive power (K1 and K2), and anterior chamber depth (ACD), power calculations for the AcrySof SA60AT IOL were compared between the two devices using five formulas. The target was emmetropia. Statistical analysis was performed using Statistical Package for the Social Sciences software (SPSS 13.0) with the t-test as well as linear regression. A P value < 0.05 was considered statistically significant. Results: The mean age of the 72 cataract patients was 64.6 ± 13.4 years [standard deviation]. Of the biometry parameters, the K1, K2 and [K1 + K2]/2 values were significantly different between the two devices (mean difference, K1: −0.05 ± 0.21 D; K2: −0.12 ± 0.20 D; [K1 + K2]/2: −0.08 ± 0.14 D; P < 0.05). There was no statistically significant difference in AL and ACD between the two devices. The correlations of AL, K1, K2, and ACD between the two devices were high. The mean differences in IOL power calculations using the five formulas were not statistically significant between the two devices. Conclusions: New optical low-coherence reflectometry provides measurements that correlate well with those of partial coherence interferometry, thus it is a precise device that can be used for the
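Why a statistically significant 0.08 D keratometry difference still produces clinically insignificant IOL power differences can be illustrated with the classical SRK regression formula, P = A - 2.5*AL - 0.9*K. This formula and the eye measurements below are illustrative assumptions; the abstract does not name the five formulas actually used.

```python
def srk_iol_power(a_constant, axial_length_mm, mean_k_diopters):
    """Classical SRK regression formula for IOL power (diopters)."""
    return a_constant - 2.5 * axial_length_mm - 0.9 * mean_k_diopters

# Same illustrative eye measured by two devices whose mean K differs by 0.08 D
p_device1 = srk_iol_power(a_constant=118.4, axial_length_mm=23.5, mean_k_diopters=43.50)
p_device2 = srk_iol_power(a_constant=118.4, axial_length_mm=23.5, mean_k_diopters=43.58)
print(f"IOL power: {p_device1:.2f} D vs {p_device2:.2f} D "
      f"(difference {p_device1 - p_device2:+.3f} D)")
```

The 0.08 D K difference maps to roughly a 0.07 D IOL power difference, far below the 0.5 D steps in which IOLs are typically manufactured, consistent with the study's conclusion.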
2007-10-01
… See Figure 22 for a comparison of measured waves, linear waves, and non-linear Stokes waves. Looking at the selected 16 runs from the trough-to-peak … Figure 23 for the benchmark data set, the relation of obtained frequency versus desired frequency is almost completely linear. The slight variation at
Energy Technology Data Exchange (ETDEWEB)
Gissi, Andrea [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy); Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari (Italy); Lombardo, Anna; Roncaglioni, Alessandra [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy); Gadaleta, Domenico [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy); Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari (Italy); Mangiatordi, Giuseppe Felice; Nicolotti, Orazio [Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari (Italy); Benfenati, Emilio, E-mail: emilio.benfenati@marionegri.it [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy)
2015-02-15
The bioconcentration factor (BCF) is an important bioaccumulation hazard assessment metric in many regulatory contexts. Its assessment is required by the REACH regulation (Registration, Evaluation, Authorization and Restriction of Chemicals) and by CLP (Classification, Labeling and Packaging). We challenged nine well-known and widely used BCF QSAR models against 851 compounds stored in an ad-hoc created database. The goodness of the regression analysis was assessed by considering the determination coefficient (R{sup 2}) and the Root Mean Square Error (RMSE); Cooper's statistics and the Matthews Correlation Coefficient (MCC) were calculated for all the thresholds relevant for regulatory purposes (i.e. 100 L/kg for Chemical Safety Assessment; 500 L/kg for Classification and Labeling; 2000 and 5000 L/kg for Persistent, Bioaccumulative and Toxic (PBT) and very Persistent, very Bioaccumulative (vPvB) assessment) to assess the classification, with particular attention to the models' ability to control the occurrence of false negatives. As a first step, statistical analysis was performed for the predictions of the entire dataset; R{sup 2}>0.70 was obtained using the CORAL, T.E.S.T. and EPISuite Arnot–Gobas models. As classifiers, ACD and log P-based equations were the best in terms of sensitivity, ranging from 0.75 to 0.94. External compound predictions were carried out for the models that had their own training sets. The CORAL model returned the best performance (R{sup 2}{sub ext}=0.59), followed by the EPISuite Meylan model (R{sup 2}{sub ext}=0.58). The latter also gave the highest sensitivity on external compounds with values from 0.55 to 0.85, depending on the thresholds. Statistics were also compiled for compounds falling into the models' Applicability Domain (AD), giving better performances. In this respect, VEGA CAESAR was the best model in terms of regression (R{sup 2}=0.94) and classification (average sensitivity>0.80). This model also showed the best
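The threshold-based classification scoring described above can be sketched as follows: predicted and experimental values are turned into binary classes at a regulatory cutoff, then sensitivity (which controls false negatives, the key concern here) and the MCC are computed. The data are invented, and applying the 2000 L/kg cutoff on a log10 scale assumes the values are log-BCF.

```python
import math

def classify_and_score(pred, obs, threshold):
    """Sensitivity and MCC for a binary split at `threshold`."""
    tp = fp = tn = fn = 0
    for p, o in zip(pred, obs):
        if o >= threshold:                 # experimentally bioaccumulative
            if p >= threshold: tp += 1
            else: fn += 1                  # false negative: the worst error here
        else:
            if p >= threshold: fp += 1
            else: tn += 1
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else float("nan")
    return sens, mcc

# Invented log10-BCF values for six hypothetical compounds
pred = [2.1, 3.4, 1.2, 3.9, 2.8, 0.8]
obs  = [2.5, 3.1, 0.9, 3.6, 3.3, 1.1]
sens, mcc = classify_and_score(pred, obs, threshold=math.log10(2000))
print(f"sensitivity = {sens:.2f}, MCC = {mcc:.3f}")
```

Repeating this at 100, 500, 2000 and 5000 L/kg reproduces the kind of per-threshold comparison the study tabulates.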
International Nuclear Information System (INIS)
Vasil'ev, S.A.; Dovganchuk, I.I.; Sozinov, Y.A.
1988-01-01
The laminar flow of a liquid metal in the clearance between rotating disks is examined in an axial magnetic field. A comparison is made between the experimental and calculated values of the potential difference
DEFF Research Database (Denmark)
Petersen, Jens Højslev; Hoekstra, Eddo J.
The EURL-NRL-FCM Taskforce on the Fourth Amendment of the Plastic Directive 2002/72/EC developed a calculator for the correction of test results for comparison with the specific migration limit (SML). The calculator computes the maximum acceptable specific migration under the given experimental conditions in food or food simulant and indicates whether the test result is in compliance with the legislation. The calculator includes the Fat Reduction Factor, the simulant D Reduction Factor and the factor for the difference in surface-to-volume ratio between the test and real food contact.
International Nuclear Information System (INIS)
Hoffman, E.L.; Ammerman, D.J.
1993-01-01
A series of tests investigating dynamic pulse buckling of a cylindrical shell under axial impact is compared to several finite element simulations of the event. The purpose of the study is to compare the performance of the various analysis codes and element types with respect to a problem which is applicable to radioactive material transport packages, and ultimately to develop a benchmark problem to qualify finite element analysis codes for the transport package design industry
2016-06-10
Under the Medicare Shared Savings Program (Shared Savings Program), providers of services and suppliers that participate in an Accountable Care Organization (ACO) continue to receive traditional Medicare fee-for-service (FFS) payments under Parts A and B, but the ACO may be eligible to receive a shared savings payment if it meets specified quality and savings requirements. This final rule addresses changes to the Shared Savings Program, including: Modifications to the program's benchmarking methodology, when resetting (rebasing) the ACO's benchmark for a second or subsequent agreement period, to encourage ACOs' continued investment in care coordination and quality improvement; an alternative participation option to encourage ACOs to enter performance-based risk arrangements earlier in their participation under the program; and policies for reopening of payment determinations to make corrections after financial calculations have been performed and ACO shared savings and shared losses for a performance year have been determined.
... it takes about 2 hours for the adult body to completely break down a single drink. Do not drive after drinking. For comparison, regular beer is 5% alcohol by volume (alc/vol), table wine is about 12% alc/vol, and straight 80-proof distilled spirits is 40% alc/vol. The percent ...
McGalliard, James
2008-01-01
This viewgraph presentation details the science and systems environments that the NASA High End Computing program serves. Included is a discussion of the workload involved in global climate modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the Cubed Sphere system; results for these tests are also shown.
Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry
1998-01-01
Porting applications to new high performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using parallelization tools and compilers. In this paper, we compare the performance of the hand-written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools, an interactive computer-aided parallelization tool that generates message passing code; 2) the Portland Group's HPF compiler; and 3) compiler directives with the native FORTRAN77 compiler on the SGI Origin2000.
Neil, Amanda; Pfeffer, Sally; Burnett, Leslie
2013-01-01
This paper details the development of a new type of pathology laboratory productivity unit, the benchmarking complexity unit (BCU). The BCU provides a comparative index of laboratory efficiency, regardless of test mix. It also enables estimation of how much complex pathology a laboratory performs, and the identification of peer organisations for the purposes of comparison and benchmarking. The BCU is based on the theory that wage rates reflect productivity at the margin. A weighting factor for the ratio of medical to technical staff time was dynamically calculated from actual participant site data. Given this weighting, a complexity value for each test, at each site, was calculated. The median complexity value (number of BCUs) for each test across all participating sites was taken as its complexity value for the Benchmarking in Pathology Program. The BCU allowed implementation of an unbiased comparison unit and test listing that proved a robust indicator of the relative complexity of each test. Employing the BCU data, a number of Key Performance Indicators (KPIs) were developed, including three that address comparative organisational complexity, analytical depth and performance efficiency, respectively. Peer groups were also established using the BCU combined with simple organisational and environmental metrics. The BCU has enabled productivity statistics to be compared between organisations. The BCU corrects for differences in test mix and workload complexity of different organisations and also allows for objective stratification into peer groups.
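As a rough illustration of the BCU construction described above, the sketch below applies a wage-ratio weighting to per-test staff minutes at each site and takes the median across sites. The formula and all names are our simplification, not the program's actual method, which handles the weighting and data cleaning in more detail:

```python
from statistics import median

def site_complexity(medical_min, technical_min, med_wage, tech_wage):
    """Complexity of one test at one site: staff minutes weighted by the
    medical-to-technical wage ratio (wages as a proxy for marginal productivity)."""
    weight = med_wage / tech_wage  # dynamically derived from participant data
    return medical_min * weight + technical_min

def benchmark_complexity_unit(per_site_inputs):
    """BCU for a test: the median complexity value across participating sites."""
    return median(site_complexity(*args) for args in per_site_inputs)
```

With three hypothetical sites reporting (medical minutes, technical minutes, medical wage, technical wage) for the same test, the median of the per-site complexity values becomes that test's BCU.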
Alloui, Mebarka; Belaidi, Salah; Othmani, Hasna; Jaidane, Nejm-Eddine; Hochlaf, Majdi
2018-03-01
We performed benchmark studies on the molecular geometry, electronic properties and vibrational analysis of imidazole using semi-empirical, density functional theory and post-Hartree-Fock methods. These studies validated the use of AM1 for the treatment of larger systems. We then treated the structural, physical and chemical relationships for a series of imidazole derivatives acting as angiotensin II AT1 receptor blockers using AM1. QSAR studies were done for these imidazole derivatives using a combination of various physicochemical descriptors. A multiple linear regression procedure was used to design the relationships between molecular descriptors and the activity of the imidazole derivatives. The results validate the derived QSAR model.
The Isprs Benchmark on Indoor Modelling
Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.
2017-09-01
Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.
Handbook of critical experiments benchmarks
International Nuclear Information System (INIS)
Durst, B.M.; Bierman, S.R.; Clayton, E.D.
1978-03-01
Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input
Fokas, Emmanouil; Clifford, Charlotte; Spezi, Emiliano; Joseph, George; Branagan, Jennifer; Hurt, Chris; Nixon, Lisette; Abrams, Ross; Staffurth, John; Mukherjee, Somnath
2015-12-01
To evaluate the variation in investigator-delineated volumes and assess plans from the radiotherapy trial quality assurance (RTTQA) program of SCALOP, a phase II trial in locally advanced pancreatic cancer. Participating investigators (n=25) outlined a pre-trial benchmark case as per the RT protocol, and the accuracy of each investigator's GTV (iGTV) and PTV (iPTV) was evaluated against the trial team-defined gold-standard GTV (gsGTV) and PTV (gsPTV), using both qualitative and geometric analyses. The median Jaccard Conformity Index (JCI) and Geographical Miss Index (GMI) were calculated. Participating RT centers also submitted a radiotherapy plan for this benchmark case, which was centrally reviewed against protocol-defined constraints. Twenty-five investigator-defined contours were evaluated. The median JCI and GMI of iGTVs were 0.57 (IQR: 0.51-0.65) and 0.26 (IQR: 0.15-0.40). For iPTVs, these were 0.75 (IQR: 0.71-0.79) and 0.14 (IQR: 0.11-0.22), respectively. Qualitative analysis showed the largest variation at the tumor edges and failure to recognize a peri-pancreatic lymph node. There were no major protocol deviations in RT planning, but three minor PTV coverage deviations were identified. SCALOP demonstrated considerable variation in iGTV delineation. RTTQA workshops and real-time central review of delineations are needed in future trials. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
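The two geometric indices reported above are straightforward to compute once contours are rasterized to voxel sets. A minimal sketch, with plain Python sets of voxel IDs standing in for the 3D volumes (our naming, not the trial's software):

```python
def jaccard_conformity_index(investigator, gold):
    """JCI = |A intersect B| / |A union B| for two voxel sets;
    1.0 means perfect overlap, 0.0 means disjoint volumes."""
    return len(investigator & gold) / len(investigator | gold)

def geographical_miss_index(investigator, gold):
    """GMI: fraction of the gold-standard volume missed by the
    investigator's delineation."""
    return len(gold - investigator) / len(gold)
```

A median JCI of 0.57 for iGTVs therefore means that, for the typical investigator, barely more than half of the combined volume was common to both contours.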
Benchmarking of energy time series
Energy Technology Data Exchange (ETDEWEB)
Williamson, M.A.
1990-04-01
Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
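The simplest benchmarking adjustment, pro-rata scaling of a sub-annual series so it agrees with an annual benchmark, can be sketched in a few lines. Note this is only illustrative; it is not the staged first-differencing procedure the report recommends, which preserves period-to-period movements more carefully:

```python
def prorate_to_benchmark(monthly, annual_benchmark):
    """Scale one year's monthly series so it sums to the annual benchmark,
    preserving the within-year pattern (simple pro-rata adjustment)."""
    factor = annual_benchmark / sum(monthly)
    return [m * factor for m in monthly]
```

Applied month by month, this forces agreement with the benchmark total but distorts the month-to-month changes at year boundaries, which is precisely the weakness that first-differencing methods address.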
Comparison of ONETRAN calculations of electron beam dose profiles with Monte Carlo and experiment
International Nuclear Information System (INIS)
Garth, J.C.; Woolf, S.
1987-01-01
Electron beam dose profiles have been calculated using a multigroup, discrete ordinates solution of the Spencer-Lewis electron transport equation. This was accomplished by introducing electron transport cross-sections into the ONETRAN code in a simple manner. The authors' purpose is to "benchmark" this electron transport model and to demonstrate its accuracy and capabilities over the energy range from 30 keV to 20 MeV. Many of their results are compared with the extensive measurements and TIGER Monte Carlo data. In general, the ONETRAN results are smoother, agree with TIGER within the statistical error of the Monte Carlo histograms, and require about one tenth the running time of Monte Carlo.
International Nuclear Information System (INIS)
Takada, Tomoyuki; Yoshiyama, Hiroshi; Miyoshi, Yoshinori; Katakura, Jun-ichi
2003-01-01
The criticality safety evaluation code system JACS was developed by JAERI, and its accuracy evaluation was performed in the 1980s. Although the evaluation of JACS covered various critical systems, comparisons with a continuous-energy Monte Carlo code were not performed, because no such code had been developed at that time. Such comparisons are presented in this paper for heterogeneous and homogeneous systems containing U+Pu nitrate solutions. (author)
Directory of Open Access Journals (Sweden)
H. Groessing
2015-02-01
A benchmark study for permeability measurement is presented. Past studies by other research groups, which focused on the reproducibility of 1D permeability measurements, showed high standard deviations of the obtained permeability values (25%), even though a defined test rig with required specifications was used. Within this study, the reproducibility of capacitive in-plane permeability testing system measurements was benchmarked by comparing results of two research sites using this technology. The reproducibility was compared using a glass fibre woven textile and a carbon fibre non-crimp fabric (NCF). These two material types were taken into consideration due to the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. In order to determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents, including five repetitions. It was found that the stability and reproducibility of the presented in-plane permeability measurement system is very good in the case of the glass fibre woven textiles. This is true for the comparison of the repetition measurements as well as for the comparison between the two different permeameters. These positive results were confirmed by a comparison to permeability values of the same textile obtained with an older-generation permeameter applying the same measurement technology. It was also shown that a correct determination of the grammage and the material density is crucial for correct correlation of measured permeability values and fibre volume contents.
Energy Technology Data Exchange (ETDEWEB)
Kamph, Jerome Henri; Robinson, Darren; Wetter, Michael
2009-09-01
There is an increasing interest in the use of computer algorithms to identify combinations of parameters which optimise the energy performance of buildings. For such problems, the objective function can be multi-modal and needs to be approximated numerically using building energy simulation programs. As these programs contain iterative solution algorithms, they introduce discontinuities in the numerical approximation to the objective function. Metaheuristics often work well for such problems, but their convergence to a global optimum cannot be established formally. Moreover, different algorithms tend to be suited to particular classes of optimization problems. To shed light on this issue we compared the performance of two metaheuristics, the hybrid CMA-ES/HDE and the hybrid PSO/HJ, in minimizing standard benchmark functions and real-world building energy optimization problems of varying complexity. From this we find that the CMA-ES/HDE performs well on more complex objective functions, but that the PSO/HJ more consistently identifies the global minimum for simpler objective functions. Both identified similar values in the objective functions arising from energy simulations, but with different combinations of model parameters. This may suggest that the objective function is multi-modal. The algorithms also correctly identified some non-intuitive parameter combinations that were caused by a simplified control sequence of the building energy system that does not represent actual practice, further reinforcing their utility.
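Standard benchmark functions of the kind used to compare these optimizers, and the pattern-search half of a hybrid such as PSO/HJ, can be sketched as follows. This is a bare-bones Hooke-Jeeves-style search, not the actual hybrid algorithms compared in the study:

```python
def sphere(x):
    """Sphere benchmark function: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def rosenbrock(x):
    """Rosenbrock benchmark function: global minimum 0 at (1, ..., 1)."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-9, max_iter=100000):
    """Minimal pattern search: probe each coordinate in both directions,
    accept improvements greedily, shrink the step when no move helps."""
    x, fx = list(x0), f(x0)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
        it += 1
    return x, fx
```

On a unimodal function like the sphere this converges reliably; on multi-modal objectives of the kind produced by building energy simulations, such a local search is typically paired with a population-based metaheuristic, which is exactly the motivation for the hybrids compared above.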
International Nuclear Information System (INIS)
Qiu Qing-Lai; Xiao Bing-Jia; Guo Yong; Liu Lei; Wang Yue-Hang
2017-01-01
Vertical displacement events (VDEs) are a big challenge both to existing tokamak equipment and to machines being designed. As a Chinese next-step tokamak, the Chinese Fusion Engineering Test Reactor (CFETR) has to pay attention to VDE studies with full-fledged numerical codes during its conceptual design. The tokamak simulation code (TSC) is a free-boundary, time-dependent, axisymmetric tokamak simulation code developed at PPPL, which advances the MHD equations describing the evolution of the plasma in a rectangular domain. The electromagnetic interactions between the surrounding conductor circuits and the plasma are solved self-consistently. The TokSys code is a generic modeling and simulation environment developed at GA. Its RZIP model treats the plasma as a fixed spatial distribution of currents which couple with the surrounding conductors through circuit equations. Both codes have been individually used for VDE studies on many tokamak devices, such as JT-60U, EAST, NSTX, DIII-D, and ITER. Considering the model differences, benchmark work is needed to determine whether they reproduce each other's results correctly. In this paper, the TSC and TokSys codes are used to analyze the CFETR vertical instability passive and active controls design simultaneously. It is shown that, with the same inputs, the results from the two codes agree with each other. (paper)
International Nuclear Information System (INIS)
Theussl, L.; Noguera, S.; Amghar, A.; Desplanques, B.
2003-01-01
The effect of different boost expressions, pertinent to the instant, front and point forms of relativistic quantum mechanics, is considered for the calculation of the ground-state form factor of a two-body system in simple scalar models. Results with a Galilean boost as well as an explicitly covariant calculation based on the Bethe-Salpeter approach are given for comparison. It is found that the present so-called point-form calculations of form factors strongly deviate from all the other ones. This suggests that the formalism which underlies them requires further elaboration. A proposition in this sense is made. (author)
California commercial building energy benchmarking
Energy Technology Data Exchange (ETDEWEB)
Kinney, Satkartar; Piette, Mary Ann
2003-07-01
Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
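A benchmarking lookup of the kind a tool like Cal-Arch supports can be illustrated by ranking a building's energy use intensity (EUI) against a peer distribution. This is our illustrative sketch, not the tool's actual code:

```python
def eui_percentile(building_eui, peer_euis):
    """Percent of peer buildings whose energy use intensity (EUI)
    is at or below this building's EUI. A high percentile flags a
    candidate for an energy audit or retrofit screening."""
    at_or_below = sum(1 for e in peer_euis if e <= building_eui)
    return 100.0 * at_or_below / len(peer_euis)
```

The value of a state-specific dataset is then simply that `peer_euis` is drawn from comparable California buildings rather than a national survey such as CBECS.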
Energy Technology Data Exchange (ETDEWEB)
Chrysos, Michael, E-mail: michel.chrysos@univ-angers.fr; Rachet, Florent [LUNAM Université, Université d’Angers, CNRS UMR 6200, Laboratoire MOLTECH-Anjou, 2 Bd Lavoisier, 49045 Angers (France); Dixneuf, Sophie [Centre du Commissariat à l’Énergie Atomique de Grenoble, Laboratoire CEA-bioMérieux, Bât 40.20, 17 rue des Martyrs, 38054 Grenoble (France)
2015-07-14
This is the long-overdue answer to the discrepancies observed between theory and experiment in Ar{sub 2} regarding both the isotropic Raman spectrum and the second refractivity virial coefficient, B{sub R} [Gaye et al., Phys. Rev. A 55, 3484 (1997)]. At the origin of this progress is the advent (after 1997) of advanced computational methods for weakly interconnected neutral species at close separations. Here, we report agreement between the previously taken Raman measurements and quantum lineshapes now computed employing large-scale CCSD or carefully constructed MP2 induced-polarizability data. Using these measurements as a benchmark tool, we assess the degree of performance of various other ab initio computed data for the mean polarizability α, and we show that excellent agreement with the most recently measured value of B{sub R} is reached. We propose an even more refined model for α, which is a solution of the inverse-scattering problem and whose lineshape matches exactly the measured spectrum over the entire frequency-shift range probed.
Berger, E.; Brenne, T.; Heath, A.; Hochholdinger, B.; Kassem-Manthey, K.; Keßler, L.; Koch, N.; Kortmann, G.; Kröff, A.; Otto, T.; Steinbeck, G.; Till, E.; Verhoeven, H.; Vu, T.-C.; Wiegand, K.
2005-08-01
To increase the accuracy of finite element simulations in daily practice, the local German and Austrian Deep Drawing Research Groups of IDDRG founded a special Working Group in 2000. The main objective of this group was the ongoing study and discussion of numerical and material effects in simulation jobs, and the working out of possible solutions. As a first theme, the group selected the intensive study of small die radii and the possibility of detecting material failure at these critical forming positions. The part itself is a fictional outer body panel into which the original door handle of the VW Golf A4 has been constructed, a typical location of possible material necking or rupture in the press shop. All prerequisites for a successful simulation were taken care of in advance: material data, boundary conditions, friction, FLC and others were determined for the two materials under investigation, a mild steel and the dual-phase steel HXT500X. The results of the experiments were used to design the descriptions of two different benchmark runs for the simulation. The simulations with different programs, as well as with different parameters, showed parameters with negligible impact on one hand and parameters with strong impact on the result on the other, thereby having a different impact on a possible material failure prediction.
Some results of Krsko NPP core calculations and comparison with measurements
International Nuclear Information System (INIS)
Trkov, A.; Zefran, B.; Kromar, M.; Ravnik, M.; Slavic, S.
1996-01-01
The current status of the CORD-2 package is described. Results of the predictions of some important reactor core parameters are presented for the 12th operating cycle of the Krsko NPP. Comparison with measurements is made to illustrate that the accuracy of the calculations is acceptable. Some comments are made on the enhancements currently being implemented in the package. (author)
Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan
2017-08-28
The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical
Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4
International Nuclear Information System (INIS)
Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A
2004-01-01
The expanding clinical use of low-energy photon emitting {sup 125}I and {sup 103}Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either the DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a cylinder 30 cm in diameter and 20 cm in length. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst ±5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately ±2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV
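Code-to-code dose comparisons of this kind reduce to pointwise relative differences between radial profiles. A minimal sketch (our naming; the numbers in the test are illustrative, not the paper's data):

```python
def percent_difference(reference, other):
    """Pointwise percent difference of two radial dose profiles,
    relative to the reference code's values."""
    return [100.0 * (o - r) / r for r, o in zip(reference, other)]

def max_abs_percent_difference(reference, other):
    """Largest absolute percent deviation, the kind of figure
    (e.g. 'at worst +/-5%') quoted in code benchmarks."""
    return max(abs(d) for d in percent_difference(reference, other))
```

Sampling both codes' profiles at the same radii and reporting the maximum absolute deviation gives a single agreement figure per energy.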
Directory of Open Access Journals (Sweden)
Lopez Rebeca A
2010-01-01
Background: Because California has higher managed care penetration and the race/ethnicity of Californians differs from the rest of the United States, we tested the hypothesis that California's lower health plan Consumer Assessment of Healthcare Providers and Systems (CAHPS®) survey results are attributable to the state's racial/ethnic composition. Methods: California CAHPS survey responses for commercial health plans were compared to national responses for five selected measures: three global ratings of doctor, health plan and health care, and two composite scores regarding doctor communication and staff courtesy, respect, and helpfulness. We used the 2005 National CAHPS 3.0 Benchmarking Database to assess patient experiences of care. Multiple stepwise logistic regression was used to see if patient experience ratings based on CAHPS responses in California commercial health plans differed from all other states combined. Results: CAHPS patient experience responses in California were not significantly different from the rest of the nation after adjusting for age, general health rating, individual health plan, education, time in health plan, race/ethnicity, and gender. Both California and national patient experience scores varied by race/ethnicity. In both California and the rest of the nation, Blacks tended to be more satisfied, while Asians were less satisfied. Conclusions: California commercial health plan enrollees rate their experiences of care similarly to enrollees in the rest of the nation when seven different variables including race/ethnicity are considered. These findings support accounting for more than just age, gender and general health rating before comparing health plans from one state to another. Reporting on race/ethnicity disparities in member experiences of care could raise awareness and increase accountability for reducing these racial and ethnic disparities.
International Nuclear Information System (INIS)
Kawai, Masayoshi
1984-01-01
Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments for neutron transmission through iron blocks, performed at KFK using a {sup 252}Cf neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made with RADHEAT-V4, a shielding analysis code system developed at JAERI. The calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The D-T neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily, using the revised JENDL data for fusion neutronics calculations. (author)
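C/E (calculated-to-experiment) ratios, and accuracy checks such as the 33% figure above, are simple to express. A minimal sketch with our naming and illustrative numbers:

```python
def c_over_e(calculated, measured):
    """Calculated-to-experiment (C/E) ratios, the standard figure of
    merit in shielding benchmark analyses; 1.0 is perfect agreement."""
    return [c / e for c, e in zip(calculated, measured)]

def within_tolerance(calculated, measured, tol=0.33):
    """True at each point where |C/E - 1| is within the stated accuracy
    (here 33%, as quoted for the ORNL off-center geometry)."""
    return [abs(c / e - 1.0) <= tol for c, e in zip(calculated, measured)]
```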
Radiation Detection Computational Benchmark Scenarios
Energy Technology Data Exchange (ETDEWEB)
Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.
2013-09-24
Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessment of the operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist, with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL's ADVANTG) which combine the benefits of multiple approaches, illustrates the need for a means of evaluating and comparing different techniques for radiation detection problems. This report presents a set of nine benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for
Modeling of the ORNL PCA Benchmark Using SCALE6.0 Hybrid Deterministic-Stochastic Methodology
Directory of Open Access Journals (Sweden)
Mario Matijević
2013-01-01
Full Text Available Revised guidelines with the support of computational benchmarks are needed for the regulation of the allowed neutron irradiation to reactor structures during power plant lifetime. Currently, US NRC Regulatory Guide 1.190 is the effective guideline for reactor dosimetry calculations. The well-known international shielding database SINBAD contains a large selection of models for benchmarking neutron transport methods. In this paper a PCA benchmark has been chosen from SINBAD for qualification of our methodology for pressure vessel neutron fluence calculations, as required by Regulatory Guide 1.190. The SCALE6.0 code package, developed at Oak Ridge National Laboratory, was used for modeling of the PCA benchmark. The CSAS6 criticality sequence of the SCALE6.0 code package, which includes the KENO-VI Monte Carlo code, as well as the MAVRIC/Monaco hybrid shielding sequence, was utilized for calculation of equivalent fission fluxes. The shielding analysis was performed using the multigroup shielding library v7_200n47g derived from the general-purpose ENDF/B-VII.0 library. As a source of response functions for reaction rate calculations with MAVRIC we used the international reactor dosimetry libraries (IRDF-2002 and IRDF-90.v2) and appropriate cross sections from the transport library v7_200n47g. The comparison of calculated results and benchmark data showed good agreement between the calculated and measured equivalent fission fluxes.
Benchmarked Library Websites Comparative Study
Ramli, Rindra M.
2015-01-01
This presentation provides an analysis of the services provided by the benchmarked library websites. The exploratory study compares these websites against a list of criteria and presents the services most commonly deployed by the selected websites. In addition, the investigators propose a list of services that could be provided via the KAUST library website.
Aerodynamic Benchmarking of the Deepwind Design
DEFF Research Database (Denmark)
Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge
2015-01-01
The aerodynamic benchmarking for the DeepWind rotor is conducted by comparing different rotor geometries and solutions, keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...
Computational shielding benchmarks
International Nuclear Information System (INIS)
The American Nuclear Society Standards Committee 6.2.1 is engaged in the documentation of radiation transport problems and their solutions. The primary objective of this effort is to test computational methods used within the international shielding community. Dissemination of benchmarks will, it is hoped, accomplish several goals: (1) focus attention on problems whose solutions represent state-of-the-art methodology for representative transport problems of generic interest; (2) make comparisons of alternate computational methods, including the use of approximate vs. "exact" computer codes, more meaningful through the specification of standard problems; (3) suggest improvements in computer codes and/or associated data sets through comparison with experimental data; (4) test the reliability of new methods as they are introduced for the solution of specific problems; (5) verify user ability to apply a given computational method; and (6) verify the status of a computer program being converted for use on a different computer (e.g., CDC vs. IBM) or facility
Systematic comparison of ISOLDE-SC yields with calculated in-target production rates
Energy Technology Data Exchange (ETDEWEB)
Lukic, S.; Gevaert, F.; Kelic, A.; Ricciardi, M.V.; Schmidt, K.H.; Yordanov, O.
2006-02-15
Recently, a series of dedicated inverse-kinematics experiments performed at GSI, Darmstadt, has brought important progress in our understanding of proton- and heavy-ion-induced reactions at relativistic energies. The nuclear reaction code ABRABLA, which has been developed and benchmarked against the results of these experiments, has been used to calculate nuclide production cross sections at different energies and with different targets and beams. These calculations are used to estimate nuclide production rates by protons in thick targets, taking into account the energy loss and the attenuation of the proton beam in the target, as well as the low-energy fission induced by the secondary neutrons. The results are compared to the yields of isotopes of various elements obtained from different targets at CERN-ISOLDE with 600 MeV protons, and the overall extraction efficiencies are deduced. The dependence of these extraction efficiencies on the nuclide half-life is found to follow a simple pattern in many different cases. A simple function is proposed to parameterize this behavior in a way that quantifies the essential properties of the extraction efficiency for the element and the target/ion-source system in question. (orig.)
International benchmark on the natural convection test in Phenix reactor
International Nuclear Information System (INIS)
Tenchine, D.; Pialla, D.; Fanning, T.H.; Thomas, J.W.; Chellapandi, P.; Shvetsov, Y.; Maas, L.; Jeong, H.-Y.; Mikityuk, K.; Chenu, A.; Mochizuki, H.; Monti, S.
2013-01-01
Highlights: ► Phenix main characteristics, instrumentation and natural convection test are described. ► “Blind” calculations and post-test calculations from all the participants to the benchmark are compared to reactor data. ► Lessons learned from the natural convection test and the associated calculations are discussed. -- Abstract: The French Phenix sodium cooled fast reactor (SFR) started operation in 1973 and was stopped in 2009. Before the reactor was definitively shutdown, several final tests were planned and performed, including a natural convection test in the primary circuit. During this natural convection test, the heat rejection provided by the steam generators was disabled, followed several minutes later by reactor scram and coast-down of the primary pumps. The International Atomic Energy Agency (IAEA) launched a Coordinated Research Project (CRP) named “control rod withdrawal and sodium natural circulation tests performed during the Phenix end-of-life experiments”. The overall purpose of the CRP was to improve the Member States’ analytical capabilities in the field of SFR safety. An international benchmark on the natural convection test was organized with “blind” calculations in a first step, then “post-test” calculations and sensitivity studies compared with reactor measurements. Eight organizations from seven Member States took part in the benchmark: ANL (USA), CEA (France), IGCAR (India), IPPE (Russian Federation), IRSN (France), KAERI (Korea), PSI (Switzerland) and University of Fukui (Japan). Each organization performed computations and contributed to the analysis and global recommendations. This paper summarizes the findings of the CRP benchmark exercise associated with the Phenix natural convection test, including blind calculations, post-test calculations and comparisons with measured data. General comments and recommendations are pointed out to improve future simulations of natural convection in SFRs
Professional Performance and Bureaucratic Benchmarking Information
DEFF Research Database (Denmark)
Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz
Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...
International Nuclear Information System (INIS)
Yu, C.; Gnanapragasam, E.; Cheng, J.-J.; Biwer, B.
2006-01-01
The main purpose of this report is to document the benchmarking results and verification of the RESRAD-OFFSITE code as part of the quality assurance requirements of the RESRAD development program. This documentation will enable the U.S. Department of Energy (DOE) and its contractors, and the U.S. Nuclear Regulatory Commission (NRC) and its licensees and other stakeholders to use the quality-assured version of the code to perform dose analysis in a risk-informed and technically defensible manner to demonstrate compliance with the NRC's License Termination Rule, Title 10, Part 20, Subpart E, of the Code of Federal Regulations (10 CFR Part 20, Subpart E); DOE's 10 CFR Part 834, Order 5400.5, "Radiation Protection of the Public and the Environment"; and other Federal and State regulatory requirements as appropriate. The other purpose of this report is to document the differences and similarities between the RESRAD (onsite) and RESRAD-OFFSITE codes so that users (dose analysts and risk assessors) can make a smooth transition from use of the RESRAD (onsite) code to use of the RESRAD-OFFSITE code for performing both onsite and offsite dose analyses. The evolution of the RESRAD-OFFSITE code from the RESRAD (onsite) code is described in Chapter 1 to help the dose analyst and risk assessor make a smooth conceptual transition from the use of one code to that of the other. Chapter 2 provides a comparison of the predictions of RESRAD (onsite) and RESRAD-OFFSITE for an onsite exposure scenario. Chapter 3 documents the results of benchmarking RESRAD-OFFSITE's atmospheric transport and dispersion submodel against the U.S. Environmental Protection Agency's (EPA's) CAP88-PC (Clean Air Act Assessment Package-1988) and ISCLT3 (Industrial Source Complex-Long Term) models. Chapter 4 documents the comparison results of the predictions of the RESRAD-OFFSITE code and its submodels with the predictions of peer models. This report was prepared by Argonne National Laboratory's (Argonne
International Nuclear Information System (INIS)
Lara, Rafael G.; Maiorino, Jose R.
2013-01-01
This work aimed at the implementation and qualification of the MCNP code on a supercomputer of the Universidade Federal do ABC, making available a next-generation simulation tool for precise calculations of nuclear reactors and systems subject to radiation. The implementation of this tool will have multidisciplinary applications, covering various areas of engineering (nuclear, aerospace, biomedical), radiation physics, and others
Bahadori, Amir Alexander
Astronauts are exposed to a unique radiation environment in space. United States terrestrial radiation worker limits, derived from guidelines produced by scientific panels, do not apply to astronauts. Limits for astronauts have changed throughout the Space Age, eventually reaching the current National Aeronautics and Space Administration limit of 3% risk of exposure-induced death, with an administrative stipulation that the risk be assured to the upper 95% confidence limit. Much effort has been spent on reducing the uncertainty associated with evaluating astronaut risk for radiogenic cancer mortality, while tools that affect the accuracy of the calculations have largely remained unchanged. In the present study, the impacts of using more realistic computational phantoms with size variability to represent astronauts with simplified deterministic radiation transport were evaluated. Next, the impacts of microgravity-induced body changes on space radiation dosimetry using the same transport method were investigated. Finally, dosimetry and risk calculations resulting from Monte Carlo radiation transport were compared with results obtained using simplified deterministic radiation transport. The results of the present study indicated that the use of phantoms that more accurately represent human anatomy can substantially improve space radiation dose estimates, most notably for exposures from solar particle events under light shielding conditions. Microgravity-induced changes were less important, but results showed that flexible phantoms could assist in optimizing astronaut body position for reducing exposures during solar particle events. Finally, little overall difference between risk calculations using simplified deterministic radiation transport and those using 3D Monte Carlo radiation transport was found; however, for the galactic cosmic ray ion spectra, compensating errors were observed for the constituent ions, thus exhibiting the need to perform evaluations on a particle
Verification of the code DYN3D/R with the help of international benchmarks
International Nuclear Information System (INIS)
Grundmann, U.; Rohde, U.
1997-10-01
Different benchmarks for reactors with quadratic fuel assemblies were calculated with the code DYN3D/R. In this report, comparisons with the results of the reference solutions are carried out. The results of DYN3D/R and the reference calculation for the eigenvalue k_eff and the power distribution are shown for the steady-state 3-dimensional IAEA benchmark. The results of the NEACRP benchmarks on control rod ejections in a standard PWR were compared with the reference solutions published by the NEA Data Bank. For assessing the accuracy of the DYN3D/R results in comparison to other codes, the deviations from the reference solutions are considered. Detailed comparisons with the published reference solutions of the NEA-NSC benchmarks on uncontrolled withdrawal of control rods are made. The influence of the axial nodalization is also investigated. All in all, good agreement of the DYN3D/R results with the reference solutions can be seen for the considered benchmark problems. (orig.) [de
Directory of Open Access Journals (Sweden)
Jahn, Franziska
2015-08-01
Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system’s and information management’s costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their forms of cooperation. The benchmarking clusters also deal with different benchmarking subjects. The costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance, and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.
Comparison of the calculated neutron noise using finite differences and the Analytical Nodal Method
International Nuclear Information System (INIS)
Larsson, Viktor; Demazière, Christophe
2012-01-01
Highlights: ► Numerical neutron noise calculations for a commercial PWR. ► Comparison using finite differences and the Analytical Nodal Method. ► Little gain for the higher cost of more advanced methods. ► Finite differences adequate for neutron noise calculations. - Abstract: In this paper, a comparison of the calculated neutron noise, i.e. the fluctuation of the neutron flux around its average value assuming that all processes are stationary, is conducted between two approaches: finite differences alone, and finite differences with the Analytical Nodal Method (ANM) used to correct the neutron currents. It is seen that the lower the frequency of the noise source, the larger the difference between the two solutions. The main conclusion from this work is that the gain of calculating the neutron noise using the more sophisticated Analytical Nodal Method is too small, compared to the increase of the corresponding computational burden, to motivate the use of the ANM.
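The finite-difference scheme at issue can be illustrated, in a deliberately simplified static setting, by a one-group 1-D diffusion eigenvalue solver. This is a sketch with illustrative material data, not the frequency-domain noise solver of the paper.

```python
# Sketch: one-group, 1-D slab diffusion solved with finite differences
# and power iteration. All material data are illustrative.

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal linear system (Thomas algorithm)."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Illustrative one-group data for a bare slab (zero flux at both boundaries).
D, sig_a, nu_sig_f = 1.0, 0.02, 0.025   # cm, 1/cm, 1/cm
width, n = 100.0, 200                   # slab width [cm], interior mesh points
h = width / (n + 1)

off = [-D / h ** 2] * n                 # finite-difference coupling terms
mid = [2.0 * D / h ** 2 + sig_a] * n    # leakage + absorption on the diagonal

phi, k = [1.0] * n, 1.0
for _ in range(200):                    # power iteration on the fission source
    src = [nu_sig_f * p / k for p in phi]
    new_phi = thomas(off, mid, off, src)
    k *= sum(new_phi) / sum(phi)        # update k from the fission-source ratio
    phi = new_phi

print(round(k, 4))
```

For this data the converged k is very close to the analytic bare-slab value νΣ_f/(Σ_a + DB²) with B = π/width, which makes the sketch easy to check against hand calculation.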
TU Electric reactor physics model verification: Power reactor benchmark
International Nuclear Information System (INIS)
Willingham, C.E.; Killgore, M.R.
1988-01-01
Power reactor benchmark calculations using the advanced code package CASMO-3/SIMULATE-3 have been performed for six cycles of Prairie Island Unit 1. The reload fuel designs for the selected cycles included gadolinia as a burnable absorber, natural uranium axial blankets, and an increased water-to-fuel ratio. The calculated results for both startup reactor physics tests (boron endpoints, control rod worths, and isothermal temperature coefficients) and full-power depletion were compared to measured plant data. These comparisons show that the TU Electric reactor physics models accurately predict important measured parameters for power reactors.
Analysis of VENUS-3 benchmark experiment
International Nuclear Information System (INIS)
Kodeli, I.; Sartori, E.
1998-01-01
The paper presents the revision and analysis of the VENUS-3 benchmark experiment performed at CEN/SCK, Mol (Belgium). This benchmark was found to be particularly suitable for the validation of current calculation tools such as 3-D neutron transport codes, and in particular of the 3-D sensitivity and uncertainty analysis code developed within the EFF project. The compilation of the integral experiment was integrated into SINBAD, the electronic database for storing and retrieving information about shielding experiments for nuclear systems. SINBAD now includes 33 reviewed benchmark descriptions and several compilations awaiting review, among them many benchmarks relevant for the validation of pressure vessel dosimetry systems. (author)
International Nuclear Information System (INIS)
Broadhead, B.L.; Brady, M.C.; Parks, C.V.
1990-11-01
In 1985, the Nuclear Energy Agency-Committee on Reactor Physics (NEACRP) established a working group on shielding assessment of transportation packages. Following the initial distribution of a set of six problems, discussions were held at the Organization for Economic Cooperation and Development (OECD) Headquarters in Paris, France, in June/July 1986, May 1988, and February/March 1990. The US contribution to the working group is documented in this report. The results from this effort permit the evaluation of a number of approximations and effects that must be considered in a typical shielding analysis of a transportation cask. Among the effects reported here are the performance of multiple cross-section sets, the comparison of several source generation codes, and multidimensional versus one-dimensional (1-D) analyses. 18 refs., 16 figs., 33 tabs
International Nuclear Information System (INIS)
Dos Santos, Adimir; Siqueira, Paulo de Tarso D.; Andrade e Silva, Graciete Simões; Grant, Carlos; Tarazaga, Ariel E.; Barberis, Claudia
2013-01-01
In 2008, the Atomic Energy National Commission (CNEA) of Argentina and the Brazilian Institute of Energetic and Nuclear Research (IPEN), under the frame of the Nuclear Energy Argentine-Brazilian Agreement (COBEN), included, among many other projects, “Validation and Verification of Calculation Methods used for Research and Experimental Reactors”. At that time, it was established that the validation was to be performed with models implemented in the deterministic codes HUEMUL and PUMA (cell and reactor codes) developed by CNEA, and those implemented in MCNP by CNEA and IPEN. The necessary data for these validations would correspond to theoretical-experimental reference cases in the research reactor IPEN/MB-01 located in São Paulo, Brazil. The staff of the Reactor and Nuclear Power Studies group (SERC) of CNEA, on the Argentine side, performed calculations with deterministic models (HUEMUL-PUMA) and probabilistic methods (MCNP), modeling a great number of physical situations of the reactor which had previously been studied and modeled by members of the Center of Nuclear Engineering of IPEN, whose results were extensively provided to CNEA. In this paper, results for critical configurations are shown. (author)
Comparison of different methods of calculating CT radiation effective dose in children.
Newman, Beverley; Ganguly, Arundhuti; Kim, Jee-Eun; Robinson, Terry
2012-08-01
CT radiation dose is a subject of intense interest and concern, especially in children. Effective dose, a summation of whole-body exposure weighted by specific organ sensitivities, is most often used to compute and compare radiation dose; however, there is little standardization, and there are numerous different methods of calculating effective dose. This study compares five such methods in a group of children undergoing routine chest CT and explores their advantages and pitfalls. Patient data from 120 pediatric chest CT examinations were retrospectively used to calculate effective dose: two scanner dose-length product (DLP) methods using published sets of conversion factors by Shrimpton and Deak, the Imaging Performance Assessment of CT (ImPACT) calculator method, the Alessio online calculator, and the Huda method. The Huda method (mean effective dose 4.4 ± 2.2 mSv) and the Alessio online calculator (5.2 ± 2.8 mSv) yielded higher mean effective doses than both DLP calculations (Shrimpton, 3.65 ± 1.8 mSv, and Deak, 3.2 ± 1.5 mSv) as well as the ImPACT calculator (3.4 ± 1.7 mSv). Mean differences ranged from 10.2% ± 10.1% lower to 28% ± 37.3% higher than the Shrimpton method (used as the standard for comparison). Differences were more marked at 120 kVp than at 80 or 100 kVp and varied at different ages. Concordance coefficients relative to the Shrimpton DLP method were: Deak DLP, 0.907; Alessio online calculator, 0.735; ImPACT calculator, 0.926; and Huda, 0.777. Different methods of computing effective dose for pediatric CT produce varying results. The method used must be clearly described to allay confusion about documenting and communicating dose for archiving as well as comparative research purposes.
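The DLP-based methods referenced above share one simple computation: multiply the scanner-reported dose-length product by an age-dependent conversion coefficient. A minimal sketch follows; the coefficient values are illustrative placeholders, not the published Shrimpton or Deak factors.

```python
# Effective dose from DLP: E [mSv] = k(age) * DLP [mGy*cm].
# The chest conversion coefficients below are ILLUSTRATIVE placeholders,
# not the published Shrimpton or Deak values.
CHEST_K = {0: 0.039, 1: 0.026, 5: 0.018, 10: 0.013, 15: 0.014}  # age band -> mSv/(mGy*cm)

def effective_dose_mSv(dlp_mGy_cm: float, age_years: float) -> float:
    """Apply the coefficient of the highest age band not exceeding the patient's age."""
    band = max(b for b in CHEST_K if b <= age_years)
    return CHEST_K[band] * dlp_mGy_cm

print(effective_dose_mSv(250.0, 8))   # 0.018 * 250
```

Because each published method pairs a different coefficient table (or phantom model) with the same DLP, the spread in reported effective doses seen in the study follows directly from this one multiplication.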
Comparison of leak opening and leak rate calculations to HDR experimental results
International Nuclear Information System (INIS)
Grebner, H.; Hoefler, A.; Hunger, H.
1993-01-01
During the last years, a number of calculations of leak opening and leak rate for through-wall cracks in piping components have been performed. The analyses are pre-calculations or, mostly, post-calculations of experiments performed at the HDR facility under PWR operating conditions. The piping components under consideration were small-diameter straight pipes with circumferential cracks, pipe bends with longitudinal or circumferential cracks, and pipe branches with weldment cracks. The components were loaded by internal pressure and opening as well as closing bending moments. The finite element method and two-phase flow leak rate programs were used for the calculations. Results of the analyses are presented as J-integral values, crack opening displacements and areas, and leak rates, as well as comparisons to the experimental results
Shen, Jun; Piecuch, Piotr
2012-04-14
We have recently suggested the CC(P;Q) methodology that can correct energies obtained in the active-space coupled-cluster (CC) or equation-of-motion (EOM) CC calculations, which recover much of the nondynamical and some dynamical electron correlation effects, for the higher-order, mostly dynamical, correlations missing in the active-space CC/EOMCC considerations. It is shown that one can greatly improve the description of biradical transition states, both in terms of the resulting energy barriers and total energies, by combining the CC approach with singles, doubles, and active-space triples, termed CCSDt, with the CC(P;Q)-style correction due to missing triple excitations defining the CC(t;3) approximation.
Energy Technology Data Exchange (ETDEWEB)
Neymotin, L. [Brookhaven National Lab., Upton, NY (United States)
1994-04-01
Over the past several years, the OECD/NEA and CEC sponsored an international program intercomparing a group of six probabilistic consequence assessment (PCA) codes designed to simulate the health and economic consequences of releases of radioactive materials into the atmosphere following severe accidents at nuclear power plants (NPPs): ARANO (Finland), CONDOR (UK), COSYMA (CEC), LENA (Sweden), MACCS (USA), and OSCAAR (Japan). In parallel with this effort, two separate groups performed similar calculations using the MACCS and COSYMA codes. The results produced in the MACCS Users Group (Greece, Italy, Spain, and USA) calculations and their comparison are contained in the present report. Version 1.5.11.1 of the MACCS code was used for the calculations. Good agreement between the results produced in the four participating calculations has been reached, with the exception of the results related to the ingestion pathway dose predictions. The main reason for the scatter in those particular results is attributed to the lack of a straightforward implementation of the specifications for agricultural production and countermeasure criteria provided for the exercise. A significantly smaller scatter in the predictions of other consequences was successfully explained by differences in meteorological files and weather sampling, grids, rain distance intervals, dispersion model options, and population distributions.
U.S. Environmental Protection Agency — The Aquatic Life Benchmarks are an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...
Cohn, Steven L; Fernandez Ros, Nerea
2018-01-01
The 2014 American College of Cardiology/American Heart Association Perioperative Guidelines suggest using the Revised Cardiac Risk Index, the myocardial infarction or cardiac arrest calculator, or the American College of Surgeons-National Surgical Quality Improvement Program calculator for combined patient-surgical risk assessment. There are no published data comparing their performance. This study compared these risk calculators and a reconstructed Revised Cardiac Risk Index in predicting postoperative cardiac complications, both during hospitalization and 30 days after operation, in a patient cohort who underwent selected surgical procedures in various risk categories. Cardiac complications occurred in 14 of 663 patients (2.1%), of which 11 occurred during hospitalization. Only 3 of 663 patients (0.45%) had a myocardial infarction or cardiac arrest. Because these calculators use different risk factors, different outcomes, and different durations of observation, a true direct comparison is not possible. We found that all 4 risk calculators performed well in the setting in which they were originally studied but were less accurate when applied in a different manner. In conclusion, all calculators were useful in defining low-risk patients in whom further cardiac testing was unnecessary, and the myocardial infarction or cardiac arrest calculator may be the most reliable in selecting higher-risk patients. Copyright © 2017 Elsevier Inc. All rights reserved.
Marshall, C. J.; Marshall, P. W.; Howe, C. L.; Reed, R. A.; Weller, R. A.; Mendenhall, M.; Waczynski, A.; Ladbury, R.; Jordan, T. M.
2007-01-01
This paper presents a combined Monte Carlo and analytic approach to the calculation of the pixel-to-pixel distribution of proton-induced damage in a HgCdTe sensor array and compares the results to measured dark current distributions after damage by 63 MeV protons. The moments of the Coulombic, nuclear elastic and nuclear inelastic damage distributions were extracted from Monte Carlo simulations and combined to form a damage distribution using the analytic techniques first described in [1]. The calculations show that the high-energy recoils from the nuclear inelastic reactions (calculated using the Monte Carlo code MCNPX [2]) produce a pronounced skewing of the damage energy distribution. While the nuclear elastic component (also calculated using MCNPX) contributes only a small fraction of the total nonionizing damage energy, its inclusion in the shape of the damage across the array is significant. The Coulombic contribution was calculated using MRED [3-5], a Geant4 [4,6] application. The comparison with the dark current distribution strongly suggests that mechanisms which are not linearly correlated with nonionizing damage produced according to collision kinematics are responsible for the observed dark current increases. This has important implications for the process of predicting the on-orbit dark current response of the HgCdTe sensor array.
Calculated α-induced thick target neutron yields and spectra, with comparison to measured data
International Nuclear Information System (INIS)
Wilson, W.B.; Bozoian, M.; Perry, R.T.
1988-01-01
One component of the neutron source associated with the decay of actinide nuclides in many environments is due to the interaction of decay α particles in (α,n) reactions on low-Z nuclides. Measurements of (α,n) thick-target neutron yields and associated neutron spectra have been made for only a few combinations of α energy and target nuclide, or mixtures of actinide and target nuclides. Calculations of thick-target neutron yields and spectra with the SOURCES code require α-energy-dependent cross sections for (α,n) reactions, as well as branching fractions leading to the energetically possible levels of the product nuclides. A library of these data has been accumulated for target nuclides of Z ≤ 15, using data available from measurements and from recent GNASH code calculations. SOURCES, assuming neutrons to be emitted isotropically in the center-of-mass system, uses libraries of α stopping cross sections, (α,n) reaction cross sections, product nuclide level branching fractions, and actinide decay α spectra to calculate thick-target (α,n) yields and neutron spectra for homogeneous combinations of nuclides. The code also calculates the thick-target yield and angle-integrated neutron spectrum produced by α-particle beams on targets of homogeneous mixtures of nuclides. Illustrative calculated results are given and comparisons are made with measured thick-target yields and spectra. 50 refs., 1 fig., 2 tabs
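The thick-target yield described above reduces to an integral of the (α,n) cross section over the slowing-down path of the α particle. A minimal sketch of that integral, using invented cross-section and stopping-power values rather than SOURCES library data:

```python
import numpy as np

def thick_target_yield(e_grid, sigma, stopping, n_atoms_per_cm3):
    """Thick-target (alpha,n) neutron yield per alpha of energy e_grid[-1]:
    Y = N * integral_0^E_alpha sigma(E)/S(E) dE, with sigma in cm^2 and the
    stopping power S in MeV/cm (trapezoidal rule on the energy grid)."""
    integrand = n_atoms_per_cm3 * np.asarray(sigma) / np.asarray(stopping)
    widths = np.diff(e_grid)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * widths))

# Invented illustrative numbers (NOT evaluated nuclear data):
e = np.linspace(0.0, 5.0, 201)                # alpha energy grid, MeV
sigma = 1e-25 * np.clip(e - 1.0, 0.0, None)   # toy cross section with a 1 MeV threshold, cm^2
stop = 1.5e3 * np.ones_like(e)                # toy stopping power, MeV/cm
n = 5.0e22                                    # target atom density, cm^-3

y = thick_target_yield(e, sigma, stop, n)     # neutrons per source alpha
```

Because the yield is a cumulative integral, truncating the grid at a lower α energy always gives a smaller yield, which mirrors the α-energy dependence the abstract refers to.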
Verification of RRC Ki code package for neutronic calculations of WWER core with GD
International Nuclear Information System (INIS)
Aleshin, S.S.; Bolshagin, S.N.; Lazarenko, A.P.; Markov, A.V.; Pavlov, V.I.; Pavlovitchev, A.M.; Sidorenko, V.D.; Tsvetkov, V.M.
2001-01-01
The report presented is concerned with the verification results of the TVS-M/PERMAK-A/BIPR-7A code package for WWER neutronic calculations, as applied to the calculation of systems containing U-Gd pins. The verification is based on corresponding benchmark calculations, on data from critical experiments, and on operation data obtained from WWER units with Gd. The comparison results are discussed (Authors)
Comparison of measured and calculated composition of irradiated EBR-II blanket assemblies
International Nuclear Information System (INIS)
Grimm, K. N.
1998-01-01
In anticipation of processing irradiated EBR-II depleted uranium blanket subassemblies in the Fuel Conditioning Facility (FCF) at ANL-West, it has been possible to obtain a limited set of destructive chemical analyses of samples from a single EBR-II blanket subassembly. Comparison of calculated values with these measurements is being used to validate a depletion methodology based on a limited number of generic models of EBR-II to simulate the irradiation history of these subassemblies. Initial comparisons indicate these methods are adequate to meet the operations and material control and accountancy (MC and A) requirements for the FCF, but also indicate several shortcomings which may be corrected or improved
International Nuclear Information System (INIS)
Gregersen, A.W.
1977-01-01
A comparison is made between matrix elements calculated using the uncoupled-channel Sussex approach to second order in DWBA and matrix elements calculated using a square-well potential. The square-well potential illustrated the problem of determining parameter independence balanced with the concept of phase-shift difference. The super-soft-core potential was used to discuss the systematics of the Sussex approach as a function of angular momentum, as well as the relation between Sussex-generated and effective-interaction matrix elements. In the uncoupled channels the original Sussex method of extracting effective-interaction matrix elements was found to be satisfactory. In the coupled channels emphasis was placed upon the ³S₁-³D₁ coupled-channel matrix elements. Comparison is made between exactly calculated matrix elements and matrix elements derived using an extended formulation of the coupled-channel Sussex method. For simplicity the potential used is a nonseparable cut-off oscillator. The eigenphases of this potential can be made to approximate the realistic nucleon-nucleon phase shifts at low energies. By using the cut-off oscillator test potential, the original coupled-channel Sussex method of determining parameter independence was shown to be incapable of accurately reproducing the exact cut-off oscillator matrix elements. The extended Sussex method was found to be accurate to within 10 percent. The extended method is based upon a more general coupled-channel DWBA and a noninfinite oscillator wave-function solution to the cut-off oscillator auxiliary potential. A comparison is made in the coupled channels between matrix elements generated using the original Sussex method and the extended method. Tables of matrix elements generated using the original uncoupled-channel Sussex method and the extended coupled-channel Sussex method are presented for all necessary angular-momentum channels
Benchmarking semantic web technology
García-Castro, R
2009-01-01
This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:
Energy Technology Data Exchange (ETDEWEB)
Sunagawa, M.; Sasaki, A.; Igarashi, J.-E.; Nishimura, T. [Research Center, Sumitomo Pharmaceuticals Co., Ltd., 3-1-98 Kasugadenaka, Konohanaku, Osaka (Japan)
1998-04-01
Structural comparisons of meropenem (1), desmethyl meropenem (2) and the penem analogue (3) which contain the same side chains at both C-2 and C-6 were performed using {sup 1}H NMR measurements together with 3-21G* level of ab initio MO and molecular mechanics calculations. The ab initio MO calculations reproduced the skeletons of these strained {beta}-lactam rings in good agreement with the crystallographic data. {sup 1}H NMR measurements in aqueous solution together with molecular modeling studies indicated that there were conformational differences of the C-2 and C-6 side chains in this series of compounds. These observations suggested that the conformational differences could affect their biological activities. (Copyright (c) 1998 Elsevier Science B.V., Amsterdam. All rights reserved.)
International Nuclear Information System (INIS)
Sunagawa, M.; Sasaki, A.; Igarashi, J.-E.; Nishimura, T.
1998-01-01
Structural comparisons of meropenem (1), desmethyl meropenem (2) and the penem analogue (3), which contain the same side chains at both C-2 and C-6, were performed using ¹H NMR measurements together with 3-21G* level ab initio MO and molecular mechanics calculations. The ab initio MO calculations reproduced the skeletons of these strained β-lactam rings in good agreement with the crystallographic data. ¹H NMR measurements in aqueous solution together with molecular modeling studies indicated that there were conformational differences of the C-2 and C-6 side chains in this series of compounds. These observations suggested that the conformational differences could affect their biological activities. (Copyright (c) 1998 Elsevier Science B.V., Amsterdam. All rights reserved.)
Brown, Richard J C; Woods, Peter T
2012-01-01
A comparison of various averaging techniques to calculate the Average Exposure Indicator (AEI) specified in European Directive 2008/50/EC for particulate matter in ambient air has been performed. This was done for data from seventeen sites around the UK for which PM(10) mass concentration data is available for the years 1998-2000 and 2008-2010 inclusive. The results have shown that use of the geometric mean produces significantly lower AEI values within the required three year averaging periods and slightly lower changes in the AEI value between the three year averaging periods than the use of the arithmetic mean. The use of weighted means in the calculation, using the data capture at each site as the weighting parameter, has also been tested and this is proposed as a useful way of taking account of the confidence of each data set.
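The averaging techniques compared in this study are straightforward to reproduce. A minimal sketch with invented PM10 annual means; the weighting by data capture follows the approach proposed in the abstract, and all numbers are illustrative:

```python
import numpy as np

def arithmetic_mean(values):
    """Plain arithmetic mean, the default AEI averaging choice."""
    return float(np.mean(values))

def geometric_mean(values):
    """Geometric mean; never exceeds the arithmetic mean for positive data."""
    return float(np.exp(np.mean(np.log(values))))

def weighted_mean(values, data_capture):
    """Mean weighted by each site's data capture (fraction of valid data),
    so poorly captured sites contribute less to the indicator."""
    w = np.asarray(data_capture, dtype=float)
    return float(np.sum(w * np.asarray(values)) / np.sum(w))

# Toy PM10 annual means (ug/m3) for three sites over one averaging window:
site_means = np.array([22.0, 18.0, 25.0])
capture = np.array([0.95, 0.80, 0.99])

arith = arithmetic_mean(site_means)
geo = geometric_mean(site_means)
wavg = weighted_mean(site_means, capture)
```

The geometric mean being systematically below the arithmetic mean is exactly the effect the study reports for the AEI values.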
Comparison of inverse dynamics calculated by two- and three-dimensional models during walking
DEFF Research Database (Denmark)
Alkjaer, T; Simonsen, E B; Dyhre-Poulsen, P
2001-01-01
recorded the subjects as they walked across two force plates. The subjects were invited to approach a walking speed of 4.5 km/h. The ankle, knee and hip joint moments in the sagittal plane were calculated by 2D and 3D inverse dynamics analysis and compared. Despite the uniform walking speed (4.53 km/h) and similar footwear, relatively large inter-individual variations were found in the joint moment patterns during the stance phase. The differences between individuals were present in both the 2D and 3D analysis. For the entire sample of subjects the overall time course pattern of the ankle, knee and hip ... the magnitude of the joint moments calculated by 2D and 3D inverse dynamics, but the inter-individual variation was not affected by the different models. The simpler 2D model seems therefore appropriate for human gait analysis. However, comparisons of gait data from different studies are problematic ...
Comparison of Steady-State SVC Models in Load Flow Calculations
DEFF Research Database (Denmark)
Chen, Peiyuan; Chen, Zhe; Bak-Jensen, Birgitte
2008-01-01
This paper compares in a load flow calculation three existing steady-state models of the static var compensator (SVC): the generator-fixed susceptance model, the total susceptance model and the firing angle model. The comparison is made in terms of the voltage at the SVC-regulated bus, the equivalent SVC susceptance at the fundamental frequency, and the load flow convergence rate, both when the SVC is operating within and on its limits. The latter two models give inaccurate results for the equivalent SVC susceptance as compared to the generator model, due to the assumption of constant voltage when the SVC ... of the calculated SVC susceptance while retaining an acceptable load flow convergence rate.
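Of the three SVC models compared, the firing angle model ties the equivalent susceptance directly to the thyristor firing angle. A hedged sketch of the standard fixed-capacitor/thyristor-controlled-reactor (FC-TCR) form of that model; the parallel FC-TCR arrangement and the sign convention are textbook assumptions, not taken from the paper:

```python
import math

def tcr_susceptance(alpha, x_l):
    """Fundamental-frequency susceptance magnitude of a thyristor-controlled
    reactor versus firing angle alpha (rad), pi/2 <= alpha <= pi:
    B(alpha) = (2*(pi - alpha) + sin(2*alpha)) / (pi * x_l)."""
    if not (math.pi / 2 <= alpha <= math.pi):
        raise ValueError("firing angle must lie in [pi/2, pi]")
    return (2 * (math.pi - alpha) + math.sin(2 * alpha)) / (math.pi * x_l)

def svc_susceptance(alpha, x_l, x_c):
    """Net SVC susceptance for a fixed capacitor in parallel with a TCR:
    B_svc = B_c - B_tcr (inductive part subtracts from the capacitive part)."""
    return 1.0 / x_c - tcr_susceptance(alpha, x_l)
```

At alpha = pi/2 the TCR conducts fully (B = 1/x_l); at alpha = pi it is blocked and only the fixed capacitor remains, which is the continuous control range the firing angle model exposes to the load flow.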
VVER-1000 MOX core computational benchmark
International Nuclear Information System (INIS)
2006-01-01
The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the
Benchmarking in University Toolbox
Directory of Open Access Journals (Sweden)
Katarzyna Kuźmicz
2015-06-01
In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen to exemplify different approaches to benchmarking in a higher education setting. The study was performed on the basis of published reports from benchmarking projects, scientific literature and the experience of the author from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.
Benchmarking of refinery emissions performance : Executive summary
International Nuclear Information System (INIS)
2003-07-01
This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs
Furber, Gareth; Brann, Peter; Skene, Clive; Allison, Stephen
2011-06-01
The purpose of this study was to benchmark the cost efficiency of community care across six child and adolescent mental health services (CAMHS) drawn from different Australian states. Organizational, contact and outcome data from the National Mental Health Benchmarking Project (NMHBP) data-sets were used to calculate cost per "treatment hour" and cost per episode for the six participating organizations. We also explored the relationship between intake severity as measured by the Health of the Nations Outcome Scales for Children and Adolescents (HoNOSCA) and cost per episode. The average cost per treatment hour was $223, with cost differences across the six services ranging from a mean of $156 to $273 per treatment hour. The average cost per episode was $3349 (median $1577) and there were significant differences in the CAMHS organizational medians ranging from $388 to $7076 per episode. HoNOSCA scores explained at best 6% of the cost variance per episode. These large cost differences indicate that community CAMHS have the potential to make substantial gains in cost efficiency through collaborative benchmarking. Benchmarking forums need considerable financial and business expertise for detailed comparison of business models for service provision.
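The cost-efficiency indicators used in this benchmarking study are simple ratios and order statistics. A minimal sketch with invented episode costs; the median is shown alongside the mean because, as in the study, episode costs are strongly right-skewed:

```python
import statistics

def cost_per_treatment_hour(total_cost, treatment_hours):
    """Service-level unit cost: total expenditure over delivered clinical hours."""
    return total_cost / treatment_hours

def episode_cost_summary(episode_costs):
    """Mean and median cost per episode; for right-skewed cost data the
    median is usually well below the mean."""
    return statistics.mean(episode_costs), statistics.median(episode_costs)

# Toy episode costs for one service (right-skewed, as in the study):
costs = [400, 800, 1200, 1500, 2100, 9500]
mean_cost, median_cost = episode_cost_summary(costs)
rate = cost_per_treatment_hour(total_cost=sum(costs), treatment_hours=70)
```

Comparing these per-service figures across organizations is the collaborative benchmarking the abstract advocates; the mean/median gap alone signals how much a few long or complex episodes dominate the totals.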
Comparison Report of Open Calculations for ATLAS Domestic Standard Problem (DSP-01)
International Nuclear Information System (INIS)
Choi, Ki Yong; Kim, Y. S.; Kang, K. H.
2010-06-01
KAERI (Korea Atomic Energy Research Institute) has been operating an integral effect test facility, the ATLAS (Advanced Thermal-Hydraulic Test Loop for Accident Simulation), for accident simulations of advanced pressurized water reactors (PWRs). As an integral effect test database for major design basis accidents has been accumulated, a Domestic Standard Problem (DSP) exercise using the ATLAS was proposed in order to transfer the database to domestic nuclear industries and to contribute to improving safety analysis methodology for PWRs. This ATLAS DSP exercise was led by KAERI in collaboration with KINS, and it was the first-ever exercise of its kind in Korea. The exercise aims at effective utilization of the integral effect database obtained from the ATLAS, establishment of a cooperation framework among the domestic nuclear industry, better understanding of thermal-hydraulic phenomena, and investigation of the possible limitations of the existing best-estimate safety analysis codes. For this first DSP exercise, a 100% break scenario of the DVI nozzle was selected by considering its technical importance and by incorporating comments from participants. Twelve domestic organizations joined this DSP exercise. Finally, ten of the participating organizations, including universities, government, and nuclear industries, submitted their calculation results. This first DSP exercise was performed in an open calculation environment; the integral effect test data were open to participants prior to the code calculations. This report includes all information on the first DSP-01 exercise as well as comparison results between the calculations and the experimental data
Energy Technology Data Exchange (ETDEWEB)
Giglmayr, I.; Pogoreutz, M. [Technische Univ., Graz (Austria). Inst. fuer Waermetechnik; Nixdorf, M. [Technische Univ. Muenchen (Germany). Inst. of Thermal Power Systems
2001-07-01
The VGB Research Project No. 177 'Comparison of Software for Thermodynamic Process Calculation' gives an overview of 16 commercially available programmes and aims at presenting an objective comparison of them. This comparison is essentially based on the method of utility value analysis. As, however, the requirements to be met by these programmes differ considerably, no generally valid ranking/recommendation can be given. Thus, the programme Quick Select, developed within the project, is a tool which the potential user can apply to establish a ranking designed to meet his own specific requirements. (orig.)
International Nuclear Information System (INIS)
Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.
1987-01-01
This paper presents the latest results of the ongoing program entitled 'Standard Problems for Structural Computer Codes', currently being worked on at BNL for the USNRC Office of Nuclear Regulatory Research. During FY 1986, efforts were focused on three tasks, namely: (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it was concluded that the pore water can significantly influence the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations are addressed. Comprehensive numerical data are given for soil configurations typical of those encountered at nuclear plant sites. These data were generated by using a modified version of the SLAM code, which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054), which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on structural benchmarks are described
Benchmarking study and its application for shielding analysis of large accelerator facilities
International Nuclear Information System (INIS)
Lee, Hee-Seock; Kim, Dong-hyun; Oranj, Leila Mokhtari; Oh, Joo-Hee; Lee, Arim; Jung, Nam-Suk
2015-01-01
Shielding analysis is one of the subjects indispensable to the construction of a large accelerator facility. Several methods, such as Monte Carlo, discrete ordinates, and simplified calculations, have been used for this purpose. The statistical precision of a Monte Carlo calculation can be improved by increasing the number of trials (histories), but its accuracy is still a big issue in shielding analysis. To secure accuracy in Monte Carlo calculations, benchmarking against experimental data and code-to-code comparison are the fundamental approaches. In this paper, benchmarking results for electrons, protons, and heavy ions are presented, and the proper application of the results is discussed. The benchmarking calculations, which are indispensable in shielding analysis, were performed for different particles: proton, heavy ion and electron. Four different multi-particle Monte Carlo codes, MCNPX, FLUKA, PHITS, and MARS, were examined for the higher energy range typical of a large accelerator facility. The degree of agreement between the experimental data, including the SINBAD database, and the calculated results was estimated in terms of secondary neutron production and attenuation through the concrete and iron shields. The degree of discrepancy and the features of the Monte Carlo codes were investigated, and the application of the benchmarking results is discussed from the viewpoint of safety margins and of selecting a code for the shielding analysis. In most cases, the tested Monte Carlo codes give credible results, except for a few limitations of each code
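Attenuation through concrete and iron shields, one of the quantities compared above, is commonly summarized by an effective attenuation length fitted to a dose-versus-depth profile. A minimal sketch of such a fit on synthetic data; the single-exponential model and the 40 cm value are illustrative assumptions, not results from the paper:

```python
import math

def fit_attenuation_length(depths_cm, doses):
    """Least-squares fit of log(dose) = log(d0) - x/lam to a depth profile,
    returning the effective attenuation length lam in the same units as x."""
    n = len(depths_cm)
    ys = [math.log(d) for d in doses]
    xbar = sum(depths_cm) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(depths_cm, ys))
             / sum((x - xbar) ** 2 for x in depths_cm))
    return -1.0 / slope

# Synthetic profile with an assumed lam = 40 cm:
depths = [0, 50, 100, 150, 200]
doses = [math.exp(-x / 40.0) for x in depths]
lam = fit_attenuation_length(depths, doses)
```

Fitting the same profile with measured data and with each code's prediction gives one attenuation length per code, which is a compact way to quantify the code-to-experiment discrepancies the benchmarking study tabulates.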
Stoneburner, Samuel J.; Shen, Jun; Ajala, Adeayo O.; Piecuch, Piotr; Truhlar, Donald G.; Gagliardi, Laura
2017-10-01
Singlet-triplet gaps in diradical organic π-systems are of interest in many applications. In this study, we calculate them in a series of molecules, including cyclobutadiene and its derivatives and cyclopentadienyl cation, by using correlated participating orbitals within the complete active space (CAS) and restricted active space (RAS) self-consistent field frameworks, followed by second-order perturbation theory (CASPT2 and RASPT2). These calculations are evaluated by comparison with the results of doubly electron-attached (DEA) equation-of-motion (EOM) coupled-cluster (CC) calculations with up to 4-particle-2-hole (4p-2h) excitations. We find active spaces that can accurately reproduce the DEA-EOMCC(4p-2h) data while being small enough to be applicable to larger organic diradicals.
Benchmark assemblies of the Los Alamos Critical Assemblies Facility
International Nuclear Information System (INIS)
Dowdy, E.J.
1985-01-01
Several critical assemblies of precisely known materials composition and easily calculated and reproducible geometries have been constructed at the Los Alamos National Laboratory. Some of these machines, notably Jezebel, Flattop, Big Ten, and Godiva, have been used as benchmark assemblies for the comparison of the results of experimental measurements and computation of certain nuclear reaction parameters. These experiments are used to validate both the input nuclear data and the computational methods. The machines and the applications of these machines for integral nuclear data checks are described
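Validation against benchmark assemblies of this kind is usually reported as calculated-to-experimental (C/E) ratios. A minimal sketch with invented k_eff values; the assembly names come from the abstract, the numbers do not (a critical assembly has a measured k_eff of 1.0 by construction):

```python
def c_over_e(calculated, measured):
    """Calculated-to-experimental (C/E) ratios, the usual figure of merit
    when validating nuclear data and methods against benchmarks."""
    return {name: calculated[name] / measured[name] for name in measured}

# Toy k_eff values for hypothetical calculations of the named assemblies:
measured = {"Jezebel": 1.0000, "Godiva": 1.0000, "Flattop": 1.0000}
calculated = {"Jezebel": 0.9985, "Godiva": 1.0012, "Flattop": 0.9991}

ratios = c_over_e(calculated, measured)
worst = max(abs(r - 1.0) for r in ratios.values())  # largest deviation from unity
```

A C/E consistently away from unity across several assemblies points at the input nuclear data; scatter around unity points at statistical or methods effects, which is the distinction these integral checks are designed to expose.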
Comparisons of calculated and measured parameters for a Pu-fueled LWHCR lattice
International Nuclear Information System (INIS)
Chawla, R.; Gmuer, K.; Hager, H.; Hettergott, E.; Paratte, J.M.; Seiler, R.; Stahel, D.
1982-05-01
The lack of suitable integral experiments for Pu-fueled LWHCR lattices renders it difficult to assess the adequacy of the calculational methods and data being applied to the physics design of such reactors. The authors present results for the first of a series of cores being investigated in the PROTEUS reactor at Wuerenlingen to help fill the gap: a PuO₂/UO₂-fueled test lattice with an effective Pu enrichment of 7% and a fuel/moderator ratio of 2.0. The measurements reported include core-centre reaction rate ratios involving the capture rate in ²³⁸U and the fission rates in ²³⁵U, ²³⁸U, ²³⁹Pu and ²⁴¹Pu. Infinite-dilution reaction rates in the lattice were determined for the ¹⁰B(n,α) reaction and for ²³²Th and ²³³U fission. Also measured were various radial and axial reaction rate traverses across the test zone to provide material-buckling, and hence k∞-related, data. Comparisons of the measured parameters have been made with calculations based on several different lattice codes and their associated cross-section libraries, viz. WIMS-D, EPRI-CPM and BOXER. For several of the calculated parameters, a strong dependence on the available data sets has been observed. These are discussed in the light of the experimental results, as well as the unusual spectrum characteristics of the test lattice. (Auth.)
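The material-buckling-related data mentioned above come from fitting the measured flux traverses. A minimal sketch of extracting an axial buckling from a synthetic cosine-shaped traverse; the finite-difference estimator and all numbers are illustrative assumptions, not the PROTEUS analysis:

```python
import math

def axial_buckling(z, phi):
    """Estimate the axial buckling B_z^2 from a flux traverse phi(z) using
    the finite-difference ratio phi''(z)/phi(z) = -B_z^2 at interior points
    of a uniformly spaced grid, averaged over the traverse."""
    h = z[1] - z[0]
    estimates = []
    for i in range(1, len(z) - 1):
        second = (phi[i - 1] - 2 * phi[i] + phi[i + 1]) / h ** 2
        estimates.append(-second / phi[i])
    return sum(estimates) / len(estimates)

# Synthetic traverse phi(z) = cos(B*z) with assumed B = 0.05 cm^-1:
b = 0.05
z = [-20 + 2 * i for i in range(21)]      # -20 cm .. +20 cm, 2 cm steps
phi = [math.cos(b * x) for x in z]
b2 = axial_buckling(z, phi)               # close to B^2 = 2.5e-3 cm^-2
```

Combining the fitted axial and radial bucklings gives the material buckling, from which the k∞-related quantities referred to in the abstract follow.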
Qiu, Rui; Li, Junli; Zhang, Zhan; Wu, Zhen; Zeng, Zhi; Fan, Jiajin
2008-12-01
The Chinese mathematical phantom (CMP) is a stylized human body model developed based on the methods of the Oak Ridge National Laboratory (ORNL) mathematical phantom series (OMPS) and on data from the Reference Asian Man and the Chinese Reference Man. It is constructed for radiation dose estimation for Mongolians, whose anatomical parameters differ from those of Caucasians to some extent. Specific absorbed fractions (SAF) are useful quantities for the primary estimation of internal radiation dose. In this paper, a general Monte Carlo code, the Monte Carlo N-Particle code (MCNP), is used to transport particles and calculate SAF. A new variance reduction technique, called the "pointing probability with forced collision" method, is implemented in MCNP to reduce the calculation uncertainty, especially for a small-volume target organ. Finally, SAF data for all 31 organs of both sexes of the CMP are calculated. A comparison between SAF based on the male phantoms of the CMP and the OMPS demonstrates that differences clearly exist: more than 80% of the SAF data based on the CMP are larger than those of the OMPS. However, the differences are acceptable (they exceed one order of magnitude in less than 3% of situations) considering the differences in physique. Furthermore, the trends in SAF with increasing photon energy based on the two phantoms agree well. This model complements existing phantoms of different age, sex and ethnicity.
International Nuclear Information System (INIS)
Daures, J.; Gouriou, J.; Bordy, J.M.
2010-01-01
The authors report calculations performed with the MCNP and PENELOPE codes to determine the Hp(3)/Ka conversion coefficient, which allows the Hp(3) dose equivalent to be determined from the measured value of the air kerma. They report the definition of the phantom, a cylinder 20 cm in diameter and 20 cm high, which is considered representative of a head. Calculations are performed for an energy range corresponding to interventional radiology or cardiology (20 keV-110 keV). The results obtained with the two codes are compared
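A conversion coefficient of this kind is applied by multiplying the measured air kerma by the coefficient for the relevant beam quality, interpolating between tabulated mono-energetic values. A minimal sketch in which the coefficient values are invented placeholders, not the MCNP/PENELOPE results of the paper:

```python
def hp3_from_air_kerma(kerma_gy, h_coeff_sv_per_gy):
    """Hp(3) personal dose equivalent (Sv) from measured air kerma (Gy),
    using a conversion coefficient for the beam quality (Sv/Gy)."""
    return kerma_gy * h_coeff_sv_per_gy

def interp_coefficient(energy_kev, table):
    """Linear interpolation of conversion coefficients between tabulated
    mono-energetic values (illustrative numbers, not reference data)."""
    pts = sorted(table.items())
    for (e1, c1), (e2, c2) in zip(pts, pts[1:]):
        if e1 <= energy_kev <= e2:
            t = (energy_kev - e1) / (e2 - e1)
            return c1 + t * (c2 - c1)
    raise ValueError("energy outside tabulated range")

# Hypothetical coefficients (Sv/Gy) on the 20-110 keV range of the study:
table = {20: 0.4, 40: 1.1, 60: 1.4, 80: 1.5, 110: 1.5}
c = interp_coefficient(50, table)
dose = hp3_from_air_kerma(2.0e-3, c)   # Gy * (Sv/Gy) -> Sv
```

In practice the tabulated coefficients would also depend on beam angle and spectrum, which is precisely why the paper computes them per beam quality with two independent Monte Carlo codes.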
GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data
Energy Technology Data Exchange (ETDEWEB)
Collin, Blaise Paul [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2016-09-01
This document presents the benchmark plan for the calculation of particle fuel performance in safety testing experiments that are representative of operational accident transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and to the subsequent comparison with post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; • the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; • the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so that participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps require a modeling effort by the benchmark participants following the guidelines and recommendations provided in this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all the input data necessary to model the benchmark cases, and to give methodology guidelines and recommendations so that all results are suitable for comparison with each other. The participants should read
Benchmarking NNWSI flow and transport codes: COVE 1 results
International Nuclear Information System (INIS)
Hayden, N.K.
1985-06-01
The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs
Pool critical assembly pressure vessel facility benchmark
International Nuclear Information System (INIS)
Remec, I.; Kam, F.B.K.
1997-07-01
The pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. Section 2 describes the analysis of the PCA benchmark. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. Excellent agreement between the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M over all 31 dosimeters was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio obtained with the BUGLE-93 library over 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used
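C/M figures of this kind are arithmetic averages over the dosimeter set. A sketch of how such a statistic is formed (the ratio values below are hypothetical, not the report's data):

```python
# Sketch: arithmetic mean and sample standard deviation of
# calculated-to-measured (C/M) equivalent fission flux ratios, the
# statistic quoted in fluence benchmark reports.
# The ratio values below are hypothetical, not the report's data.
import statistics

cm_ratios = [0.91, 0.95, 0.93, 0.89, 0.96, 0.92, 0.94, 0.90]

mean = statistics.mean(cm_ratios)
sd = statistics.stdev(cm_ratios)  # sample standard deviation
print(f"C/M = {mean:.2f} +/- {sd:.2f}")
```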
Spectral calculations of WWER-440 fuel assemblies with Gd burnable absorbers
International Nuclear Information System (INIS)
Mikolas, P.
2000-01-01
The paper describes analyses of WWER-440 fuel assemblies (FA) with Gd burnable absorber (Gd 2 O 3 ). The analyses consist of a comparison of FAs with and without the Gd absorber, including variants with different Gd absorbers, and a methodology for fuel assembly burnup calculation, with the aim of finding an optimal variant of such fuel assemblies and a methodology for its calculation. A benchmark burnup comparison of FAs with Gd, as a continuation of the FA burnup comparison benchmark from 1966, is also proposed. (Authors)
Kramer, Andrew A; Higgins, Thomas L; Zimmerman, Jack E
2014-03-01
Acute Physiology and Chronic Health Evaluation IVa had better accuracy within patient subgroups and for specific admission diagnoses. Acute Physiology and Chronic Health Evaluation IVa offered the best discrimination and calibration on a large common dataset and excluded fewer patients than Mortality Probability Admission Model III or ICU Outcomes Model/National Quality Forum. The choice of ICU performance benchmarks should be based on a comparison of model accuracy using data for identical patients.
International Nuclear Information System (INIS)
Mackillop, William J.; Kong, Weidong; Brundage, Michael; Hanna, Timothy P.; Zhang-Salomons, Jina; McLaughlin, Pierre-Yves; Tyldesley, Scott
2015-01-01
Purpose: Estimates of the appropriate rate of use of radiation therapy (RT) are required for planning and monitoring access to RT. Our objective was to compare estimates of the appropriate rate of use of RT derived from mathematical models, with the rate observed in a population of patients with optimal access to RT. Methods and Materials: The rate of use of RT within 1 year of diagnosis (RT 1Y ) was measured in the 134,541 cases diagnosed in Ontario between November 2009 and October 2011. The lifetime rate of use of RT (RT LIFETIME ) was estimated by the multicohort utilization table method. Poisson regression was used to evaluate potential barriers to access to RT and to identify a benchmark subpopulation with unimpeded access to RT. Rates of use of RT were measured in the benchmark subpopulation and compared with published evidence-based estimates of the appropriate rates. Results: The benchmark rate for RT 1Y , observed under conditions of optimal access, was 33.6% (95% confidence interval [CI], 33.0%-34.1%), and the benchmark for RT LIFETIME was 41.5% (95% CI, 41.2%-42.0%). Benchmarks for RT LIFETIME for 4 of 5 selected sites and for all cancers combined were significantly lower than the corresponding evidence-based estimates. Australian and Canadian evidence-based estimates of RT LIFETIME for 5 selected sites differed widely. RT LIFETIME in the overall population of Ontario was just 7.9% short of the benchmark but 20.9% short of the Australian evidence-based estimate of the appropriate rate. Conclusions: Evidence-based estimates of the appropriate lifetime rate of use of RT may overestimate the need for RT in Ontario
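Benchmark rates of this kind are reported with binomial confidence intervals. A minimal sketch using the normal approximation (the numerator count is illustrative; only the denominator of 134,541 cases is taken from the abstract):

```python
# Sketch: 95% confidence interval for an observed rate of use of RT,
# via the normal approximation for a binomial proportion.
# The numerator below is illustrative, not the study's data.
import math

def proportion_ci(k, n, z=1.96):
    """Return (p, lo, hi): point estimate and approximate 95% CI."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

p, lo, hi = proportion_ci(k=45_200, n=134_541)
print(f"rate = {100*p:.1f}% (95% CI {100*lo:.1f}%-{100*hi:.1f}%)")
```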
Mamo, Dereje; Hazel, Elizabeth; Lemma, Israel; Guenther, Tanya; Bekele, Abeba; Demeke, Berhanu
2014-10-01
Program managers require feasible, timely, reliable, and valid measures of iCCM implementation to identify problems and assess progress. The global iCCM Task Force developed benchmark indicators to guide implementers to develop or improve monitoring and evaluation (M&E) systems. To assess Ethiopia's iCCM M&E system by determining the availability and feasibility of the iCCM benchmark indicators. We conducted a desk review of iCCM policy documents, monitoring tools, survey reports, and other relevant documents; and key informant interviews with government and implementing partners involved in iCCM scale-up and M&E. Currently, Ethiopia collects data to inform most (70% [33/47]) iCCM benchmark indicators, and modest extra effort could boost this to 83% (39/47). Eight (17%) are not available given the current system. Most benchmark indicators that track coordination and policy, human resources, service delivery and referral, supervision, and quality assurance are available through the routine monitoring systems or periodic surveys. Indicators for supply chain management are less available due to limited consumption data and a weak link with treatment data. Little information is available on iCCM costs. Benchmark indicators can detail the status of iCCM implementation; however, some indicators may not fit country priorities, and others may be difficult to collect. The government of Ethiopia and partners should review and prioritize the benchmark indicators to determine which should be included in the routine M&E system, especially since iCCM data are being reviewed for addition to the HMIS. Moreover, the Health Extension Worker's reporting burden can be minimized by an integrated reporting approach.
Comparison report of open calculations for ATLAS Domestic Standard Problem (DSP 02)
International Nuclear Information System (INIS)
Choi, Ki Yong; Kim, Y. S.; Kang, K. H.; Cho, S.; Park, H. S.; Choi, N. H.; Kim, B. D.; Min, K. H.; Park, J. K.; Chun, H. G.; Yu, Xin Guo; Kim, H. T.; Song, C. H.; Sim, S. K.; Jeon, S. S.; Kim, S. Y.; Kang, D. G.; Choi, T. S.; Kim, Y. M.; Lim, S. G.; Kim, H. S.; Kang, D. H.; Lee, G. H.; Jang, M. J.
2012-09-01
KAERI (Korea Atomic Energy Research Institute) has been operating an integral effect test facility, the Advanced Thermal Hydraulic Test Loop for Accident Simulation (ATLAS), for transient and accident simulations of advanced pressurized water reactors (PWRs). Using the ATLAS, a high-quality integral effect test database has been established for major design basis accidents of the APR1400. A Domestic Standard Problem (DSP) exercise using the ATLAS database was promoted in order to transfer the database to domestic nuclear industries and to contribute to improving safety analysis methodology for PWRs. Following the successful completion of the 1st ATLAS DSP in 2009, this 2nd exercise was led by KAERI in collaboration with KINS. It aims at effective utilization of the integral effect database obtained from the ATLAS, establishment of a cooperation framework among the domestic nuclear industry, better understanding of thermal hydraulic phenomena, and investigation of the possible limitations of the existing best estimate safety analysis codes. A small break loss of coolant accident with a 6 inch break at the cold leg was selected as the target scenario in view of its technical importance and the interests of the participants. Twelve domestic organizations, including universities, government organizations, and nuclear industries, joined this DSP 02 exercise, and eleven of them submitted calculation results. The exercise was performed in an open calculation environment in which the integral effect test data were open to participants prior to the code calculations. This report includes all information on the 2nd ATLAS DSP (DSP 02) exercise as well as comparisons between the calculations and the experimental data
International Nuclear Information System (INIS)
Brisbois, J.; Vergnaud, T.; Oceraies, Y.
1967-12-01
In a graphite pile of the EDF or Inca reactor type, it is necessary to know the value of the intermediate neutron flux at the outer edge of the lateral reflector in order to determine more precisely the neutron flux at the level of the ionisation chambers. A subcritical pile of graphite and natural uranium was built, allowing the geometry of the radiation sources and the arrangement of the lower and lateral shielding of these piles to be reproduced. This pile is fed with thermal neutrons coming from the Nereide light-water reactor. Measurements of the intermediate neutron flux were made in this pile in order to establish a formalism for calculating the slowing-down neutron flux in a whole core-lateral reflector system from the distribution of the thermal neutron flux in the core. The flux calculation is done by age theory in three dimensions, in two homogeneous media separated by an axially semi-infinite and laterally finite plane; one of the media includes a source distribution. The constants are modified to take into account the presence of empty channels in the stacking. These calculations are done with the Malaga code. The formalism was checked against the rather complex geometry of these reactors, which introduces an uncertainty factor into the comparison of results. We can nevertheless say that the variation of the intermediate neutron flux in the core, as well as its decrease in a perforated lateral reflector, is estimated correctly: the ratio between calculation and experiment is less than a factor of 2 to 3, and most of the time within a factor of 2 [fr]
DEFF Research Database (Denmark)
Seabrooke, Leonard; Wigan, Duncan
2015-01-01
Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as important...
Farnum, John D; Bowman, Joel M
2007-10-18
We investigate the dependence of the branching ratio of formaldehyde dissociation to molecular and radical products on the total energy and angular momentum and the HCO rotational state distributions by using a combination of transition state/Rice-Ramsperger-Kassel-Marcus theory and phase space theory. Comparisons are made with recent quasiclassical trajectory (QCT) calculations [Farnum, J. D.; Zhang, X.; Bowman, J. M. J. Chem. Phys. 2007, 126, 134305]. The combined phase-space analysis is in semiquantitative agreement with the QCT results for the rotational distributions of HCO but is only in qualitative agreement for the branching ratio. Nevertheless, that level of agreement serves to provide insight into the QCT results, which showed suppression of the radical channel with increasing total angular momentum for a fixed total energy.
International Nuclear Information System (INIS)
Egbert, Stephen D.; Cullings, Harry M.
2005-01-01
An important part of validating the DS02 dosimetry system is the comparison of calculated initial neutron and gamma-ray radiation activation from the atomic bombs with all measurements that have been made, both before and during this current dosimetry reevaluation. All measurements that were made before the year 2002 are listed in Table 5 of Chapter 4. Many of these measurements have been compared to previous versions of the dosimetry systems for Hiroshima and Nagasaki. In this section the measurements are compared to the new dosimetry system DS02. For the purposes of showing historical context, they are also compared to the previous dosimetry system DS86. References for these measurements are found in Chapter 4. (J.P.N.)
Comparison of burnup calculation results using several evaluated nuclear data files
International Nuclear Information System (INIS)
Suyama, Kenya; Katakura, Jun-ichi; Nomura, Yasushi
2002-01-01
Burnup calculations and a comparison of the results were carried out to clarify the differences among the following latest evaluated nuclear data libraries: JENDL-3.2, ENDF/B-VI and JEF-2.2. The analyses showed that the differences among the current evaluated nuclear data libraries are small for the evaluation of the amounts of most uranium and plutonium isotopes. However, several nuclides important for the evaluation of the nuclear fuel cycle, such as 238 Pu, 244 Cm, 149 Sm and 134 Cs, showed large differences among the libraries used. Chain analyses for these isotopes were conducted and the reasons for the differences were discussed. Based on the discussion, the cross sections that are important for obtaining better agreement with the experimental results for 238 Pu, 244 Cm, 149 Sm and 134 Cs were identified. (author)
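The inventories being compared are solutions of the Bateman depletion equations, so differences in library cross sections propagate directly into discharge concentrations. A two-member chain sketch of that mechanism (the one-group cross sections and flux are hypothetical illustration values, not data from any of the libraries above):

```python
# Sketch: simplest depletion step for a two-member transmutation chain,
# N1 --(sigma1*phi)--> N2 --(sigma2*phi)--> removal, integrated
# analytically (Bateman solution). One-group cross sections and flux
# are hypothetical illustration values, not evaluated library data.
import math

def bateman_two(n1_0, lam1, lam2, t):
    """Analytic two-member chain: returns (N1(t), N2(t)), with N2(0)=0."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

phi = 3.0e14            # n/cm^2/s, hypothetical one-group flux
sigma1 = 50.0e-24       # cm^2, parent capture cross section (hypothetical)
sigma2 = 20.0e-24       # cm^2, daughter removal cross section (hypothetical)
lam1, lam2 = sigma1 * phi, sigma2 * phi
t = 3 * 365.25 * 86400  # about 3 years of irradiation, in seconds

n1, n2 = bateman_two(1.0, lam1, lam2, t)
print(f"N1/N1(0) = {n1:.3f}, N2/N1(0) = {n2:.3f}")
```

A few percent change in sigma1 or sigma2 shifts N2(t) noticeably, which is the kind of sensitivity the chain analyses in the paper trace back to specific cross sections.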
International Nuclear Information System (INIS)
Johnson, J.O.; Miller, L.F.; Kam, F.B.K.
1981-05-01
A neutronics model of the Oak Ridge National Laboratory Bulk Shielding Reactor (ORNL-BSR) was developed and verified by experimental measurements. A cross-section library was generated from the 218-group Master Library using the AMPX code system. A series of one-, two-, and three-dimensional neutronics calculations was performed utilizing both transport and diffusion theory. A spectral comparison was made with the 58 Ni(n,p) reaction. The comparison between the calculational model and other experimental measurements showed agreement within 10%, and the model was therefore determined to be adequate for calculating the neutron fluence for future irradiation experiments in the ORNL-BSR
Energy Technology Data Exchange (ETDEWEB)
Suter, G.W. II [Oak Ridge National Lab., TN (United States); Tsao, C.L. [Duke Univ., Durham, NC (United States). School of the Environment
1996-06-01
This report presents potential screening benchmarks for the protection of aquatic life from contaminants in water. Because there is no guidance for selecting screening benchmarks, a set of alternative benchmarks is presented herein for chemicals that have been detected on the Oak Ridge Reservation. The report also presents the data used to calculate the benchmarks and the sources of the data, compares the benchmarks, and discusses their relative conservatism and utility. In this update, benchmark values are revised where appropriate, new benchmark values are added, secondary sources are replaced by primary sources, and a more complete documentation of the sources and derivation of all values is presented.
Comparisons of calculated and measured parameters for a Pu-fueled LWHCR lattice
International Nuclear Information System (INIS)
Chawla, R.; Gmuer, K.; Hager, H.; Hettergott, E.; Paratte, J.M.; Seiler, R.; Stahel, D.
1982-01-01
The lack of suitable integral experiments for Pu-fueled LWHCR lattices makes it difficult to assess the adequacy of the calculational methods and data being applied to the physics design of such reactors. The current paper presents results for the first of a series of cores being investigated in the PROTEUS reactor at Wuerenlingen to help fill this gap: a PuO 2 /UO 2 fueled test lattice with an effective Pu enrichment of 7% and a fuel/moderator ratio of 2.0. The measurements reported include core-centre reaction rate ratios involving the capture rate in 238 U and the fission rates in 235 U, 238 U, 239 Pu and 241 Pu. Infinite-dilution reaction rates in the lattice were determined for the 10 B(n,α) reaction and for 232 Th and 233 U fission. Also measured were various radial and axial reaction rate traverses across the test zone to provide material-buckling, and hence k-infinity, related data. The measured parameters have been compared with calculations based on several different lattice codes and their associated cross-section libraries, viz. WIMS-D, EPRI-CPM and BOXER
International Nuclear Information System (INIS)
Vrigneaud, J.M.; Carlier, T.
2006-01-01
A comparison of the two spreadsheets did not show any significant differences, provided that proper biological models were used to follow 131 I clearance. This means that even simple assumptions can be used to give reasonable radiation safety recommendations. Nevertheless, a complete understanding of the formalism is required to use these spreadsheets correctly: initial parameters must be chosen carefully and the computed results must be validated. Published guidelines are found to be in accordance with those derived from these spreadsheets. Furthermore, both programs make it possible to collect biological data from each patient and use them as input to calculate individually tailored radiation safety advice. Also, the measured exposure rate may be entered into the spreadsheets to calculate the patient-specific close-contact delays required to reduce the dose to specified limits. These spreadsheets may be used to compute restriction times for any given radiopharmaceutical, provided that the input parameters are chosen correctly. They can be of great help to physicians in providing patients with guidance on how to keep doses to other individuals as low as reasonably achievable. (authors)
Sample-size calculations for multi-group comparison in population pharmacokinetic experiments.
Ogungbenro, Kayode; Aarons, Leon
2010-01-01
This paper describes an approach for calculating sample size for population pharmacokinetic experiments that involve hypothesis testing based on multi-group comparison detecting the difference in parameters between groups under mixed-effects modelling. This approach extends what has been described for generalized linear models and nonlinear population pharmacokinetic models that involve only binary covariates to more complex nonlinear population pharmacokinetic models. The structural nonlinear model is linearized around the random effects to obtain the marginal model and the hypothesis testing involving model parameters is based on Wald's test. This approach provides an efficient and fast method for calculating sample size for hypothesis testing in population pharmacokinetic models. The approach can also handle different design problems such as unequal allocation of subjects to groups and unbalanced sampling times between and within groups. The results obtained following application to a one compartment intravenous bolus dose model that involved three different hypotheses under different scenarios showed good agreement between the power obtained from NONMEM simulations and nominal power. Copyright © 2009 John Wiley & Sons, Ltd.
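In the simplest two-group case with a known parameter variance, the Wald-test sample-size logic described above reduces to the familiar normal-approximation formula n = (z_{1-α/2} + z_{1-β})² · 2σ² / δ². A sketch under that simplification (the variance and effect size are hypothetical, and this is not the paper's full linearization of the mixed-effects model):

```python
# Sketch: sample size per group for detecting a difference `delta` in a
# model parameter between two groups via Wald's test, using the
# normal-approximation formula n = (z_{1-a/2} + z_{1-b})^2 * 2*var / delta^2.
# Variance and effect size below are hypothetical illustration values.
import math

def z_quantile(p):
    """Standard normal quantile via Newton iteration on the CDF (erf)."""
    x = 0.0
    for _ in range(60):
        f = 0.5 * (1 + math.erf(x / math.sqrt(2))) - p
        x -= f / (math.exp(-x * x / 2) / math.sqrt(2 * math.pi))
    return x

def n_per_group(delta, var, alpha=0.05, power=0.90):
    """Smallest integer group size meeting the requested power."""
    z = z_quantile(1 - alpha / 2) + z_quantile(power)
    return math.ceil(z * z * 2 * var / delta ** 2)

# Detect a difference of 0.20 in a parameter whose inter-group variance
# is 0.09 (both hypothetical), at 5% significance and 90% power:
print(n_per_group(delta=0.20, var=0.09))
```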
Benchmarking biofuels; Biobrandstoffen benchmarken
Energy Technology Data Exchange (ETDEWEB)
Croezen, H.; Kampman, B.; Bergsma, G.
2012-03-15
A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) greenhouse gas emissions; (2) land use; and (3) nutrient consumption.
DEFF Research Database (Denmark)
Bogetoft, Peter; Nielsen, Kurt
2005-01-01
We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...
Benchmark risk analysis models
Ale BJM; Golbach GAM; Goos D; Ham K; Janssen LAM; Shield SR; LSO
2002-01-01
A so-called benchmark exercise was initiated in which the results of five sets of tools available in the Netherlands would be compared. In the benchmark exercise a quantified risk analysis was performed on a hypothetical (non-existing) hazardous establishment located at a randomly chosen location in
P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel
1998-01-01
Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It
Bogetoft, Peter; Nielsen, Kurt
2002-01-01
We discuss the design of interactive, internet-based benchmarking using parametric (statistical) as well as non-parametric (DEA) models. The user receives benchmarks and improvement potentials, and is also given the possibility to search different efficiency frontiers and thereby explore alternative improvement strategies. Implementations of both a parametric and a non-parametric model are presented.
MCNP benchmark analyses of critical experiments for the Space Nuclear Thermal Propulsion program
Selcow, Elizabeth C.; Cerbone, Ralph J.; Ludewig, Hans; Mughabghab, Said F.; Schmidt, Eldon; Todosow, Michael; Parma, Edward J.; Ball, Russell M.; Hoovler, Gary S.
1993-01-01
Benchmark analyses have been performed of Particle Bed Reactor (PBR) critical experiments (CX) using the MCNP radiation transport code. The experiments were conducted at the Sandia National Laboratory reactor facility in support of the Space Nuclear Thermal Propulsion (SNTP) program. The test reactor is a nineteen-element, water-moderated and water-reflected thermal system. A series of integral experiments was carried out to test the capabilities of the radiation transport codes to predict the performance of PBR systems. MCNP was selected as the preferred radiation analysis tool for the benchmark experiments. Comparison between experimental and calculational results indicates close agreement. This paper describes the analyses of benchmark experiments designed to quantify the accuracy of the MCNP radiation transport code for predicting the performance characteristics of PBR reactors.
MCNP benchmark analyses of critical experiments for the Space Nuclear Thermal Propulsion program
International Nuclear Information System (INIS)
Selcow, E.C.; Cerbone, R.J.; Ludewig, H.; Mughabghab, S.F.; Schmidt, E.; Todosow, M.; Parma, E.J.; Ball, R.M.; Hoovler, G.S.
1993-01-01
Benchmark analyses have been performed of Particle Bed Reactor (PBR) critical experiments (CX) using the MCNP radiation transport code. The experiments were conducted at the Sandia National Laboratory reactor facility in support of the Space Nuclear Thermal Propulsion (SNTP) program. The test reactor is a nineteen-element, water-moderated and water-reflected thermal system. A series of integral experiments was carried out to test the capabilities of the radiation transport codes to predict the performance of PBR systems. MCNP was selected as the preferred radiation analysis tool for the benchmark experiments. Comparison between experimental and calculational results indicates close agreement. This paper describes the analyses of benchmark experiments designed to quantify the accuracy of the MCNP radiation transport code for predicting the performance characteristics of PBR reactors
Resolution for the Loviisa benchmark problem
International Nuclear Information System (INIS)
Garcia, C.R.; Quintero, R.; Milian, D.
1992-01-01
In the present paper, the Loviisa benchmark problem for cycles 11 and 8 and reactor blocks 1 and 2 of the Loviisa NPP is calculated. This problem uses low-leakage reload patterns and was posed at the second thematic group of the TIC meeting held in Rheinsberg, GDR, in March 1989. The SPPS-1 coarse-mesh code has been used for the calculations
Benchmark experiment on vanadium assembly with D-T neutrons. Leakage neutron spectrum measurement
Energy Technology Data Exchange (ETDEWEB)
Kokooo; Murata, I.; Nakano, D.; Takahashi, A. [Osaka Univ., Suita (Japan); Maekawa, F.; Ikeda, Y.
1998-03-01
Fusion neutronics benchmark experiments have been performed for vanadium and a vanadium alloy using a slab assembly and the time-of-flight (TOF) method. The leakage neutron spectra were measured from 50 keV to 15 MeV, and comparisons were made with MCNP-4A calculations using evaluated nuclear data from JENDL-3.2, JENDL-Fusion File and FENDL/E-1.0. (author)
Verification and validation benchmarks.
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, William Louis; Trucano, Timothy Guy
2007-02-01
Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
Verification and validation benchmarks
International Nuclear Information System (INIS)
Oberkampf, William L.; Trucano, Timothy G.
2008-01-01
Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the
Sogukpinar, Haci; Bozkurt, Ismail
2018-02-01
The aerodynamic performance of the airfoil plays the most important role in obtaining economically maximum efficiency from a wind turbine; the airfoil should therefore have an ideal aerodynamic shape. In this study, an aerodynamic simulation of the S809 airfoil is conducted and the results are compared with earlier NASA experimental results and NREL theoretical data. First, the lift coefficient, lift-to-drag ratio and pressure coefficient around the S809 airfoil are calculated with the SST turbulence model and compared with experimental and other theoretical data to assess the correctness of the computational approach. The results show good correlation with both the experimental and theoretical data. The calculations indicate that the lift-to-drag ratio increases with increasing relative velocity, attains a maximum at an angle of attack of around 6 degrees, and then starts to decrease again. The comparison shows that the CFD code used in this calculation can predict the aerodynamic properties of the airfoil.
Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.
Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J
2016-07-01
Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analyses on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, its interpretation and potential actions. The availability of on-line measurement data at many WWTPs should theoretically enable the management response time to be decreased by daily benchmarking. Unfortunately this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performances. In EOS, the energy KPIs take the pollutant load into consideration in order to enable the comparison between different plants. For example, EOS does not analyse the energy consumption itself but the energy consumption per unit of pollutant load. This approach enables the comparison of performances for plants with different loads or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. By coupling the parameter estimation with an interval-based benchmark approach, the authors propose an effective, fast and reproducible
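The normalization and gap-filling steps described in this abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the EOS implementation; all function and variable names are hypothetical, and simple linear interpolation stands in for the paper's uncertainty-managed parameter estimation.

```python
import numpy as np

def estimate_daily_load(sample_days, sample_loads, all_days):
    """Fill the gaps between ~fortnightly laboratory measurements of
    pollutant load by linear interpolation (a simple stand-in for the
    paper's parameter estimation with uncertainty management)."""
    return np.interp(all_days, sample_days, sample_loads)

def specific_energy_kpi(daily_kwh, daily_load_kg):
    """Energy KPI normalized by pollutant load (kWh per kg of load),
    so that plants with different loads become comparable."""
    return np.asarray(daily_kwh, dtype=float) / np.asarray(daily_load_kg, dtype=float)
```

Normalizing by load rather than reporting raw consumption is what lets a single plant be benchmarked against itself under different load conditions.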
Experimental and computational benchmark tests
International Nuclear Information System (INIS)
Gilliam, D.M.; Briesmeister, J.F.
1994-01-01
A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252 Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235 U, 239 Pu, 238 U, and 237 Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a 'light water' S(α,β) scattering kernel
Workshop: Monte Carlo computational performance benchmark - Contributions
International Nuclear Information System (INIS)
Hoogenboom, J.E.; Petrovic, B.; Martin, W.R.; Sutton, T.; Leppaenen, J.; Forget, B.; Romano, P.; Siegel, A.; Hoogenboom, E.; Wang, K.; Li, Z.; She, D.; Liang, J.; Xu, Q.; Qiu, Y.; Yu, J.; Sun, J.; Fan, X.; Yu, G.; Bernard, F.; Cochet, B.; Jinaphanh, A.; Jacquet, O.; Van der Marck, S.; Tramm, J.; Felker, K.; Smith, K.; Horelik, N.; Capellan, N.; Herman, B.
2013-01-01
This series of slides is divided into 3 parts. The first part is dedicated to the presentation of the Monte-Carlo computational performance benchmark (aims, specifications and results). This benchmark aims at performing a full-size Monte Carlo simulation of a PWR core with axial and pin-power distribution. Many different Monte Carlo codes have been used and their results have been compared in terms of computed values and processing speeds. It appears that local power values mostly agree quite well. The first part also includes the presentations of about 10 participants in which they detail their calculations. In the second part, an extension of the benchmark is proposed in order to simulate a more realistic reactor core (for instance non-uniform temperature) and to assess feedback coefficients due to change of some parameters. The third part deals with another benchmark, the BEAVRS benchmark (Benchmark for Evaluation And Validation of Reactor Simulations). BEAVRS is also a full-core PWR benchmark for Monte Carlo simulations
Developing integrated benchmarks for DOE performance measurement
Energy Technology Data Exchange (ETDEWEB)
Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.
1992-09-30
The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which could then become the basis for selecting performance benchmarks. Databases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazard and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Databases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.
Energy Technology Data Exchange (ETDEWEB)
Hannstein, Volker; Sommer, Fabian
2017-05-15
The report summarizes the studies performed and the results obtained in the frame of the phase II benchmarks of the expert group on used nuclear fuel (EGUNF) of the working party on nuclear criticality safety (WPNCS) of the nuclear energy agency (NEA) of the organization for economic co-operation and development (OECD). The studies specified within the benchmarks have been realized to the full extent. The scope of the benchmarks was the comparison of a generic BWR fuel element with gadolinium-containing fuel rods using several computer codes and cross-section libraries of different international working groups and institutions. The computational model used allows the evaluation of the accuracy of the fuel rod inventory calculations and of their respective influence on BWR burnup credit calculations.
International Nuclear Information System (INIS)
Esnaashari, K. N.; Allahverdi, M.; Gharaati, H.; Shahriari, M.
2007-01-01
Stereotactic radiosurgery is an important clinical tool for the treatment of small lesions in the brain, including benign conditions and malignant and localized metastatic tumors. A dosimetry study was performed for the Elekta 'Synergy S' as a dedicated stereotactic radiosurgery unit, capable of generating circular radiation fields with diameters of 1-5 cm at isocentre, using the BEAM/EGS4 Monte Carlo code. Materials and Methods: The linear accelerator Elekta Synergy S, equipped with a set of 5 circular collimators from 10 mm to 50 mm in diameter at the isocentre distance, was used. The cones were inserted in a base plate mounted on the collimator of the linac head. A PinPoint chamber and a Wellhofer water tank chamber were selected for clinical dosimetry of the 6 MV photon beams. The results of simulations using the Monte Carlo system BEAM/EGS4 to model the beam geometry were compared with dose measurements. Results: An excellent agreement was found between the Monte Carlo calculated and measured percentage depth doses and lateral dose profiles, which were obtained in a water phantom for circular cones 1, 2, 3, 4 and 5 cm in diameter. The comparison between calculations and measurements showed up to 0.5% or 1 mm difference for all field sizes. The penumbra (80-20%) results at 5 cm depth in the water phantom and SSD = 95 cm ranged from 1.5 to 2.1 mm for circular collimators with diameters of 1 to 5 cm. Conclusion: This study showed that the BEAMnrc code is accurate in modeling the Synergy S linear accelerator equipped with circular collimators
Meijer, Gert J; van den Berg, Hetty A; Hurkmans, Coen W; Stijns, Pascal E; Weterings, Jan H
2006-09-01
To compare the dosimetrical results of an interactive planning procedure and a procedure based on dynamic dose calculation for permanent prostate brachytherapy. Between 6/2000 and 11/2005, 510 patients underwent (125)I implants for T1-T2 prostate cancer. Before 4/2003, 187 patients were treated using an interactive technique that included needle updating. After that period, 323 patients were treated with a more refined dynamic technique that included constant updating of the deposited seed position. The comparison is based on postimplant dose-volume parameters such as the V(100) and d(90) for the target, V(100)(r) for the rectum and d(10)(u) for the urethra. Furthermore, the target volume ratios (TVR ≡ V(100)(body)/V(100)) and the homogeneity indices (HI ≡ [V(100)-V(150)]/V(100)) were calculated as additional quality parameters. The dose outside the target volume was significantly reduced, the V(100)(r) decreased from 1.4 cm(3) for the interactive technique to 0.6 cm(3) for the dynamic technique. Similarly the mean TVR reduced from 1.66 to 1.44. In addition, the mean V(100) increased from 92% for the interactive procedure to 95% for the dynamic procedure. More importantly, the percentage of patients with a V(100) < 80% reduced from 5% to 1%. A slight decline was observed with regard to the d(10)(u) (136% vs. 140%) and the HI (0.58 vs. 0.51). The dynamic implant procedure resulted in improved implants. Almost ideal dose coverage was achieved, while minimizing the dose outside the prostate.
Dosimetric comparison of interactive planned and dynamic dose calculated prostate seed brachytherapy
International Nuclear Information System (INIS)
Meijer, Gert J.; Berg, Hetty A. van den; Hurkmans, Coen W.; Stijns, Pascal E.; Weterings, Jan H.
2006-01-01
Purpose: To compare the dosimetrical results of an interactive planning procedure and a procedure based on dynamic dose calculation for permanent prostate brachytherapy. Materials and methods: Between 6/2000 and 11/2005, 510 patients underwent 125 I implants for T1-T2 prostate cancer. Before 4/2003, 187 patients were treated using an interactive technique that included needle updating. After that period, 323 patients were treated with a more refined dynamic technique that included constant updating of the deposited seed position. The comparison is based on postimplant dose-volume parameters such as the V 100 and d 90 for the target, V 100 r for the rectum and d 10 u for the urethra. Furthermore, the target volume ratios (TVR = V 100 body /V 100 ), and the homogeneity indices (HI = [V 100 -V 150 ]/V 100 ) were calculated as additional quality parameters. Results: The dose outside the target volume was significantly reduced, the V 100 r decreased from 1.4 cm 3 for the interactive technique to 0.6 cm 3 for the dynamic technique. Similarly the mean TVR reduced from 1.66 to 1.44. In addition, the mean V 100 increased from 92% for the interactive procedure to 95% for the dynamic procedure. More importantly, the percentage of patients with a V 100 < 80% reduced from 5% to 1%. A slight decline was observed with regard to the d 10 u (136% vs. 140%) and the HI (0.58 vs. 0.51). Conclusion: The dynamic implant procedure resulted in improved implants. Almost ideal dose coverage was achieved, while minimizing the dose outside the prostate
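The two quality parameters defined in this abstract can be computed directly from the reported dose-volume values. A minimal sketch follows; the function names are illustrative, not from the paper:

```python
def target_volume_ratio(v100_body_cm3, v100_target_cm3):
    """TVR = V100(body) / V100(target): how much of the 100% isodose
    volume spills outside the target (ideal value: 1.0)."""
    return v100_body_cm3 / v100_target_cm3

def homogeneity_index(v100, v150):
    """HI = (V100 - V150) / V100: the fraction of the 100% isodose
    volume that is not also covered by the 150% isodose, so a higher
    HI means a more homogeneous implant."""
    return (v100 - v150) / v100
```

On these definitions, the drop in mean TVR from 1.66 to 1.44 reported above corresponds directly to less dose deposited outside the target.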
Toxicological Benchmarks for Wildlife
Energy Technology Data Exchange (ETDEWEB)
Sample, B.E.; Opresko, D.M.; Suter, G.W.
1993-01-01
Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
International Nuclear Information System (INIS)
Dos Santos, Adimir; Siqueira, Paulo de Tarso D.; Andrade e Silva, Graciete Simões; Grant, Carlos; Tarazaga, Ariel E.; Barberis, Claudia
2013-01-01
In the year 2008 the National Atomic Energy Commission (CNEA) of Argentina and the Brazilian Institute of Energy and Nuclear Research (IPEN), under the frame of the Argentine-Brazilian Nuclear Energy Agreement (COBEN), among many others, included the project "Validation and Verification of Calculation Methods Used for Research and Experimental Reactors". At this time, it was established that the validation was to be performed with models implemented in the deterministic codes HUEMUL and PUMA (cell and reactor codes) developed by CNEA and those implemented in MCNP by CNEA and IPEN. The necessary data for these validations would correspond to theoretical-experimental reference cases in the research reactor IPEN/MB-01 located in São Paulo, Brazil. The staff of the group Reactor and Nuclear Power Studies (SERC) of CNEA, on the Argentine side, performed calculations with deterministic models (HUEMUL-PUMA) and probabilistic methods (MCNP), modeling a great number of physical situations of the reactor, which had previously been studied and modeled by members of the Center of Nuclear Engineering of the IPEN, whose results were extensively provided to CNEA. In this paper, results of the comparison of calculated and experimental values for temperature coefficients, kinetic parameters and spatial distributions of fission rates are shown. (author)
Irwanto, Dwi; Setiadipura, Topan; Pramutadi, Asril
2017-07-01
There are two types of High Temperature Gas Reactor (HTGR): prismatic and pebble bed. The pebble bed type has a unique configuration because the fuels are randomly distributed inside the reactor core. In terms of safety features, the Pebble Bed Reactor (PBR) is one of the most promising reactor types for avoiding severe nuclear accidents. In order to analyze the heat transfer and safety of this reactor type, a computer code is now under development. As a first step, the calculation method proposed by Stroh [1] is adopted. An approach has been made to treat the randomly distributed pebble balls containing fissile material inside the reactor core as a porous medium. Helium gas, acting as the coolant of the reactor system, carries heat as it flows through the space between the pebble balls. Several parameters and constants are taken into account in the newly developed code. Progress in the development of the code, especially a comparison of several thermal conductivity constants for a certain PBR case, is reported in the present study.
Kiehl, J. T.; Briegleb, B. P.
1992-01-01
The clear sky greenhouse effect is defined in terms of the outgoing longwave clear sky flux at the top of the atmosphere. Recently, interest in the magnitude of the clear sky greenhouse effect has increased due to the archiving of the clear sky flux quantity through the Earth Radiation Budget Experiment (ERBE). The present study investigates to what degree of accuracy this flux can be analyzed by using independent atmospheric and surface data in conjunction with a detailed longwave radiation model. The conclusion from this comparison is that for most regions over oceans the analyzed fluxes agree to within the accuracy of the ERBE-retrieved fluxes (+/- 5 W/sq m). However, in regions where deep convective activity occurs, the ERBE fluxes are significantly higher (10-15 W/sq m) than the calculated fluxes. This bias can arise from either cloud contamination problems or variability in water vapor amount. It is argued that the use of analyzed fluxes may provide a more consistent clear sky flux data set for general circulation modeling validation. Climate implications from the analyzed fluxes are explored. Finally, results for obtaining longwave surface fluxes over the oceans are presented.
Comparison of force fields and calculation methods for vibration intervals of isotopic H+3 molecules
International Nuclear Information System (INIS)
Carney, G.D.; Adler-Golden, S.M.; Lesseski, D.C.
1986-01-01
This paper reports (a) improved values for low-lying vibration intervals of H+3, H2D+, D2H+, and D+3 calculated using the variational method and Simons-Parr-Finlan representations of the Carney-Porter and Dykstra-Swope ab initio H+3 potential energy surfaces, (b) quartic normal coordinate force fields for isotopic H+3 molecules, (c) comparisons of variational and second-order perturbation theory, and (d) convergence properties of the Lai-Hagstrom internal coordinate vibrational Hamiltonian. Standard deviations between experimental and ab initio fundamental vibration intervals of H+3, H2D+, D2H+, and D+3 for these potential surfaces are 6.9 cm-1 (Carney-Porter) and 1.2 cm-1 (Dykstra-Swope). The standard deviations between perturbation theory and exact variational fundamentals are 5 and 10 cm-1 for the respective surfaces. The internal coordinate Hamiltonian is found to be less efficient than the previously employed 't' coordinate Hamiltonian for these molecules, except in the case of H2D+
Financial Integrity Benchmarks
City of Jackson, Mississippi — This dataset compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings.
Energy Technology Data Exchange (ETDEWEB)
Xu, H; Guerrero, M; Chen, S; Langen, K; Prado, K [University of Maryland School of Medicine, Baltimore, MD (United States); Yang, X [Medstar RadAmerica, Baltimore, MD (United States); Schinkel, C [Tom Baker Cancer Centre, Calgary, AB (Canada)
2015-06-15
Purpose: The TG-71 report was published in 2014 to present standardized methodologies for MU calculations and the determination of dosimetric quantities. This work explores the clinical implementation of a TG-71-based electron MU calculation algorithm and compares it with a recently released commercial secondary calculation program, Mobius3D (Mobius Medical System, LP). Methods: TG-71 electron dosimetry data were acquired, and MU calculations were performed based on the recently published TG-71 report. The formalism in the report for extended SSD using air-gap corrections was used. The dosimetric quantities, such as PDD, output factors, and f-air factors, were incorporated into an organized databook that facilitates data access and subsequent computation. The Mobius3D program utilizes a pencil beam redefinition algorithm. To verify the accuracy of calculations, five customized rectangular cutouts of different sizes - 6×12, 4×12, 6×8, 4×8, 3×6 cm{sup 2} - were made. Calculations were compared to each other and to point dose measurements for electron beams of energy 6, 9, 12, 16, 20 MeV. Each calculation/measurement point was at the depth of maximum dose for each cutout in a 10×10 cm{sup 2} or 15×15 cm{sup 2} applicator with SSDs of 100 cm and 110 cm. Validation measurements were made with a CC04 ion chamber in a solid water phantom for electron beams of energy 9 and 16 MeV. Results: Differences between the TG-71 and the commercial system relative to measurements were within 3% for most combinations of electron energy, cutout size, and SSD. A 5.6% difference between the two calculation methods was found only for the 6 MeV electron beam with the 3×6 cm{sup 2} cutout in the 10×10 cm{sup 2} applicator at 110 cm SSD. Both the TG-71 and the commercial calculations show good consistency with chamber measurements: for the 5 cutouts, <1% difference at 100 cm SSD, and 0.5-2.7% at 110 cm SSD. Conclusions: Based on comparisons with measurements, a TG71-based computation method and a Mobius3D
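The extended-SSD air-gap correction mentioned in this abstract follows the general effective-SSD inverse-square form used in TG-25/TG-71-style electron formalisms. The sketch below is an illustrative simplification, not the authors' databook code; every parameter name is an assumption, and clinical use would also require PDD and air-gap output factors omitted here.

```python
def electron_mu_extended_ssd(prescribed_dose_cgy, ref_dose_per_mu_cgy,
                             output_factor, ssd_eff_cm, d_max_cm, gap_cm):
    """Hypothetical electron MU calculation at extended SSD.

    The inverse-square factor uses an effective SSD, with gap_cm the
    extra air gap beyond the nominal calibration SSD; gap_cm = 0
    reproduces the standard-SSD calculation."""
    inv_sq = ((ssd_eff_cm + d_max_cm) /
              (ssd_eff_cm + d_max_cm + gap_cm)) ** 2
    dose_per_mu = ref_dose_per_mu_cgy * output_factor * inv_sq
    return prescribed_dose_cgy / dose_per_mu
```

As expected from the inverse-square behavior, moving from 100 cm to 110 cm SSD (gap_cm = 10) reduces dose per MU and therefore increases the MU needed for the same prescribed dose.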
Han, Rui; Lu, Xiaoyi
2014-01-01
Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...
Neutron transport model for standard calculation experiment
International Nuclear Information System (INIS)
Lukhminskij, B.E.; Lyutostanskij, Yu.S.; Lyashchuk, V.I.; Panov, I.V.
1989-01-01
Neutron transport calculation algorithms for complex composition media with a predetermined geometry are realized through multigroup representations within the Monte Carlo method in the MAMONT code. The accuracy of the code was evaluated by comparison with benchmark experiments. Calculations of neutron leakage spectra in spherically symmetric geometry were carried out for iron and polyethylene. Use of the MAMONT code for the metrological support of geophysics tasks is proposed. The code is oriented toward calculations of neutron transport and secondary nuclide accumulation in blankets and geophysical media. 7 refs.; 2 figs
International Nuclear Information System (INIS)
Hoffman, E.L.; Ammerman, D.J.
1995-04-01
A series of tests investigating dynamic pulse buckling of a cylindrical shell under axial impact is compared to several 2D and 3D finite element simulations of the event. The purpose of the work is to investigate the performance of various analysis codes and element types on a problem which is applicable to radioactive material transport packages, and ultimately to develop a benchmark problem to qualify finite element analysis codes for the transport package design industry. Four axial impact tests were performed on 4 in-diameter, 8 in-long, 304L stainless steel cylinders with a 3/16 in wall thickness. The cylinders were struck by a 597 lb mass with an impact velocity ranging from 42.2 to 45.1 ft/sec. During the impact event, a buckle formed at each end of the cylinder, and one of the two buckles became unstable and collapsed. The instability occurred at the top of the cylinder in three tests and at the bottom in one test. Numerical simulations of the test were performed using the following codes and element types: PRONTO2D with axisymmetric four-node quadrilaterals; PRONTO3D with both four-node shells and eight-node hexahedrons; and ABAQUS/Explicit with axisymmetric two-node shells and four-node quadrilaterals, and 3D four-node shells and eight-node hexahedrons. All of the calculations are compared to the tests with respect to deformed shape and impact load history. As in the tests, the location of the instability is not consistent in all of the calculations. However, the calculations show good agreement with the impact load measurements, with the exception of an initial load spike which is shown to be the dynamic response of the load cell to the impact. Finally, the PRONTO2D calculation is compared to the tests with respect to strain and acceleration histories. Accelerometer data exhibited good qualitative agreement with the calculations. The strain comparisons show that the measurements are very sensitive to gage placement
Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca
2007-03-01
A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm-3) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm-3), normal lung (ρ = 0.20 g cm-3) and cortical bone tissue (ρ = 1.80 g cm-3). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm2) and elongated rectangular (2.8 × 13 cm2) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts or inside the uniform water. For this study, a distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm-3), enhanced for the most energetic beam. For denser, and more clinical, densities a better agreement among the sophisticated algorithms with respect to MC was observed.
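The γ-index analysis used in this study combines a dose-difference criterion with a distance-to-agreement (DTA) criterion; a point passes when γ ≤ 1. A simplified 1D global-γ sketch under assumed criteria follows (brute-force search over all points, illustrative only; names are not from the paper):

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, positions_mm,
                   dta_mm=3.0, dd_percent=3.0):
    """For each reference point, minimize the combined normalized
    dose-difference / distance metric over all evaluated points.
    Global normalization: the dose difference is taken relative to
    the maximum reference dose."""
    dose_ref = np.asarray(dose_ref, dtype=float)
    dose_eval = np.asarray(dose_eval, dtype=float)
    positions_mm = np.asarray(positions_mm, dtype=float)
    dd_abs = dd_percent / 100.0 * dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i in range(dose_ref.size):
        dist2 = ((positions_mm - positions_mm[i]) / dta_mm) ** 2
        dose2 = ((dose_eval - dose_ref[i]) / dd_abs) ** 2
        gamma[i] = np.sqrt(np.min(dist2 + dose2))
    return gamma
```

With the study's criteria, dta_mm would stay at 3 mm while dd_percent varies from 2 to 10; in practice the evaluated curve is also interpolated to a finer grid so the DTA search is not limited to measured points.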
Energy Technology Data Exchange (ETDEWEB)
Hull, R.N. [JAYCOR, Vienna, VA (United States)]|[Oak Ridge National Lab., TN (United States); Suter, G.W. II [Oak Ridge National Lab., TN (United States)
1994-06-01
Because a hazardous waste site may contain hundreds of chemicals, it is important to screen contaminants of potential concern for the ecological risk assessment. Often this screening is done as part of a Screening Assessment, the purpose of which is to evaluate the available data, identify data gaps, and screen contaminants of potential concern. Screening may be accomplished by using a set of toxicological benchmarks. These benchmarks are helpful in determining whether contaminants warrant further assessment or are at a level that requires no further attention. If a chemical concentration or the reported detection limit exceeds a proposed lower benchmark, more analysis is needed to determine the hazards posed by that chemical. If, however, the chemical concentration falls below the lower benchmark value, the chemical may be eliminated from further study. This report briefly describes three categories of approaches to the development of sediment quality benchmarks. These approaches are based on analytical chemistry, toxicity test and field survey data. A fourth integrative approach incorporates all three types of data. The equilibrium partitioning approach is recommended for screening nonpolar organic contaminants of concern in sediments. For inorganics, the National Oceanic and Atmospheric Administration has developed benchmarks that may be used for screening. There are supplemental benchmarks from the province of Ontario, the state of Wisconsin, and US Environmental Protection Agency Region V. Pore water analysis is recommended for polar organic compounds; comparisons are then made against water quality benchmarks. This report is an update of a prior report. It contains revised ER-L and ER-M values, the five EPA proposed sediment quality criteria, and benchmarks calculated for several nonionic organic chemicals using equilibrium partitioning.
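The screening rule described in this abstract (compare the measured concentration, or the reported detection limit for non-detects, against the proposed lower benchmark and retain exceedances for further study) can be stated directly. A minimal sketch; the function and parameter names are illustrative:

```python
def retain_as_copc(concentration, detection_limit, lower_benchmark):
    """Screening rule: a contaminant is retained as a contaminant of
    potential concern (COPC) if its concentration, or its reported
    detection limit when it was not detected, exceeds the proposed
    lower benchmark; otherwise it may be eliminated from further
    study."""
    value = concentration if concentration is not None else detection_limit
    return value > lower_benchmark
```

Using the detection limit for non-detects is the conservative choice: a chemical whose detection limit exceeds the benchmark cannot be cleared by the screen.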
Satellite observations of stratospheric hydrogen fluoride and comparisons with SLIMCAT calculations
Harrison, Jeremy J.; Chipperfield, Martyn P.; Boone, Christopher D.; Dhomse, Sandip S.; Bernath, Peter F.; Froidevaux, Lucien; Anderson, John; Russell, James, III
2016-08-01
The vast majority of emissions of fluorine-containing molecules are anthropogenic in nature, e.g. chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), and hydrofluorocarbons (HFCs). Many of these fluorine-containing species deplete stratospheric ozone and are regulated by the Montreal Protocol. Once in the atmosphere they slowly degrade, ultimately leading to the formation of hydrogen fluoride (HF), the dominant reservoir of stratospheric fluorine due to its extreme stability. Monitoring the growth of stratospheric HF is therefore an important marker for the success of the Montreal Protocol. We report the comparison of global distributions and trends of HF measured in the Earth's atmosphere by the satellite remote-sensing instruments ACE-FTS (Atmospheric Chemistry Experiment Fourier transform spectrometer), which has been recording atmospheric spectra since 2004, and HALOE (HALogen Occultation Experiment), which recorded atmospheric spectra between 1991 and 2005, with the output of SLIMCAT, a state-of-the-art three-dimensional chemical transport model. In general the agreement between observation and model is good, although the ACE-FTS measurements are biased high by ˜ 10 % relative to HALOE. The observed global HF trends reveal a substantial slowing down in the rate of increase of HF since the 1990s: 4.97 ± 0.12 % year-1 (1991-1997; HALOE), 1.12 ± 0.08 % year-1 (1998-2005; HALOE), and 0.52 ± 0.03 % year-1 (2004-2012; ACE-FTS). In comparison, SLIMCAT calculates trends of 4.01, 1.10, and 0.48 % year-1, respectively, for the same periods; the agreement is very good for all but the earlier of the two HALOE periods. Furthermore, the observations reveal variations in the HF trends with latitude and altitude; for example, between 2004 and 2012 HF actually decreased in the Southern Hemisphere below ˜ 35 km. An additional SLIMCAT simulation with repeating meteorology for the year 2000 produces much cleaner trends in HF with minimal variations with latitude
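Percent-per-year trends like those quoted above are commonly obtained by fitting the logarithm of the time series against time, so the slope is the fractional growth rate. A minimal sketch of that calculation, on a synthetic series rather than ACE-FTS or HALOE data:

```python
# Sketch of a percent-per-year trend estimate: least-squares fit of ln(HF)
# vs. time, slope converted to % per year. The series is synthetic.
import math

def trend_percent_per_year(years, values):
    """Least-squares slope of ln(value) vs. year, returned in % per year."""
    n = len(years)
    ln_v = [math.log(v) for v in values]
    mean_t = sum(years) / n
    mean_y = sum(ln_v) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(years, ln_v))
             / sum((t - mean_t) ** 2 for t in years))
    return 100.0 * slope

# Synthetic HF series growing at exactly 0.5 % per year over 2004-2012.
years = list(range(2004, 2013))
hf = [1.0 * math.exp(0.005 * (t - 2004)) for t in years]
print(round(trend_percent_per_year(years, hf), 3))  # recovers 0.5 % per year
```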
Benchmarking infrastructure for mutation text mining.
Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo
2014-02-25
Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction focus mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
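The infrastructure described above computes its performance metrics with SPARQL queries over RDF annotations; the same precision/recall/F1 calculation can be sketched in plain Python by comparing predicted mutation mentions against a gold standard. The document IDs and mutation strings below are hypothetical.

```python
# Plain-Python sketch of the precision/recall/F1 metrics that the benchmark
# infrastructure computes via SPARQL. Gold and predicted annotations are
# hypothetical (doc_id, mutation) pairs.

def precision_recall_f1(gold, predicted):
    """Micro-averaged precision, recall, and F1 over (doc_id, mutation) pairs."""
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {("doc1", "V600E"), ("doc1", "T790M"), ("doc2", "G12D")}
predicted = {("doc1", "V600E"), ("doc2", "G12D"), ("doc2", "Q61K")}

p, r, f = precision_recall_f1(gold, predicted)
print(p, r, f)  # 2 of 3 predictions correct, 2 of 3 gold pairs recovered
```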
Benchmarking for Cost Improvement. Final report
Energy Technology Data Exchange (ETDEWEB)
1993-09-01
The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; and provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.
Goponov, Yu. A.; Laktionova, S. A.; Sidnin, M. A.; Vnukov, I. E.
2017-07-01
To evaluate and improve the previously proposed method for calculating diffracted photon yields in thin perfect crystals, calculated and experimental results were compared over a wide range of photon and electron energies. It is shown that the proposed method describes all of the investigated experimental results, for both bremsstrahlung diffraction and diffracted transition radiation, with an error of less than ten to fifteen percent. Consequently, the method may be used to calculate the influence of electron beam divergence on the diffracted transition radiation angular distribution.
Parton Distribution Benchmarking with LHC Data
Ball, Richard D.; Carrazza, Stefano; Debbio, Luigi Del; Forte, Stefano; Gao, Jun; Hartland, Nathan; Huston, Joey; Nadolsky, Pavel; Rojo, Juan; Stump, Daniel; Thorne, Robert S.; Yuan, C. -P.
2012-01-01
We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross
International Nuclear Information System (INIS)
Fragopoulou, M.; Manolopoulou, M.; Stoulos, S.; Golovatyuk, S.; Krivopustov, M.I.; Sosnin, A.N.; Westmeier, W.; Brandt, R.; Debeauvais, M.; Zamani Valasiadou, M.
2005-01-01
Thick Pb targets of different lengths were irradiated with 1 GeV protons at the Nuclotron accelerator of the High Energy Laboratory, JINR, Dubna. To favour transmutation via (n,γ) reactions, a paraffin moderator was used. Neutron distributions were measured with solid-state nuclear track detectors (SSNTDs). A comparison of the experimental results with calculations on the moderator surface is given
Directory of Open Access Journals (Sweden)
Cindy Dianita
2016-04-01
One of the key issues in pipeline design is wall thickness calculation. This paper highlights a comparison of wall thickness calculation methods for submarine gas pipelines based on the Norwegian standard (DNV-OS-F101), the Indonesian standard SNI 3474, which refers to the American standard (ASME B31.8), and the Russian standard (VN39-1.9-005-98). A calculation of wall thickness for a submarine gas pipeline in Indonesia (pressure 12 MPa, external diameter 668 mm) gives results of 18.2 mm (VN39-1.9-005-98), 16 mm (ASME B31.8), and 13.5 mm (DNV-OS-F101). The design formula for hoop stress due to internal pressure is interpreted differently in each standard. Only the Norwegian standard requires calculating hoop stress at the inner surface, which leads to a decreased value of the wall thickness. Furthermore, the collapse factor due to external pressure is only regulated in the American and Norwegian standards, while the Russian standard uses that factor as an intermediate parameter in calculating local buckling. For propagation buckling, both the Russian and American standards give an empirical formula for the critical hydrostatic pressure as the input to the propagation buckling calculation. This formula is almost identical to the empirical formula of the Norwegian standard. From the comparison of these standards, DNV-OS-F101 imposes more stringent requirements than the others
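All three standards start from a hoop-stress (Barlow-type) design formula for internal pressure. A hedged sketch of the ASME B31.8 form, t = P·D / (2·S·F·E·T), is below; the pipe grade, design factor, and joint/temperature factors are assumptions chosen for illustration, so the result is not expected to reproduce the 16 mm quoted above.

```python
# Hedged sketch of the hoop-stress (Barlow) wall thickness formula in
# ASME B31.8 form: t = P*D / (2*S*F*E*T). The SMYS and factors below are
# assumptions for illustration, not the values used in the paper.

def wall_thickness_mm(p_mpa, d_mm, smys_mpa, f=0.72, e=1.0, t=1.0):
    """Minimum wall thickness (mm) from internal pressure, before corrosion
    allowance and fabrication tolerance are added."""
    return p_mpa * d_mm / (2.0 * smys_mpa * f * e * t)

# 12 MPa, 668 mm OD pipe (as in the paper), assumed API 5L X65 grade
# (SMYS 448 MPa) and an assumed design factor of 0.72.
print(round(wall_thickness_mm(12.0, 668.0, 448.0), 1))  # -> 12.4 mm with these assumed factors
```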
Supermarket Refrigeration System - Benchmark for Hybrid System Control
DEFF Research Database (Denmark)
Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal
2007-01-01
This paper presents a supermarket refrigeration system as a benchmark for development of new ideas and a comparison of methods for hybrid systems' modeling and control. The benchmark features switch dynamics and discrete valued input making it a hybrid system; furthermore, the outputs are subjected to a number of constraints. The objective is to develop an efficient and optimal control strategy.
Aerodynamic benchmarking of the DeepWind design
DEFF Research Database (Denmark)
Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge
The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...
Heusch, Philipp; Wittsack, Hans-Jörg; Pentang, Gael; Buchbender, Christian; Miese, Falk; Schek, Julia; Kröpil, Patric; Antoch, Gerald; Lanzman, Rotem S
2013-12-01
Biexponential analysis has been used increasingly to obtain the contributions of both diffusion and microperfusion to the signal decay in diffusion-weighted imaging (DWI) of different parts of the body. The aim was to compare biexponential diffusion parameters of transplanted kidneys obtained with three different calculation methods. DWI was acquired in 15 renal allograft recipients (eight men, seven women; mean age, 52.4 ± 14.3 years) using a paracoronal EPI sequence with 16 b-values (b = 0-750 s/mm(2)) and six averages at 1.5T. No respiratory gating was used. Three different methods were used for the calculation of biexponential diffusion parameters: Fp, ADCP, and ADCD were calculated without fixing any parameter a priori (calculation method 1); ADCP was fixed to 12.0 µm(2)/ms, whereas Fp and ADCD were calculated using the biexponential model (calculation method 2); a multistep approach with monoexponential fitting of the high b-value portion (b ≥ 250 s/mm(2)) for determination of ADCD and assessment of the low-b intercept for determination of Fp (calculation method 3). For quantitative analysis, ROI measurements were performed on the corresponding parameter maps. Mean ADCD values of the renal cortex using calculation method 1 were significantly lower than those using calculation methods 2 and 3 (P < 0.001). There was a significant correlation between calculation methods 1 and 2 (r = 0.69; P < 0.005) and calculation methods 1 and 3 (r = 0.59; P < 0.05), as well as calculation methods 2 and 3 (r = 0.98; P < 0.001). Mean Fp values of the renal cortex were higher with calculation method 1 than with calculation methods 2 and 3 (P < 0.001). For Fp, only the correlation between calculation methods 2 and 3 was significant (r = 0.98; P < 0.001). Biexponential diffusion parameters differ significantly depending on the method used for their calculation.
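Calculation method 3 above (monoexponential fit of the high-b portion for ADCD, then Fp from the low-b intercept) can be sketched in a few lines on synthetic data. The parameter values below are assumptions chosen only to exercise the fit, not patient data; note the small bias in the recovered parameters from residual perfusion signal at b = 250 s/mm².

```python
# Sketch of "calculation method 3": log-linear fit of S(b) for b >= 250
# s/mm^2 gives ADC_D, and the low-b intercept gives F_p. Data are synthetic.
import math

def fit_method3(bvals, signals, s0, b_cut=250.0):
    """Return (ADC_D in mm^2/s, F_p) from a log-linear fit of S(b) for
    b >= b_cut, using ln S = ln(S0*(1 - Fp)) - ADC_D * b."""
    pts = [(b, math.log(s)) for b, s in zip(bvals, signals) if b >= b_cut]
    n = len(pts)
    mb = sum(b for b, _ in pts) / n
    ml = sum(l for _, l in pts) / n
    slope = (sum((b - mb) * (l - ml) for b, l in pts)
             / sum((b - mb) ** 2 for b, _ in pts))
    intercept = ml - slope * mb
    return -slope, 1.0 - math.exp(intercept) / s0

# Synthetic biexponential decay: S0 = 1000, Fp = 0.2, ADC_D = 2.0e-3 mm^2/s,
# ADC_P = 12.0e-3 mm^2/s (the value fixed a priori in method 2).
s0, f_p, adc_d, adc_p = 1000.0, 0.2, 2.0e-3, 12.0e-3
bvals = [0, 50, 100, 150, 200, 250, 300, 400, 500, 600, 750]
signals = [s0 * ((1 - f_p) * math.exp(-b * adc_d) + f_p * math.exp(-b * adc_p))
           for b in bvals]

est_adc, est_fp = fit_method3(bvals, signals, s0)
print(est_adc, est_fp)  # close to 2.0e-3 and 0.2, with a small perfusion-contamination bias
```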
Rath, Emma M; Tessier, Dominique; Campbell, Alexander A; Lee, Hong Ching; Werner, Tim; Salam, Noeris K; Lee, Lawrence K; Church, W Bret
2013-03-27
Helical membrane proteins are vital for the interaction of cells with their environment. Predicting the location of membrane helices in protein amino acid sequences provides substantial understanding of their structure and function and identifies membrane proteins in sequenced genomes. Currently there is no comprehensive benchmark tool for evaluating prediction methods, and there is no publication comparing all available prediction tools. Current benchmark literature is outdated, as recently determined membrane protein structures are not included. Current literature is also limited to global assessments, as specialised benchmarks for predicting specific classes of membrane proteins were not previously carried out. We present a benchmark server at http://sydney.edu.au/pharmacy/sbio/software/TMH_benchmark.shtml that uses recent high resolution protein structural data to provide a comprehensive assessment of the accuracy of existing membrane helix prediction methods. The server further allows a user to compare uploaded predictions generated by novel methods, permitting the comparison of these novel methods against all existing methods compared by the server. Benchmark metrics include sensitivity and specificity of predictions for membrane helix location and orientation, and many others. The server allows for customised evaluations such as assessing prediction method performances for specific helical membrane protein subtypes. We report results for custom benchmarks which illustrate how the server may be used for specialised benchmarks. Which prediction method performs best depends on which measure is being benchmarked. The OCTOPUS membrane helix prediction method is consistently one of the highest performing methods across all measures in the benchmarks that we performed. The benchmark server allows general and specialised assessment of existing and novel membrane helix prediction methods. Users can employ this benchmark server to determine the most
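Two of the benchmark metrics mentioned above, sensitivity and specificity, can be computed at the per-residue level from an observed and a predicted topology string. The sketch below uses hypothetical sequences ('M' = membrane helix residue, '-' = non-membrane), not data from the benchmark server.

```python
# Residue-level sensitivity and specificity for membrane helix prediction,
# computed from hypothetical observed/predicted topology strings.

def sensitivity_specificity(observed, predicted):
    """Sensitivity = recall on helix residues; specificity = recall on
    non-helix residues."""
    tp = sum(1 for o, p in zip(observed, predicted) if o == "M" and p == "M")
    fn = sum(1 for o, p in zip(observed, predicted) if o == "M" and p != "M")
    tn = sum(1 for o, p in zip(observed, predicted) if o != "M" and p != "M")
    fp = sum(1 for o, p in zip(observed, predicted) if o != "M" and p == "M")
    return tp / (tp + fn), tn / (tn + fp)

observed  = "----MMMMMMMM----MMMMMMMM----"
predicted = "-----MMMMMMM----MMMMMMMMMM--"

sens, spec = sensitivity_specificity(observed, predicted)
print(sens, spec)  # 15/16 of helix residues found; 10/12 non-helix residues correct
```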
Benchmarking the Netherlands. Benchmarking for growth
International Nuclear Information System (INIS)
2003-01-01
This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy, in other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout
PNNL Information Technology Benchmarking
Energy Technology Data Exchange (ETDEWEB)
DD Hostetler
1999-09-08
Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.
Comparison of the Astigmatic Power of Toric Intraocular Lenses Using Three Toric Calculators.
Park, Hyun Ju; Lee, Hun; Woo, Young Jae; Kim, Eung Kweon; Seo, Kyoung Yul; Kim, Ha Yan; Kim, Tae-im
2015-07-01
To compare the astigmatic power of toric intraocular lenses (IOLs) obtained from the AcrySof, TECNIS, and iTrace toric calculators in patients with preoperative with-the-rule (WTR) or against-the-rule (ATR) corneal astigmatism. Fifty eyes with cataract and corneal astigmatism greater than 0.75 diopters were enrolled in each group (WTR and ATR). Keratometric values were measured using autokeratometry, an IOLMaster, and an iTrace, which incorporated corneal topography and ray-tracing aberrometry. Based on the measured keratometric values, the astigmatic power of each toric IOL was calculated using the three toric calculators. Bland-Altman plots showed good agreement between six pairwise corneal astigmatism values in both groups. The TECNIS calculator tended to suggest a higher astigmatic power of the toric IOL than the AcrySof calculator. With the higher astigmatism and keratometric values from the IOLMaster, in both groups, calculations from the AcrySof and TECNIS calculators resulted in higher calculated astigmatic powers than those from the same calculators with autokeratometry-measured values, demonstrating good agreement. With the higher calculated astigmatic power values, the values from the iTrace toric calculator using keratometric values obtained from iTrace ray-tracing wavefront aberrometry or iTrace simulated keratometry showed fair to moderate agreement with those from the other calculator-keratometry pairs in both groups. To achieve the best refractive outcome after toric IOL implantation, understanding the differences in keratometric values between instruments and in calculated astigmatic power among toric calculator programs is necessary. Moreover, systematic analysis of each toric calculator in conjunction with postoperative data is required.
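The Bland-Altman agreement analysis used above reduces to the mean paired difference (bias) and its 95 % limits of agreement. A minimal sketch, with hypothetical paired astigmatic powers rather than the study's data:

```python
# Bland-Altman bias and 95 % limits of agreement for paired measurements.
# The paired astigmatic powers (diopters) below are hypothetical.
import math

def bland_altman(a, b):
    """Return (bias, lower limit, upper limit) for paired data."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical toric IOL astigmatic powers from two calculators, five eyes.
calc_a = [1.50, 2.25, 1.50, 3.00, 2.25]
calc_b = [1.50, 2.25, 2.25, 3.00, 1.50]

bias, lo, hi = bland_altman(calc_a, calc_b)
print(bias, lo, hi)  # zero bias; limits of agreement about +/- 1.04 D
```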
Helium generation reaction rates for 6Li and 10B in benchmark facilities
International Nuclear Information System (INIS)
Farrar, Harry IV; Oliver, B.M.; Lippincott, E.P.
1980-01-01
The helium generation rates for 10B and 6Li have been measured in two benchmark reactor facilities having neutron spectra similar to those found in a breeder reactor. The irradiations took place in the Coupled Fast Reactivity Measurements Facility (CFRMF) and in the 10% enriched 235U critical assembly, BIG-10. The helium reaction rates were obtained by precise high-sensitivity gas mass spectrometric analyses of the helium content of numerous small samples. Comparison of these reaction rates with other reaction rates measured in the same facilities, and with rates calculated from published cross sections and from best estimates of the neutron spectral shapes, indicate significant discrepancies in the calculated values. Additional irradiations in other benchmark facilities have been undertaken to better determine the energy ranges where the discrepancies lie
Benchmarking the UAF Tsunami Code
Nicolsky, D.; Suleimani, E.; West, D.; Hansen, R.
2008-12-01
We have developed a robust numerical model to simulate propagation and run-up of tsunami waves in the framework of non-linear shallow water theory. A temporal position of the shoreline is calculated using the free-surface moving boundary condition. The numerical code adopts a staggered leapfrog finite-difference scheme to solve the shallow water equations formulated for depth-averaged water fluxes in spherical coordinates. To increase spatial resolution, we construct a series of telescoping embedded grids that focus on areas of interest. For large scale problems, a parallel version of the algorithm is developed by employing a domain decomposition technique. The developed numerical model is benchmarked in an exhaustive series of tests suggested by NOAA. We conducted analytical and laboratory benchmarking for the cases of solitary wave runup on simple and composite beaches, run-up of a solitary wave on a conical island, and the extreme runup in the Monai Valley, Okushiri Island, Japan, during the 1993 Hokkaido-Nansei-Oki tsunami. Additionally, we field-tested the developed model to simulate the November 15, 2006 Kuril Islands tsunami, and compared the simulated water height to observations at several DART buoys. In all conducted tests we calculated a numerical solution with an accuracy recommended by NOAA standards. In this work we summarize results of numerical benchmarking of the code, its strengths and limits with regards to reproduction of fundamental features of coastal inundation, and also illustrate some possible improvements. We applied the developed model to simulate potential inundation of the city of Seward located in Resurrection Bay, Alaska. To calculate the areal extent of potential inundation, we take into account available near-shore bathymetry and inland topography on a grid of 15 meter resolution. By choosing several scenarios of potential earthquakes, we calculated the maximal areal extent of Seward inundation. As a test to validate our model, we
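The staggered leapfrog scheme mentioned above can be illustrated on the linearized 1-D shallow water equations over a flat bottom. This is only a structural sketch under strong simplifying assumptions (linear, Cartesian, flat bathymetry, wall boundaries); the UAF code itself is nonlinear, spherical, and two-dimensional.

```python
# Minimal 1-D staggered-grid scheme for linearized shallow water:
# eta_t + h u_x = 0, u_t + g eta_x = 0. Velocities live on cell faces,
# surface elevation on cell centers. Purely illustrative.
import math

g, depth = 9.81, 100.0                 # gravity (m/s^2), still-water depth (m)
nx, dx = 200, 100.0                    # grid cells, spacing (m)
dt = 0.5 * dx / math.sqrt(g * depth)   # CFL-limited time step

eta = [math.exp(-(((i - nx / 2) * dx) / 1000.0) ** 2) for i in range(nx)]  # initial hump
u = [0.0] * (nx + 1)                   # face velocities

mass0 = sum(eta) * dx
for _ in range(400):
    # momentum update on faces, then continuity update on centers
    for i in range(1, nx):
        u[i] -= g * dt / dx * (eta[i] - eta[i - 1])
    for i in range(nx):
        eta[i] -= depth * dt / dx * (u[i + 1] - u[i])
    # u[0] and u[nx] stay 0: reflective walls, so total mass is conserved

print(abs(sum(eta) * dx - mass0) < 1e-6, max(abs(e) for e in eta) < 2.0)
```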
DEFF Research Database (Denmark)
Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela
This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions. Deviations from the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...
Slater, E.; Whalley, L.; Woodward-Massey, R.; Ye, C.; Crilley, L.; Kramer, L. J.; Vu, T.; Bloss, W.; Squires, F. A.; Dunmore, R.; Lee, J. D.; Heard, D. E.
2017-12-01
In Beijing, poor urban air quality has a demonstrable effect on human health. During the wintertime, anthropogenic emissions from fossil fuel combustion can lead to high aerosol loadings and haze events. A high oxidation capacity on hazy days has previously been inferred from the significant contribution secondary organic aerosol (SOA) makes to total PM2.5 (Huang et al., 2014). The hydroxyl radical (OH) mediates virtually all of the oxidative chemistry in the atmosphere, being responsible for the transformation of primary emissions into secondary pollutants such as NO2, O3 and SOA. Understanding the sources and sinks of OH in the atmosphere is essential for improving predictions of the lifetimes and chemical processing of pollutants and their transport within urban areas. We will present OH and HO2 measurements made in central Beijing during the recent 'An Integrated Study of AIR Pollution PROcesses in Beijing (AIRPRO)' project, which took place in November and December 2016. OH measurements were made using the FAGE (Fluorescence Assay by Gas Expansion) technique, with the use of an inlet pre-injector (IPI) which provides an alternative method to determine the background by injecting a scavenger (propane) to remove ambient OH. The OH measurements were made over a range of meteorological conditions including a number of haze days, with the average maximum OH concentration measured for the campaign being 2.5 × 10^6 cm^-3; on haze days the OH concentration reached levels of 3.5 × 10^6 cm^-3, which is comparable to OH levels on non-haze days. We will compare the OH observations to steady-state calculations constrained by the total OH reactivity and key OH precursors that were measured alongside OH. Through this comparison we will identify the major OH sources which sustain the wintertime oxidation capacity. The current understanding is that gas-phase oxidation via the OH radical becomes less important in haze events due to lower light and ozone levels, making photochemistry
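The steady-state calculation referred to above has a simple core: when the total OH loss rate is constrained by a measured OH reactivity k_OH (s^-1), the steady-state concentration is [OH]ss = P_OH / k_OH. The production rate and reactivity below are hypothetical illustration values, not AIRPRO campaign data.

```python
# Sketch of a steady-state OH estimate constrained by measured total OH
# reactivity. Input values are hypothetical, not campaign measurements.

def oh_steady_state(p_oh, k_oh):
    """[OH]ss in molecules cm^-3 from a total OH production rate P_OH
    (molecules cm^-3 s^-1) and a measured total OH reactivity k_OH (s^-1)."""
    return p_oh / k_oh

# Hypothetical wintertime values: P_OH = 5e7 molecules cm^-3 s^-1 and a
# high, haze-like OH reactivity of 20 s^-1.
print(oh_steady_state(5e7, 20.0))  # 2.5e6 molecules cm^-3, the order of magnitude measured
```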
Present status of International Criticality Safety Benchmark Evaluation Project (ICSBEP)
International Nuclear Information System (INIS)
Miyoshi, Yoshinori
2000-01-01
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was designed to identify and evaluate a comprehensive set of critical-experiment benchmark data. The data are compiled into a standardized format by reviewing original and subsequently revised documentation and by calculating each experiment with standard criticality safety codes. Five handbooks of evaluated criticality safety benchmark experiments have been published since 1995. (author)
Computational benchmark problem for deep penetration in iron
International Nuclear Information System (INIS)
Hendricks, J.S.; Carter, L.L.
1980-01-01
A calculational benchmark problem which is simple to model and easy to interpret is described. The benchmark consists of monoenergetic 2-, 4-, or 40-MeV neutrons normally incident upon a 3-m-thick pure iron slab. Currents, fluxes, and radiation doses are tabulated throughout the slab
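For the slab geometry above, the uncollided component of the flux falls off exponentially with shield depth. The sketch below uses an assumed round-number total cross section, not an evaluated value, and real deep-penetration results are dominated by scattered flux that only a transport calculation (as in the benchmark) captures.

```python
# Hedged sketch: uncollided flux attenuation through an iron slab,
# phi(x) = phi0 * exp(-Sigma_t * x). Sigma_t here is an assumed placeholder,
# not an evaluated iron cross section.
import math

def uncollided_flux(phi0, sigma_t_per_cm, x_cm):
    """Uncollided flux after x_cm of shield for a normally incident beam."""
    return phi0 * math.exp(-sigma_t_per_cm * x_cm)

sigma_t = 0.25  # assumed total macroscopic cross section, cm^-1
for x in (0.0, 100.0, 200.0, 300.0):
    print(x, uncollided_flux(1.0, sigma_t, x))
```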
Energy Technology Data Exchange (ETDEWEB)
Zimmerman, B.E., E-mail: bez@nist.go [Physics Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899-8462 (United States); Altzitzoglou, T. [European Commission, Joint Research Centre, Institute for Reference Materials and Measurements (IRMM), Retieseweg 111, B-2440 Geel (Belgium); Rodrigues, D. [Laboratorio de Metrologia de Radioisotopos, Comision Nacional de Energia Atomica, Buenos Aires (Argentina); Broda, R. [Institute of Atomic Energy, Radioisotope Centre POLATOM, Otwock-Swierk (Poland); Cassette, P. [LNE-Laboratoire National Henri Becquerel, CEA-Saclay, Gif-sur-Yvette Cedex (France); Mo, L. [Australian Nuclear Science and Technology Organization, Menai (Australia); Ratel, G. [Bureau International des Poids et Mesures (BIPM), Pavillon de Breteuil, Sevres Cedex (France); Simpson, B.; Wyngaardt, W. van [National Metrology Institute of South Africa, Rosebank, Cape Town (South Africa); Waetjen, C. [National Institute of R and D for Physics and Nuclear Engineering Horia Hulubei, Magurele (Romania)
2010-07-15
A comparison exercise for data analysis was recently conducted by the Liquid Scintillation Counting Working Group (LSCWG) of the International Committee on Radionuclide Metrology (ICRM) to evaluate the uncertainties involved in applying different analysis methodologies (including computer programs) for the triple-to-double coincidence ratio (TDCR) method. The goals of the comparison were to (1) study differences in calculation results from different TDCR analysis programs, (2) investigate differences in analysis techniques and uncertainty assessment philosophies between laboratories, and (3) study the effect on the calculated activity of not taking the asymmetry of photomultiplier tube (PMT) efficiencies into account. To achieve this, a single set of TDCR data for the pure beta emitter {sup 99}Tc was distributed to the participants, who analyzed the data according to their normal procedures and reported the activity concentration of the {sup 99}Tc solution. The results indicate that the programs presently in use are generally able to calculate the same activity values when the correct input parameters are used, and that not taking PMT asymmetry into account in the calculations can lead to significant errors (0.6% for {sup 99}Tc) in reported results. The comparison also highlighted the need for a more rigorous approach to estimating and reporting uncertainties.
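As a hedged aside, the counting logic behind the TDCR method described above can be illustrated with a deliberately simplified model: if the three PMTs are assumed symmetric, each detecting an event with the same probability p, the triple-to-double ratio fixes p in closed form. The function name and the single-efficiency assumption are illustrative only; real TDCR programs integrate over the beta spectrum and use free-parameter efficiency models.

```python
def tdcr_activity(triple_rate, double_rate):
    """Estimate source activity from TDCR counting data.

    Illustrative symmetric model: if each of the three PMTs detects an
    event with the same probability p, then
        eps_T = p**3                 (triple coincidences)
        eps_D = 3*p**2 - 2*p**3      (logical sum of double coincidences)
    so TDCR = eps_T / eps_D = p / (3 - 2*p), which inverts to
        p = 3*TDCR / (1 + 2*TDCR).
    """
    tdcr = triple_rate / double_rate
    p = 3.0 * tdcr / (1.0 + 2.0 * tdcr)       # per-PMT detection probability
    eps_d = 3.0 * p**2 - 2.0 * p**3           # double-coincidence efficiency
    return double_rate / eps_d                # activity (decays per second)
```

For example, a triple rate of 7290 s⁻¹ against a double rate of 9720 s⁻¹ gives TDCR = 0.75, p = 0.9, and an activity of 10 000 s⁻¹; note that this symmetric inversion is exactly what breaks down when PMT efficiencies are asymmetric.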
Energy Technology Data Exchange (ETDEWEB)
Suter, G.W., II
1993-01-01
One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical
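The two-tier screening logic described in the abstract can be sketched as a small decision function (names, labels, and thresholds here are hypothetical, not taken from the report):

```python
def screen_contaminant(conc, lower_benchmark, upper_benchmark):
    """Classify an ambient concentration against screening benchmarks.

    Exceeding the upper benchmark means the chemical is clearly of
    concern; exceeding only the lower benchmark means it is of concern
    unless other information discounts it; below the lower benchmark it
    is not of concern (assuming the ambient data are adequate).
    """
    if conc >= upper_benchmark:
        return "clearly of concern"
    if conc >= lower_benchmark:
        return "of concern"
    return "not of concern"
```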
Benchmarking for Best Practice
Zairi, Mohamed
1998-01-01
Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. .It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l
HPCG Benchmark Technical Specification
Energy Technology Data Exchange (ETDEWEB)
Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)
2013-10-01
The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
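A minimal serial sketch of the HPCG kernel, under strong simplifying assumptions: a small dense symmetric positive-definite system, the symmetric Gauss-Seidel preconditioner applied as one forward plus one backward sweep, and none of HPCG's sparse data structures, additive Schwarz decomposition, or timing and reporting rules.

```python
def sym_gauss_seidel(A, r):
    """Apply the symmetric Gauss-Seidel preconditioner: one forward and
    one backward sweep on A z = r, starting from z = 0."""
    n = len(r)
    z = [0.0] * n
    for sweep in (range(n), reversed(range(n))):
        for i in sweep:
            s = sum(A[i][j] * z[j] for j in range(n) if j != i)
            z[i] = (r[i] - s) / A[i][i]
    return z

def pcg(A, b, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradient for a dense SPD matrix A."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual b - A x, with x = 0
    z = sym_gauss_seidel(A, r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        z = sym_gauss_seidel(A, r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

The design point HPCG makes is visible even here: unlike HPL's dense factorization, the work is dominated by memory-bound sparse-matrix-vector products and triangular sweeps, which is why HPCG tracks application performance more closely.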
Calculated neutron-induced cross sections for 52Cr from 1 to 20 MeV and comparisons with experiments
International Nuclear Information System (INIS)
Hetrick, D.M.; Fu, C.Y.; Larson, D.C.
1987-01-01
Nuclear model codes were used to compute cross sections for neutron-induced reactions on 52Cr for incident energies from 1 to 20 MeV. The input parameters for the model codes were determined through analysis of experimental data in this energy region. Discussion of the models used, the input data, the resulting calculations, extensive comparisons to measured data, and comparisons to the Evaluated Nuclear Data File (ENDF/B-V) for Cr (MAT 1324) are included in this report. 103 refs., 67 figs., 12 tabs
Calculation methods for SPF for heat pump systems for comparison, system choice and dimensioning
Energy Technology Data Exchange (ETDEWEB)
Nordman, Roger; Andersson, Kajsa; Axell, Monica; Lindahl, Markus
2010-09-15
In this project, results from field measurements of heat pumps have been collected and summarised, and existing calculation methods have been compared and summarised. Analyses have been made of how the field measurements compare to existing calculation models for the heat pump Seasonal Performance Factor (SPF), and of what the deviations may depend on. Recommendations for new calculation models are proposed, which include combined systems (e.g. solar-HP), capacity-controlled heat pumps, and combined DHW and heating operation
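The SPF itself is a simple ratio of seasonal heat output to electricity input; what the compared calculation methods actually differ on is the system boundary (whether circulation pumps, backup heaters, and so on are counted on either side). A minimal sketch for combined heating and DHW operation, with hypothetical argument names:

```python
def spf(space_heat_kwh, dhw_heat_kwh, electricity_in_kwh):
    """Seasonal Performance Factor: total useful heat delivered over a
    season divided by total electricity consumed over the same season.
    Which consumers are inside the boundary (pumps, backup heater) is a
    modelling choice and the main source of divergence between methods."""
    return (space_heat_kwh + dhw_heat_kwh) / electricity_in_kwh
```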
International Nuclear Information System (INIS)
Eastoe, J.; Penfold, J.
1992-07-01
A comparison of the calculation of neutron reflectivity profiles R(Q) by optical matrix methods and a multilayer method based on the kinematic approximation has been made. The latter method provides a fast, simple route to calculating R(Q) and may be used with confidence for systems of total thickness ≤200 Å for a scattering length density difference Δ of the order of 1.0 × 10⁻⁵ Å⁻². For thicker layers the approximation is poor and the optical matrix method must be used for an accurate evaluation of such reflectivity data over a wide Q range. (author)
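For illustration, the exact optical matrix result can be obtained equivalently by Parratt's recursion; the sketch below (with assumed layer and SLD conventions, not taken from the paper) reproduces total reflection below the critical edge, which the kinematic approximation cannot.

```python
import cmath

def reflectivity(q, layers, rho_sub, rho_0=0.0):
    """Specular neutron reflectivity |r(Q)|**2 by Parratt's recursion,
    equivalent to the optical matrix method for stratified media.

    q       : momentum transfer (A^-1)
    layers  : list of (sld, thickness_A) tuples, top to bottom
    rho_sub : substrate scattering length density (A^-2)
    rho_0   : incident-medium SLD (A^-2)
    """
    slds = [rho_0] + [sld for sld, _ in layers] + [rho_sub]
    thick = [0.0] + [t for _, t in layers] + [0.0]
    # normal component of the wavevector in each medium
    kz = [cmath.sqrt((q / 2.0) ** 2 - 4.0 * cmath.pi * (s - rho_0))
          for s in slds]
    r = 0.0 + 0.0j                          # start at the substrate
    for j in range(len(slds) - 2, -1, -1):  # recurse upward to the surface
        f = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1])  # Fresnel coefficient
        phase = cmath.exp(2j * kz[j + 1] * thick[j + 1])
        r = (f + r * phase) / (1.0 + f * r * phase)
    return abs(r) ** 2
```

Below the critical edge kz in the substrate becomes purely imaginary, so the Fresnel coefficient has unit modulus and R = 1; the kinematic approximation instead diverges there, which is one reason it is restricted to thin, weakly contrasted layers.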
International Nuclear Information System (INIS)
Vergnaud, T.; Bourdet, L.; Gonnord, J.; Nimal, J.C.; Champion, G.
1984-01-01
The conception of a reactor building requires large openings in the primary concrete shield for a postulated loss-of-coolant accident. Neutrons escaping through these openings produce dose rates in several parts of the reactor building. Calculations using the ANISN, DOT and, essentially, TRIPOLI-2 codes allow the neutron dose rates to be computed at several places, such as the reactor containment operating floor and the containment annulus. Complementary shields are provided, and the instrumentation is placed in areas where the dose rate is lower. Comparisons between measurements and calculations are presented
International Nuclear Information System (INIS)
Grimes, S.M.; Poppe, C.H.; Wong, C.; Dalton, B.J.
1978-01-01
Spin cutoff parameters for ²⁸Si, ²⁹Si, and ³⁰P have been deduced from angular distribution measurements for (α,n) reactions on ²⁵Mg, ²⁶Mg, and ²⁷Al. Calculations of the spin cutoff parameter which include the effect of two-body interactions are made using the theory of spectral distributions. The results obtained by dividing the space into subspaces and expanding the strength distribution of each in a Gaussian are found to be more reliable than calculating a single Gaussian for the entire space. The comparisons in this study indicate a need for the inclusion of higher moments of the Hamiltonian in the level density expansion
Comparison of measured and calculated doses for narrow MLC defined fields
International Nuclear Information System (INIS)
Lydon, J.; Rozenfeld, A.; Lerch, M.
2002-01-01
Full text: The introduction of Intensity Modulated Radiotherapy (IMRT) has led to the use of narrow fields in the delivery of radiation doses to patients. Such fields are not well characterized by the calculation methods commonly used in radiotherapy treatment planning systems. The accuracy of the dose calculation algorithm must therefore be investigated prior to clinical use. This study looked at symmetrical and asymmetrical 0.1 to 3 cm wide fields delivered with a Varian CL2100C 6 MV photon beam. Measured doses were compared to doses calculated using Pinnacle, the ADAC radiotherapy treatment planning system. Two high-resolution methods of measuring dose were used: a MOSFET detector in a water phantom and radiographic film in a solid-water phantom, with spatial resolutions of 10 and 89 μm, respectively. Dose calculations were performed using the collapsed cone convolution algorithm in Pinnacle with a 0.1 cm dose calculation grid in the MLC direction. The effect of Pinnacle not taking into account the rounded leaf ends was simulated by offsetting the leaves by 0.1 cm in the dose calculation. Agreement between measurement and calculation is good for fields of 1 cm and wider; however, fields of less than 1 cm width can show a significant difference between measurement and calculation
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius
2006-01-01
takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As proof of concepts of the benchmark, the paper covers the application...
Benchmarking and Performance Management
Directory of Open Access Journals (Sweden)
Adrian TANTAU
2010-12-01
The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for managers to continuously improve their company's efficiency and effectiveness, and their need to know the success factors and competitiveness determinants, determine which performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent for operational reasons; hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.
Benchmarking of workplace performance
van der Voordt, Theo; Jensen, Per Anker
2017-01-01
This paper aims to present a process model of value adding corporate real estate and facilities management and to discuss which indicators can be used to measure and benchmark workplace performance.
In order to add value to the organisation, the work environment has to provide value for
Comparison of self-consistent calculations of the static polarizability of atoms and molecules
International Nuclear Information System (INIS)
Moullet, I.; Martins, J.L.
1990-01-01
The static dipole polarizabilities and other ground-state properties of H, H₂, He, Na, and Na₂ are calculated using five different self-consistent schemes: Hartree-Fock, the local spin density approximation, Hartree-Fock plus local density correlation, the self-interaction-corrected local spin density approximation, and Hartree-Fock plus self-interaction-corrected local density correlation. The inclusion of the self-interaction-corrected local spin density approximation in the Hartree-Fock method improves dramatically the calculated dissociation energies of molecules but has a small effect on the calculated polarizabilities. Correcting the local spin density calculations for self-interaction effects improves the calculated polarizability in the cases where the local spin density results are mediocre, and has only a small effect in the cases where the local spin density values are in reasonable agreement with experiment
International Nuclear Information System (INIS)
Blanco, F.; Rosado, J.; Illana, A.; Garcia, G.
2010-01-01
The SCAR and EGAR procedures have been proposed in order to extend to lower energies the applicability of the additivity rule for calculation of electron-molecule total cross sections. Both those approximate treatments arise after considering geometrical screening corrections due to partial overlapping of atoms in the molecule, as seen by the incident electrons. The main features, results and limitations of both treatments are put here in comparison by means of their application to some different sized species.
Directory of Open Access Journals (Sweden)
Hsu M
2011-09-01
David L DeMill, Majid Moshirfar, Marcus C Neuffer, Maylon Hsu (John A Moran Eye Center, University of Utah, Salt Lake City, UT, USA); Shameema Sikder (Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, USA). Background: To compare the average values of the American Society of Cataract and Refractive Surgery (ASCRS) and Ocular MD intraocular lens (IOL) calculators to assess their accuracy in predicting IOL power in patients with prior laser-in-situ keratomileusis (LASIK) or photorefractive keratectomy. Methods: In this retrospective study, data from 21 eyes with previous LASIK or photorefractive keratectomy for myopia and subsequent cataract surgery were used in an IOL calculator comparison. The predicted IOL powers of the Ocular MD SRK/T, Ocular MD Haigis, and ASCRS averages were compared. The Ocular MD average (composed of an average of Ocular MD SRK/T and Ocular MD Haigis) and the all-calculator average (composed of an average of Ocular MD SRK/T, Ocular MD Haigis, and ASCRS) were also compared. Primary outcome measures were mean arithmetic and absolute IOL prediction error, variance in mean arithmetic IOL prediction error, and the percentage of eyes within ±0.50 and ±1.00 D. Results: The Ocular MD SRK/T and Ocular MD Haigis averages produced mean arithmetic IOL prediction errors of 0.57 and –0.61 diopters (D), respectively, which were significantly larger than errors from the ASCRS, Ocular MD, and all-calculator averages (0.11, –0.02, and 0.02 D, respectively, all P < 0.05). There was no statistically significant difference between the methods in absolute IOL prediction error, variance, or the percentage of eyes with outcomes within ±0.50 and ±1.00 D. Conclusion: The ASCRS average was more accurate in predicting IOL power than the Ocular MD SRK/T and Ocular MD Haigis averages alone. Our methods using combinations of these averages which, when compared with the individual averages, showed a trend of decreased mean arithmetic IOL
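The outcome measures in the abstract above are straightforward to compute; the sketch below (with a hypothetical data layout) also shows why averaging calculators can shrink the mean arithmetic (signed) error while leaving the absolute error largely unchanged: opposite biases cancel in the mean but not in the magnitudes.

```python
def prediction_errors(predicted_powers, reference_powers):
    """Signed and absolute IOL prediction error statistics, in diopters.

    predicted_powers : IOL powers predicted by a calculator
    reference_powers : powers that would have given the target refraction
    Returns (mean arithmetic error, mean absolute error,
             fraction of eyes within +/-0.50 D).
    """
    diffs = [p - a for p, a in zip(predicted_powers, reference_powers)]
    n = len(diffs)
    mean_arithmetic = sum(diffs) / n           # signed: biases cancel
    mean_absolute = sum(abs(d) for d in diffs) / n
    within_half = sum(abs(d) <= 0.5 for d in diffs) / n
    return mean_arithmetic, mean_absolute, within_half
```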
International Nuclear Information System (INIS)
Filges, D.; Enke, M.; Galin, J.
2001-01-01
Interest in energetic proton-induced production of neutrons has recently revived with the inception of new projects for target stations of intense spallation neutron sources (like the planned European Spallation Source, ESS), accelerator-driven nuclear reactors, nuclear waste transmutation, and radioactive beams. Here we verify the predictive power of transport codes currently on the market by confronting observables and quantities of interest with an exhaustive matrix of benchmark data, essentially coming from two experiments performed at the Cooler Synchrotron COSY at Juelich. Program packages like HERMES, LCS or MCNPX reproduce well the reaction cross sections, hadronic interaction lengths, averaged neutron multiplicities and neutron multiplicity distributions in thick and thin targets for a wide spectrum of incident proton energies, geometrical shapes and materials of the target. While the observables related to the energy deposition in thick targets are also in good agreement with the model predictions, the production cross-section measurements for light charged particles on thin targets point out that problems exist within these models. (author)
Benchmarking of nuclear economics tools
International Nuclear Information System (INIS)
Moore, Megan; Korinny, Andriy; Shropshire, David; Sadhankar, Ramesh
2017-01-01
Highlights: • INPRO and GIF economic tools exhibited good alignment in total capital cost estimation. • Subtle discrepancies in the cost result from differences in financing and the fuel cycle assumptions. • A common set of assumptions was found to reduce the discrepancies to 1% or less. • Opportunities for harmonisation of economic tools exist. - Abstract: Benchmarking of the economics methodologies developed by the Generation IV International Forum (GIF) and the International Atomic Energy Agency’s International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO) was performed for three Generation IV nuclear energy systems. The Economic Modeling Working Group of GIF developed an Excel-based spreadsheet package, G4ECONS (Generation 4 Excel-based Calculation Of Nuclear Systems), to calculate the total capital investment cost (TCIC) and the levelised unit energy cost (LUEC). G4ECONS is sufficiently generic in the sense that it can accept the types of projected input, performance and cost data that are expected to become available for Generation IV systems through various development phases, and that it can model both open and closed fuel cycles. The Nuclear Energy System Assessment (NESA) Economic Support Tool (NEST) was developed to enable an economic analysis using the INPRO methodology to easily calculate outputs including the TCIC, LUEC and other financial figures of merit, including internal rate of return, return on investment and net present value. NEST is also Excel based and can be used to evaluate nuclear reactor systems using the open fuel cycle, MOX (mixed oxide) fuel recycling and closed cycles. A Super Critical Water-cooled Reactor system with an open fuel cycle and two Fast Reactor systems, one with a break-even fuel cycle and another with a burner fuel cycle, were selected for the benchmarking exercise. Published data on capital and operating costs were used for economics analyses using G4ECONS and NEST tools. Both G4ECONS and
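Both tools ultimately reduce to a levelised-cost calculation: annualize the capital, add annual operating and fuel costs, and divide by annual electricity generation. A minimal sketch of that arithmetic, with hypothetical parameter names and none of the fuel-cycle detail the real tools model:

```python
def crf(rate, years):
    """Capital recovery factor: converts an up-front capital cost into an
    equivalent constant annual payment at the given discount rate."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def luec(tcic, om_per_year, fuel_per_year, rate, lifetime_years,
         capacity_mwe, capacity_factor):
    """Levelised unit energy cost ($/MWh): annualized total capital
    investment cost (TCIC) plus annual O&M and fuel, divided by the
    annual electricity generation."""
    annual_capital = tcic * crf(rate, lifetime_years)
    annual_mwh = capacity_mwe * 8760.0 * capacity_factor
    return (annual_capital + om_per_year + fuel_per_year) / annual_mwh
```

The benchmarking finding above, that discrepancies trace to financing and fuel-cycle assumptions, corresponds here to the choice of rate, lifetime_years, and fuel_per_year: holding those common collapses the spread.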
Dittrich, Birger; Lübben, Jens; Mebs, Stefan; Wagner, Armin; Luger, Peter; Flaig, Ralf
2017-04-03
Amino acid structures are an ideal test set for method-development studies in crystallography. High-resolution X-ray diffraction data for eight previously studied genetically encoded amino acids are provided, complemented by a non-standard amino acid. Structures were re-investigated to study a widely applicable treatment that permits accurate X-H bond lengths to hydrogen atoms to be obtained: this treatment combines refinement of positional hydrogen-atom parameters with aspherical scattering factors with constrained "TLS+INV" estimated hydrogen anisotropic displacement parameters (H-ADPs). Tabulated invariom scattering factors allow rapid modeling without further computations, and unconstrained Hirshfeld atom refinement provides a computationally demanding alternative when database entries are missing. Both should incorporate estimated H-ADPs, as free refinement frequently leads to over-parameterization and non-positive definite H-ADPs irrespective of the aspherical scattering model used. Using estimated H-ADPs, both methods yield accurate and precise X-H distances in best quantitative agreement with neutron diffraction data (available for five of the test-set molecules). This work thus solves the last remaining problem to obtain such results more frequently. Density functional theoretical QM/MM computations are able to play the role of an alternative benchmark to neutron diffraction. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
DEFF Research Database (Denmark)
Taghizadeh, Alireza; Mørk, Jesper; Chung, Il-Sug
2014-01-01
Four different numerical methods for calculating the quality factor and resonance wavelength of a nano or micro photonic cavity are compared. Good agreement was found for a wide range of quality factors. Advantages and limitations of the different methods are discussed.
Energy Technology Data Exchange (ETDEWEB)
Andrade, Maria Celia Ramos; Ludwig, Gerson Otto [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Lab. Associado de Plasma]. E-mail: mcr@plasma.inpe.br
2004-07-01
Different bootstrap current formulations are implemented in a self-consistent equilibrium calculation obtained from a direct variational technique in fixed boundary tokamak plasmas. The total plasma current profile is supposed to have contributions of the diamagnetic, Pfirsch-Schlueter, and the neoclassical Ohmic and bootstrap currents. The Ohmic component is calculated in terms of the neoclassical conductivity, compared here among different expressions, and the loop voltage determined consistently in order to give the prescribed value of the total plasma current. A comparison among several bootstrap current models for different viscosity coefficient calculations and distinct forms for the Coulomb collision operator is performed for a variety of plasma parameters of the small aspect ratio tokamak ETE (Experimento Tokamak Esferico) at the Associated Plasma Laboratory of INPE, in Brazil. We have performed this comparison for the ETE tokamak so that the differences among all the models reported here, mainly regarding plasma collisionality, can be better illustrated. The dependence of the bootstrap current ratio upon some plasma parameters in the frame of the self-consistent calculation is also analysed. We emphasize in this paper what we call the Hirshman-Sigmar/Shaing model, valid for all collisionality regimes and aspect ratios, and a fitted formulation proposed by Sauter, which has the same range of validity but is faster to compute than the previous one. The advantages or possible limitations of all these different formulations for the bootstrap current estimate are analysed throughout this work. (author)
Performance Targets and External Benchmarking
DEFF Research Database (Denmark)
Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.
Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...
Shielding benchmark tests of JENDL-3
International Nuclear Information System (INIS)
Kawai, Masayoshi; Hasegawa, Akira; Ueki, Kohtaro; Yamano, Naoki; Sasaki, Kenji; Matsumoto, Yoshihiro; Takemura, Morio; Ohtani, Nobuo; Sakurai, Kiyoshi.
1994-03-01
The integral test of neutron cross sections for major shielding materials in JENDL-3 has been performed by analyzing various shielding benchmark experiments. For the fission-like neutron source problem, the following experiments are analyzed: (1) ORNL Broomstick experiments for oxygen, iron and sodium, (2) ASPIS deep penetration experiments for iron, (3) ORNL neutron transmission experiments for iron, stainless steel, sodium and graphite, (4) KfK leakage spectrum measurements from iron spheres, (5) RPI angular neutron spectrum measurements in a graphite block. For D-T neutron source problem, the following two experiments are analyzed: (6) LLNL leakage spectrum measurements from spheres of iron and graphite, and (7) JAERI-FNS angular neutron spectrum measurements on beryllium and graphite slabs. Analyses have been performed using the radiation transport codes: ANISN(1D Sn), DIAC(1D Sn), DOT3.5(2D Sn) and MCNP(3D point Monte Carlo). The group cross sections for Sn transport calculations are generated with the code systems PROF-GROUCH-G/B and RADHEAT-V4. The point-wise cross sections for MCNP are produced with NJOY. For comparison, the analyses with JENDL-2 and ENDF/B-IV have been also carried out. The calculations using JENDL-3 show overall agreement with the experimental data as well as those with ENDF/B-IV. Particularly, JENDL-3 gives better results than JENDL-2 and ENDF/B-IV for sodium. It has been concluded that JENDL-3 is very applicable for fission and fusion reactor shielding analyses. (author)
On the Surface Free Energy of PVC/EVA Polymer Blends: Comparison of Different Calculation Methods.
Michalski; Hardy; Saramago
1998-12-01
The surface free energy of polymeric films of polyvinylchloride (PVC) + poly(ethylene-co-vinylacetate) (EVA) blends was calculated using the van Oss treatment (Lifshitz and electron donor-electron acceptor components of surface free energy) and the Owens-Wendt treatment (dispersive and nondispersive components of surface free energy). Surface free energy results were found to depend greatly on the calculation method and on the number of standard liquids used for contact angle measurements. The nondispersive/donor-acceptor surface free energy component and the total surface free energy of the polymeric films were always higher when the van Oss treatment was used than with the Owens-Wendt treatment. Conversely, both methods led to similar apolar/Lifshitz components. All the calculation methods were in good agreement for the surface free energy of PVC; however, a discrepancy between the methods arose as EVA content in the blends increased. It seems that there is not yet a definite solution for the calculation of solid surface free energy. Further developments of existing models are needed in order to gain consistency when calculating this important physicochemical quantity. Copyright 1998 Academic Press.
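The Owens-Wendt treatment mentioned above reduces to a two-equation linear system once contact angles for two probe liquids are known: (1 + cos θ)·γ_L = 2(√(γ_s^d·γ_L^d) + √(γ_s^p·γ_L^p)), linear in √γ_s^d and √γ_s^p. The sketch below is generic; the probe-liquid components used in the example (water, diiodomethane) are commonly tabulated literature values, not values from this paper.

```python
import math

def owens_wendt(liquids):
    """Dispersive and polar (nondispersive) parts of a solid's surface
    free energy from contact angles of exactly two probe liquids.

    liquids: two tuples (theta_deg, gamma_L, gamma_L_disp, gamma_L_polar),
             energies in mJ/m^2.
    Solves a*x + b*y = c with x = sqrt(gd_s), y = sqrt(gp_s).
    """
    rows = []
    for theta, g, gd, gp in liquids:
        c = (1.0 + math.cos(math.radians(theta))) * g / 2.0
        rows.append((math.sqrt(gd), math.sqrt(gp), c))
    (a1, b1, c1), (a2, b2, c2) = rows
    det = a1 * b2 - a2 * b1               # Cramer's rule for the 2x2 system
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x * x, y * y                   # (dispersive, polar) in mJ/m^2
```

With more than two liquids the system is overdetermined and is usually solved by least squares, which is precisely where the abstract's observation applies: the result depends on how many, and which, standard liquids are used.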
Choueri, R B; Cesar, A; Abessa, D M S; Torres, R J; Morais, R D; Riba, I; Pereira, C D S; Nascimento, M R L; Mozeto, A A; DelValls, T A
2009-10-15
We aimed to develop site-specific sediment quality guidelines (SQGs) for two estuarine and port zones in Southeastern Brazil (Santos Estuarine System and Paranaguá Estuarine System) and three in Southern Spain (Ría of Huelva, Bay of Cádiz, and Bay of Algeciras), and compare these values against national and traditionally used international benchmark values. Site-specific SQGs were derived based on sediment physical-chemical, toxicological, and benthic community data integrated through multivariate analysis. This technique allowed the identification of chemicals of concern and the establishment of effects ranges correlated with individual concentrations of contaminants for each site of study. The results revealed that sediments from Santos channel, as well as inner portions of the SES, are considered highly polluted (exceeding SQGs-high) by metals, PAHs and PCBs. High pollution by PAHs and some metals was found in São Vicente channel. In PES, sediments from inner portions (proximities of the Ponta do Félix port's terminal and the Port of Paranaguá) are highly polluted by metals and PAHs, including one zone inside the limits of an environmental protection area. In the Gulf of Cádiz, SQG exceedances were found in Ria of Huelva (all analysed metals and PAHs), in the surroundings of the Port of Cádiz (Bay of Cádiz) (metals), and in Bay of Algeciras (Ni and PAHs). The site-specific SQGs derived in this study are more restrictive than the national SQGs applied in Brazil and Spain, as well as international guidelines. This finding confirms the importance of the development of site-specific SQGs to support the characterisation of sediments and dredged material. The use of the same methodology to derive SQGs in Brazilian and Spanish port zones confirmed the applicability of this technique with an international scope and provided a harmonised methodology for site-specific SQGs derivation.
Directory of Open Access Journals (Sweden)
ZA Memish
2015-01-01
Background: Exposure to blood-borne pathogens from needle-stick and sharps injuries continues to pose a significant risk to health care workers. These events are of concern because of the risk of transmitting blood-borne diseases such as hepatitis B virus, hepatitis C virus, and the human immunodeficiency virus. Objective: To benchmark different risk factors associated with needle-stick incidents among health care workers in the Ministry of Health hospitals in the Kingdom of Saudi Arabia compared to the US hospitals participating in the Exposure Prevention Information Network (EPINet™). Methods: Prospective surveillance of needle-stick and sharps incidents was carried out during the year 2012 using EPINet™ ver. 1.5, which provides a uniform needle-stick and sharps injury report form. Results: The annual percutaneous incidents (PIs) rate per 100 occupied beds was 3.2 at the studied MOH hospitals. Nurses were the job category most affected by PIs (59.4%). Most PIs happened in patients' wards in the Ministry of Health hospitals (34.6%). Disposable syringes were the most common cause of PIs (47.2%). Most PIs occurred during use of the syringes (36.4%). Conclusion: Among health care workers, nurses and physicians appear especially at risk of exposure to PIs. Important risk factors include working in patient rooms, using disposable syringes, and using devices without safety features. Preventive strategies such as continuous training of health care workers with special emphasis on nurses and physicians, encouragement of reporting of such incidents, observation of sharps handling and use, and implementation of safety devices are warranted.
Energy Technology Data Exchange (ETDEWEB)
2017-08-01
AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the BoomerAMG solver in the hypre library, a large linear-solvers library being developed in the Center for Applied Scientific Computing (CASC) at LLNL, and is very similar to the AMG2013 benchmark with additional optimizations. The driver provided in the benchmark can build various test problems. The default problem is a Laplace-type problem with a 27-point stencil, which can be scaled up and is designed to solve a very large problem. A second problem simulates a time-dependent problem, in which successively smaller systems are solved.
International Nuclear Information System (INIS)
Smith, M.A.; Tsoulfanidis, N.; Lewis, E.E.; Palmiotti, G.
2001-01-01
Increasing computer power is allowing higher-order angular approximations to replace diffusion theory methods in whole-core reactor physics computations. Spherical harmonic (Pn), simplified spherical harmonic (SPn), and discrete ordinates (Sn) methods are capable of performing such calculations in three dimensions. Most advantages of such transport methods are gained by eliminating fuel assembly homogenization, thus allowing pin powers to be calculated directly. A further step, currently under investigation, is the elimination of spatial homogenization at the pin cell level as well. The fuel-moderator interfaces may be treated explicitly in Pn, Sn, or SPn calculations by applying triangular finite elements (FEM) to the spatial variables. Early results using a modified form of the VARIANT code, however, indicate that without pin cell homogenization, high-order angular approximations may be required to represent the lattice effects accurately within the whole-core calculations. To examine these lattice effects further, a modified form of VARIANT was created to use the spatial triangular finite element scheme. The program was set up to treat a single heterogeneous pin cell coupled with Pn, SPn, or Sn angular approximations. Additional modifications replaced the nodal interface approximations with exact reflected boundary conditions to increase the accuracy of the results. Several pressurized water reactor pin cells, taken from a previous benchmark specification, were examined. However, the results shown here focus only on the most severe case, i.e., a pin cell containing 8.7% mixed-oxide enriched fuel. The DRAGON collision probability code was used to collapse a 69-group cross-section library to a more manageable 7-group library that contained cross sections for the fuel-cladding mixture and for the water. Eigenvalue results are shown in Figs. 1 and 2 using the modified VARIANT code with Pn, SPn, and Sn angular approximations. A 7-group MCNP Monte
Directory of Open Access Journals (Sweden)
Sornek Krzysztof
2016-01-01
Full Text Available The proper design of renewable-energy-based systems is important to ensure their efficient and safe operation. The aim of this paper is to compare the results obtained from traditional static calculations with the results of dynamic simulations. To this end, simulations of a solar water heating (SWH) system, designed for a typical residential building, were conducted in TRNSYS (Transient System Simulation Tool). The calculations determined the heat generation of the discussed system and estimated the efficiency of the considered installation. The results were compared with those from another available tool based on static calculations. It may be concluded that using dynamic simulations at the design stage of renewable-energy-based systems may help to avoid many operational problems (including low efficiency, overheating, etc.) and supports the safe operation of such installations.
Comparison of statistical evaluation of criticality calculations for reactors VENUS-F and ALFRED
Directory of Open Access Journals (Sweden)
Janczyszyn Jerzy
2017-01-01
Full Text Available Limitations on the correct evaluation of keff in Monte Carlo calculations, claimed in the literature, need to be addressed more thoroughly, apart from the nuclear data uncertainty. The respective doubts concern: the proper number of discarded initial cycles, the sufficient number of neutrons per cycle, and the recognition and treatment of the keff bias. Calculations were performed to provide more information on these points using the MCB code, solely for fast cores. We present the applied methods and results, such as: results for the stability of the variance, the relation between the standard deviation reported by MCNP and that obtained from the dispersion of multiple independent keff values, and second-order standard deviations obtained from different numbers of grouped results. All results obtained for numbers of discarded initial cycles from 0 to 3000 were analysed, leading to interesting conclusions.
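The comparison described in this abstract — the code-reported standard deviation versus the spread of independent runs, and second-order deviations from grouped results — can be illustrated with a small sketch. This is not the MCB analysis itself; the k-eff values, their spread, and the group sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical: 100 independent Monte Carlo runs, each reporting a k-eff
# estimate; here faked as draws around a true k-eff of 1.00250.
k_true, sigma_true = 1.00250, 0.00030
k_runs = rng.normal(k_true, sigma_true, size=100)

# Standard deviation estimated from the dispersion of independent runs.
sigma_disp = k_runs.std(ddof=1)

# Second-order standard deviation: group the runs, take group means,
# and scale the spread of those means back to a single-run sigma.
groups = k_runs.reshape(10, 10)          # 10 groups of 10 runs
group_means = groups.mean(axis=1)
sigma_grouped = group_means.std(ddof=1) * np.sqrt(10)

print(f"dispersion sigma: {sigma_disp:.5f}")
print(f"grouped sigma:    {sigma_grouped:.5f}")
```

For truly independent samples the two estimates agree; in a real criticality calculation, inter-cycle correlation makes the code-reported sigma smaller than the dispersion-based one, which is one of the doubts the paper addresses.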
Comparison of Calculated and Experimental Results for a Boiling/Condensing Experimental Facility
Energy Technology Data Exchange (ETDEWEB)
Carbajo, Juan J [ORNL; McDuffee, Joel Lee [ORNL; Felde, David K [ORNL
2016-01-01
A new experimental facility for materials irradiation and testing at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) is being developed. Details of this facility have been presented before [1, 2]. A prototype of this facility, the Thermo-Syphon Test Loop (TSTL) has been built and experimental data have been obtained and analyzed [3, 4]. Pre-test calculations for this facility with the RELAP5-3D code [5] have been presented previously [6] as well as other calculations [7, 8] with the TRACE code [9]. The results of both codes were very different [7]. RELAP5-3D predicted much higher pressures and temperatures than TRACE. This paper compares calculated results with the TSTL experimental data.
RESRAD for Radiological Risk Assessment. Comparison with EPA CERCLA Tools - PRG and DCC Calculators
Energy Technology Data Exchange (ETDEWEB)
Yu, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, J. -J. [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, S. [Argonne National Lab. (ANL), Argonne, IL (United States)
2015-07-01
The purpose of this report is two-fold. First, the risk assessment methodology of both RESRAD and the EPA's tools is reviewed. This includes a review of the EPA's justification for using a dose-to-risk conversion factor to reduce the dose-based protective ARAR from 15 to 12 mrem/yr. Second, the models and parameters used in RESRAD and the EPA PRG and DCC Calculators are compared in detail, and the results are summarized and discussed. Although there are suites of software tools in the RESRAD family of codes and the EPA Calculators, the scope of this report is limited to the RESRAD (onsite) code for soil contamination and the EPA's PRG and DCC Calculators, also for soil contamination.
Comparison of computer codes for calculating dynamic loads in wind turbines
Spera, D. A.
1978-01-01
The development of computer codes for calculating dynamic loads in horizontal-axis wind turbines was examined, and a brief overview of each code is given. The performance of the individual codes was compared against two sets of test data measured on a 100 kW Mod-0 wind turbine. All codes are aeroelastic and include loads of gravitational, inertial, and aerodynamic origin.
Timothy G. Wade; James D. Wickham; Maliha S. Nash; Anne C. Neale; Kurt H. Riitters; K. Bruce Jones
2003-01-01
GIS-based measurements that combine native raster and native vector data are commonly used in environmental assessments. Most of these measurements can be calculated using either raster or vector data formats and processing methods. Raster processes are more commonly used because they can be significantly faster computationally...
R. van Mastrigt (Ron); D.J. Griffiths (Derek)
1987-01-01
textabstractParameters describing the contractility of the urinary bladder can be calculated from both isometric contractions and pressure-flow studies. The first method has the advantage of making very little demand either on the patient or on the urodynamicist, and the disadvantage of yielding a
International comparison calculations for a BWR lattice with adjacent gadolinium pins
International Nuclear Information System (INIS)
Maeder, C.; Wydler, P.
1984-09-01
The results of burnup calculations for a simplified BWR fuel element with two adjacent gadolinium rods are presented and discussed. Ten complete solutions were contributed by Denmark, France, Italy (3), Japan (3), Switzerland and the UK. Partial results obtained from Poland and the USA are included in an Appendix. (Auth.)
Comparison of Two Prostate Cancer Risk Calculators that Include the Prostate Health Index
M.J. Roobol-Bouts (Monique); M.M. Vedder (Moniek); D. Nieboer (Daan); A. Houlgatte (Alain); S. Vincendeau (Sébastien); M. Lazzeri (Massimo); G. Guazzoni (Giorgio); C. Stephan (Carsten); A. Semjonow (Axel); A. Haese (Alexander); M. Graefen (Markus); E.W. Steyerberg (Ewout)
2015-01-01
textabstractBackground: Risk prediction models for prostate cancer (PCa) have become important tools in reducing unnecessary prostate biopsies. The Prostate Health Index (PHI) may increase the predictive accuracy of such models. Objectives: To compare two PCa risk calculators (RCs) that include PHI.
Comparison of risk calculation approaches in a screening programme for Down syndrome
Koster, Maria P. H.; Heetkamp, Kirsten M.; de Miranda, Esteriek; Schielen, Peter C. J. I.
2012-01-01
In the Netherlands, both the LifeCycle Elipse (LC) and the Astraia software package are used to calculate the risk of having a child with Down syndrome. Therefore, pregnant women can be presented with dissimilar risks. In this study the conformity between these risks before and after harmonization
Directory of Open Access Journals (Sweden)
Seied R Mahdavi
2012-01-01
Full Text Available Aims: The objective of this study is to evaluate the accuracy of a treatment planning system (TPS) for calculating dose distribution parameters in conformal fields (CFs). Dosimetric parameters of CFs were compared between measurement, Monte Carlo simulation (MCNP4C), and TPS calculation. Materials and Methods: A field-analyzer water phantom was used for obtaining percentage depth dose (PDD) curves and beam profiles (BPs) of different conformal fields. MCNP4C was used to model the conformal-field dose specification factors and the head of the Varian 2100C/D linear accelerator. Results: The distance to agreement (DTA) and dose difference (DD) of our findings were well within the acceptance criteria of 3 mm and 3%, respectively. Conclusions: This study shows that a TPS using the equivalent tissue-air ratio calculation method is still adequate for dose prediction in the non-small conformal fields normally used in prostate radiotherapy. It was also shown that, given the close agreement among Monte Carlo simulation, measurements, and the TPS, Monte Carlo can be further confirmed for implementation and dose distribution calculation in non-standard and complex conformal irradiation fields for treatment planning systems.
Several flux-calculation (FC) schemes are available for determining soil-to-atmosphere emissions of nitrous oxide (N2O) and other trace gases using data from non-steady-state flux chambers. Recently developed methods claim to provide more accuracy in estimating the true pre-deployment flux (f0) comp...
Energy Technology Data Exchange (ETDEWEB)
Lee, Yoon Hee; Cho, Nam Zin [KAERI, Daejeon (Korea, Republic of)
2016-05-15
The code gives inaccurate results for nuclides important to source term analysis, e.g., Sr-90, Ba-137m, Cs-137, etc. A Krylov subspace method was suggested by Yamamoto et al. The method is based on the projection of the solution space of the Bateman equation onto a lower-dimensional Krylov subspace. It shows good accuracy in detailed burnup chain calculations if the dimension of the Krylov subspace is high enough. In this paper, the two-block decomposition (TBD) method and the Chebyshev rational approximation method (CRAM) are compared in terms of accuracy and computing time in depletion calculations. In the two-block decomposition method, the system of Bateman equations is decomposed into short- and long-lived blocks according to the magnitude of the effective decay constants. The short-lived block is calculated by the general Bateman solution and the importance concept. A matrix exponential with a smaller norm is used in the long-lived block. In the Chebyshev rational approximation, there is no decomposition of the Bateman equation system, and the accuracy of the calculation is determined by the order of expansion in the partial fraction decomposition of the rational form. The coefficients in the partial fraction decomposition are determined by a Remez-type algorithm.
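Neither TBD nor CRAM is reproduced here, but the core problem both methods address — evaluating a matrix exponential acting on a nuclide density vector — can be sketched with a generic Padé-based expm on a hypothetical three-nuclide chain. The decay constants and densities below are invented; a real burnup matrix is large and stiff, which is where CRAM's rational approximation or TBD's block splitting pays off:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical three-nuclide decay chain A -> B -> C with decay
# constants lam (1/s); burnup codes add transmutation terms to the
# same matrix structure.
lam = np.array([1.0e-3, 5.0e-4, 0.0])
A = np.array([
    [-lam[0],  0.0,    0.0],
    [ lam[0], -lam[1], 0.0],
    [ 0.0,     lam[1], 0.0],
])

n0 = np.array([1.0e10, 0.0, 0.0])   # initial number densities
t = 3600.0                          # one hour

# Matrix-exponential solution (Pade approximation here; CRAM replaces
# this with a rational approximation tailored to decay-type spectra).
n_expm = expm(A * t) @ n0

# Analytical Bateman solution for the two leading chain members.
nA = n0[0] * np.exp(-lam[0] * t)
nB = n0[0] * lam[0] / (lam[1] - lam[0]) * (np.exp(-lam[0]*t) - np.exp(-lam[1]*t))

print(n_expm[:2], nA, nB)
```

Because the columns of the burnup matrix sum to zero for a pure decay chain, the total number of atoms is conserved, which is a convenient sanity check on any matrix-exponential scheme.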
Comparison of calculation methods. PME and LTA; Vergelijking methoden energiebesparing. PME en MJA
Energy Technology Data Exchange (ETDEWEB)
Verdonk, M. [Planbureau voor de Leefomgeving PBL, Bilthoven (Netherlands); Boonekamp, P. [ECN Beleidsstudies, Petten (Netherlands)
2009-11-15
Various calculation methods are used to calculate the effects of energy conservation policy. This memo provides insight into the main differences between two frequently used methods, i.e., the second Long-Term Agreements on Energy Efficiency (LTA2) and the Protocol Monitoring Energy Saving (PME). These insights enable better alignment between PME and the successor to LTA2 (LTA3). The memo starts with a brief characterization of the LTAs, the corresponding calculation methods, and the PME approach. Next, the most striking differences are discussed, including the quantitative effect of the two calculation methods.
International Nuclear Information System (INIS)
Lee, Yoon Hee; Cho, Nam Zin
2016-01-01
The code gives inaccurate results for nuclides important to source term analysis, e.g., Sr-90, Ba-137m, Cs-137, etc. A Krylov subspace method was suggested by Yamamoto et al. The method is based on the projection of the solution space of the Bateman equation onto a lower-dimensional Krylov subspace. It shows good accuracy in detailed burnup chain calculations if the dimension of the Krylov subspace is high enough. In this paper, the two-block decomposition (TBD) method and the Chebyshev rational approximation method (CRAM) are compared in terms of accuracy and computing time in depletion calculations. In the two-block decomposition method, the system of Bateman equations is decomposed into short- and long-lived blocks according to the magnitude of the effective decay constants. The short-lived block is calculated by the general Bateman solution and the importance concept. A matrix exponential with a smaller norm is used in the long-lived block. In the Chebyshev rational approximation, there is no decomposition of the Bateman equation system, and the accuracy of the calculation is determined by the order of expansion in the partial fraction decomposition of the rational form. The coefficients in the partial fraction decomposition are determined by a Remez-type algorithm.
Comparison of three jet events to predictions from a next-to-leading order calculation
Energy Technology Data Exchange (ETDEWEB)
Brandl, Alexander [Univ. of New Mexico, Albuquerque, NM (United States)
2002-01-01
The properties of three-jet events in data of integrated luminosity 86±4 pb^{-1} from CDF Run 1b and with total transverse energy greater than 175 GeV have been analyzed and compared to predictions from a next-to-leading order perturbative QCD calculation.
Marchant, T. E.; Joshi, K. D.; Moore, C. J.
2018-03-01
Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).
Benchmarking Cloud Resources for HEP
Alef, M.; Cordeiro, C.; De Salvo, A.; Di Girolamo, A.; Field, L.; Giordano, D.; Guerri, M.; Schiavi, F. C.; Wiebalck, A.
2017-10-01
In a commercial cloud environment, exhaustive resource profiling is beneficial to cope with the intrinsic variability of the virtualised environment, allowing to promptly identify performance degradation. In the context of its commercial cloud initiatives, CERN has acquired extensive experience in benchmarking commercial cloud resources. Ultimately, this activity provides information on the actual delivered performance of invoiced resources. In this report we discuss the experience acquired and the results collected using several fast benchmark applications adopted by the HEP community. These benchmarks span from open-source benchmarks to specific user applications and synthetic benchmarks. The workflow put in place to collect and analyse performance metrics is also described.
Garg, Naveen; Muduli, Subrat K; Kapoor, Aditya; Tewari, Satyendra; Kumar, Sudeep; Khanna, Roopali; Goel, Pravin Kumar
The accuracy of various 10-year cardiovascular disease (CVD) risk calculators in Indians may not be the same as in other populations. The present study was conducted to compare various calculators for CVD risk assessment and statin eligibility according to different guidelines. 1110 consecutive patients who presented after their first myocardial infarction were included. Their CVD risk was calculated using the Framingham Risk Score - Coronary Heart Disease (FRS-CHD), Framingham Risk Score - Cardiovascular Disease (FRS-CVD), QRISK2, Joint British Societies risk calculator 3 (JBS3), American College of Cardiology/American Heart Association (ACC/AHA) atherosclerotic cardiovascular disease (ASCVD) calculator, and WHO risk charts, assuming that they had presented one day before the cardiac event for risk assessment. Eligibility for statin use was also examined using the ACC/AHA, NICE, and Canadian guidelines. The FRS-CVD model performed best, identifying the highest proportion of patients (51.9%) as being at high CVD risk, while the WHO and ASCVD calculators performed worst (only 16.2% and 28.3% of patients, respectively, were stratified as high CVD risk), taking 20% as the cut-off for the high-risk definition. QRISK2, JBS3, and FRS-CHD performed intermediately. Using the NICE, ACC/AHA, and Canadian guidelines, 76%, 69%, and 44.6% of patients, respectively, were found to be eligible for statin use. FRS-CVD appears to be the most useful for CVD risk assessment in Indians, but the difference may arise because FRS-CVD estimates risk for several additional outcomes compared with the other risk scores. For statin eligibility, however, use of the NICE guideline is the most appropriate. Copyright © 2017 Cardiological Society of India. Published by Elsevier B.V. All rights reserved.
The comparison of MCNP perturbation technique with MCNP difference method in critical calculation
International Nuclear Information System (INIS)
Liu Bin; Lv Xuefeng; Zhao Wei; Wang Kai; Tu Jing; Ouyang Xiaoping
2010-01-01
For a nuclear fission system, we calculated the Δkeff arising from changes in system material composition by two different approaches: the MCNP perturbation technique and the MCNP difference method. For every material composition change, we made four different runs, each with a different number of cycles or a different number of neutrons per cycle, and then compared the two Δkeff values obtained by the two approaches. When a material composition change in a particular cell is small compared to the material composition of the whole nuclear fission system, i.e., when it can be treated as a small perturbation, the Δkeff results obtained from the MCNP perturbation technique are much quicker, more efficient, and more reliable than those from the MCNP difference method. When the material composition change in a particular cell is significant compared to the material composition of the whole system, both the MCNP perturbation technique and the MCNP difference method give satisfactory results; however, for runs with the same number of cycles and the same number of neutrons per cycle, the results obtained from the MCNP perturbation technique are systematically smaller than those obtained from the MCNP difference method. To further confirm our MCNP4C results, we ran the exact same input file in MCNP5; the MCNP5 results were the same as those from MCNP4C. Caution is needed when using the MCNP perturbation technique to calculate Δkeff when the material composition change is large compared to the material composition of the whole system, even if the change still meets the criteria of the MCNP perturbation technique.
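Why the difference method struggles for small perturbations follows from simple error propagation: the statistical uncertainties of the two independent criticality runs add in quadrature, so a small Δkeff can drown in the combined noise. A hedged numeric sketch (the 20 pcm per-run sigma and the 10 pcm effect are invented values, not taken from the paper):

```python
import math

# Hypothetical: two independent criticality runs, each reporting a
# one-sigma uncertainty of 20 pcm, used to estimate a small delta-k.
sigma_run = 20.0e-5          # 20 pcm per run
delta_k_true = 10.0e-5       # 10 pcm composition effect

# Difference method: independent-run uncertainties add in quadrature.
sigma_diff = math.sqrt(2.0) * sigma_run

# Signal-to-noise ratio: a 10 pcm effect is invisible to the
# difference method at this level of statistics.
snr = delta_k_true / sigma_diff
print(f"sigma(delta-k) = {sigma_diff*1e5:.1f} pcm, S/N = {snr:.2f}")
```

Halving sigma_diff requires roughly four times the histories in each run, which is why a correlated perturbation estimate is so much cheaper for small composition changes.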
Energy Technology Data Exchange (ETDEWEB)
New, Joshua Ryan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Levinson, Ronnen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Huang, Yu [White Box Technologies, Salt Lake City, UT (United States); Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, William A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mellot, Joe [The Garland Company, Cleveland, OH (United States); Childs, Kenneth W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kriner, Scott [Green Metal Consulting, Inc., Macungie, PA (United States)
2014-06-01
The Roof Savings Calculator (RSC) was developed through collaborations among Oak Ridge National Laboratory (ORNL), White Box Technologies, Lawrence Berkeley National Laboratory (LBNL), and the Environmental Protection Agency in the context of a California Energy Commission Public Interest Energy Research project to make cool-color roofing materials a market reality. The RSC website and a simulation engine validated against demonstration homes were developed to replace the liberal DOE Cool Roof Calculator and the conservative EPA Energy Star Roofing Calculator, which reported different roof savings estimates. A preliminary analysis arrived at a tentative explanation for why RSC results differed from previous LBNL studies and provided guidance for future analysis in the comparison of four simulation programs (doe2attic, DOE-2.1E, EnergyPlus, and MicroPas), including heat exchange between the attic surfaces (principally the roof and ceiling) and the resulting heat flows through the ceiling to the building below. The results were consolidated in an ORNL technical report, ORNL/TM-2013/501. This report is an in-depth inter-comparison of four programs with detailed measured data from an experimental facility operated by ORNL in South Carolina in which different segments of the attic had different roof and attic systems.
Comparison of Monte Carlo method and deterministic method for neutron transport calculation
International Nuclear Information System (INIS)
Mori, Takamasa; Nakagawa, Masayuki
1987-01-01
The report outlines the major features of the Monte Carlo method by citing various applications of the method and the techniques used in Monte Carlo codes. Major areas of application include the analysis of measurements on fast critical assemblies, nuclear fusion reactor neutronics analysis, criticality safety analysis, evaluation by the VIM code, and shielding calculations. Major techniques used in Monte Carlo codes include the random walk method, geometry representation (combinatorial geometry; 1st-, 2nd-, and 4th-degree surfaces; and lattice geometry), nuclear data representation, estimation methods (track length, collision, analog (absorption), surface crossing, point), and variance reduction (Russian roulette, splitting, exponential transform, importance sampling, correlated sampling). The major features of the Monte Carlo method are as follows: 1) neutron source distributions and systems of complex geometry can be simulated accurately, 2) physical quantities such as the neutron flux in a region, on a surface, or at a point can be evaluated, and 3) calculation requires less time. (Nogami, K.)
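Of the variance-reduction techniques listed, Russian roulette is the easiest to sketch: low-weight particles are killed probabilistically, and survivors are re-weighted so that the expected total weight — and hence the tally mean — is preserved. A minimal illustration (the weights and cutoff values are invented, not tied to any particular code):

```python
import numpy as np

rng = np.random.default_rng(7)

def russian_roulette(weights, w_cutoff=0.5, w_survive=0.5):
    """Kill low-weight particles; survivors get weight w_survive so
    the expected total weight is preserved (unbiased)."""
    out = []
    for w in weights:
        if w >= w_cutoff:
            out.append(w)                    # heavy particles untouched
        elif rng.random() < w / w_survive:   # survive with prob w/w_survive
            out.append(w_survive)
    return np.array(out)

weights = np.full(100_000, 0.1)              # many low-weight particles
survivors = russian_roulette(weights)

print(f"total weight: {weights.sum():.0f} -> {survivors.sum():.0f}, "
      f"population: {len(weights)} -> {len(survivors)}")
```

The particle population shrinks (saving CPU time on unimportant histories) while the total weight is conserved on average, which is exactly the trade-off roulette and splitting manage.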
[Calculating Pearson residual in logistic regressions: a comparison between SPSS and SAS].
Xu, Hao; Zhang, Tao; Li, Xiao-song; Liu, Yuan-yuan
2015-01-01
To compare the results of Pearson residual calculations in logistic regression models using SPSS and SAS. We reviewed Pearson residual calculation methods and used two sets of data to test logistic models constructed in SPSS and SAS. One model contained a small number of covariates compared to the number of observations; the other contained a similar number of covariates as the number of observations. The two software packages produced similar Pearson residual estimates when the models contained a similar number of covariates as the number of observations, but the results differed when the number of observations was much greater than the number of covariates. The two software packages produce different Pearson residuals, especially when the models contain a small number of covariates. Further studies are warranted.
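The quantity being compared across packages has a simple definition: for observation i with outcome y_i and fitted probability p_i, the Pearson residual is (y_i - p_i) / sqrt(p_i (1 - p_i)). A self-contained sketch, fitting a logistic model by Newton-Raphson on invented synthetic data rather than either package's own data sets:

```python
import numpy as np

# Synthetic data: intercept plus one covariate, invented coefficients.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.2])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

# Fit logistic regression by Newton-Raphson (equivalent to IRLS).
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    grad = X.T @ (y - p)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

# Pearson residuals: standardized differences between outcome and fit.
p = 1 / (1 + np.exp(-X @ beta))
pearson = (y - p) / np.sqrt(p * (1 - p))
print("beta:", beta, "Pearson chi-square:", (pearson**2).sum())
```

Package differences of the kind the paper reports typically come not from this formula but from how residuals are aggregated over covariate patterns, which matters most when many observations share the same covariate values.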
Comparison between Two Methods to Calculate the Transition Matrix of Orbit Motion
Directory of Open Access Journals (Sweden)
Ana Paula Marins Chiaradia
2012-01-01
Full Text Available Two methods to evaluate the state transition matrix are implemented and analyzed to verify the computational cost and the accuracy of both methods. This evaluation represents one of the highest computational costs in the artificial satellite orbit determination task. The first method is an approximation of the Keplerian motion, providing an analytical solution which is then evaluated numerically by solving Kepler's equation. The second is a local numerical approximation that includes the effect of J2. The analysis is performed by comparing these two methods with a reference generated by a numerical integrator. For small intervals of time (1 to 10 s), and when more accuracy is needed, it is recommended to use the second method, since the CPU time does not excessively load the computer during the orbit determination procedure. For larger intervals of time, and when more stability in the calculation is expected, the first method is recommended.
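Neither of the paper's two methods is reproduced here; as a generic illustration of what a state transition matrix is, it can be built by central finite differences around a numerically integrated two-body reference (units, orbit, and step sizes are all invented; scipy is assumed):

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2 (assumed)

def two_body(t, s):
    """Unperturbed Keplerian dynamics: state s = [x, y, z, vx, vy, vz]."""
    r = s[:3]
    a = -MU * r / np.linalg.norm(r)**3
    return np.concatenate([s[3:], a])

def propagate(s0, dt):
    sol = solve_ivp(two_body, (0.0, dt), s0, rtol=1e-11, atol=1e-11)
    return sol.y[:, -1]

def transition_matrix_fd(s0, dt, eps=1e-3):
    """State transition matrix Phi(t, t0) = d s(t) / d s(t0) by
    central finite differences (a brute-force numerical stand-in)."""
    phi = np.zeros((6, 6))
    for j in range(6):
        dp = np.zeros(6)
        dp[j] = eps
        phi[:, j] = (propagate(s0 + dp, dt) - propagate(s0 - dp, dt)) / (2*eps)
    return phi

# Circular LEO-like orbit: r = 7000 km, v = sqrt(mu/r).
s0 = np.array([7000.0, 0.0, 0.0, 0.0, np.sqrt(MU/7000.0), 0.0])
phi = transition_matrix_fd(s0, dt=10.0)
print("det(Phi) =", np.linalg.det(phi))  # Hamiltonian flow: det = 1
```

The determinant staying at 1 (Liouville's theorem) is a useful sanity check on any transition-matrix implementation, analytical or numerical; over a short 10 s arc the matrix is also close to the identity plus dt times the dynamics Jacobian.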
Anderson, H Glenn; Frazier, Lisa; Anderson, Stephanie L; Stanton, Robert; Gillette, Chris; Broedel-Zaugg, Kim; Yingling, Kevin
2017-05-01
Objective. To compare learning outcomes achieved from a pharmaceutical calculations course taught in a traditional lecture (lecture model) and a flipped classroom (flipped model). Methods. Students were randomly assigned to the lecture model and the flipped model. Course instructors, content, assessments, and instructional time for both models were equivalent. Overall group performance and pass rates on a standardized assessment (Pcalc OSCE) were compared at six weeks and at six months post-course completion. Results. Student mean exam scores in the flipped model were higher than those in the lecture model at six weeks and six months later. Significantly more students passed the OSCE the first time in the flipped model at six weeks; however, this effect was not maintained at six months. Conclusion. Within a 6 week course of study, use of a flipped classroom improves student pharmacy calculation skill achievement relative to a traditional lecture andragogy. Further study is needed to determine if the effect is maintained over time.
Gordon, Oliver; Goverde, Marcel; Pazdan, James; Staerk, Alexandra; Roesti, David
2015-01-01
In the present work we compared different calculation approaches for their ability to accurately define microbiological control levels based on historical data. To that end, real microbiological data were used for simulation experiments. The results of our study confirmed that assuming a normal distribution is not appropriate for that purpose. In addition, assumption of a Poisson distribution generally underestimated the control level, and the predictive power for future values was highly insufficient. The non-parametric Excel percentile strongly predicted future values in our simulation experiments (although not as good as some of the parametric models). With the limited amount of data used in the simulations, the calculated control levels for the upper percentiles were on average higher and more variable compared to the parametric models. This was due to the fact that the largest observed value was generally defined as the control level. Accordingly, the Excel percentile is less robust towards outliers and requires more data to accurately define control levels as compared to parametric models. The negative binomial as well as the zero-inflated negative binomial distribution, both parametric models, had good predictive power for future values. Nonetheless, on basis of our simulation experiments, we saw no evidence to generally prefer the zero-inflated model over the non-inflated one. Finally, with our data, the gamma distribution on average had at least as good predictive power as the negative binomial distribution and zero-inflated negative binomial distribution for percentiles ≥98%, indicating that it may represent a viable option for calculating microbiological control levels at high percentiles. Presumably, this was based on the fact that the gamma distribution fitted the upper end of the distribution better than other models. Since in general microbiological control levels would be based on the upper percentiles, microbiologists may exclusively rely on the
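The negative binomial approach favored in this comparison can be sketched in a few lines: fit the distribution to historical counts and take an upper percentile as the control level. The counts below are synthetic and the method-of-moments fit is a deliberately simple stand-in for a proper maximum-likelihood fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical historical colony counts from environmental monitoring,
# overdispersed relative to Poisson (variance > mean).
counts = rng.negative_binomial(n=2.0, p=0.25, size=500)

mean, var = counts.mean(), counts.var(ddof=1)

# Method-of-moments negative binomial fit: var = mean + mean^2 / r.
r = mean**2 / (var - mean)
p = r / (r + mean)

# Control level = upper percentile of the fitted distribution,
# compared against the non-parametric empirical percentile.
level_nb = stats.nbinom.ppf(0.99, r, p)
level_emp = np.percentile(counts, 99)
print(f"NB 99th percentile: {level_nb:.0f}, empirical: {level_emp:.0f}")
```

A Poisson fit to the same data would use only the mean and, because var > mean here, would place the 99th percentile too low — the underestimation of control levels the study reports.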
International Nuclear Information System (INIS)
Sekimoto, Hiroshi
1989-01-01
The tritium production density, kerma heat production density, dose and certain integral values of scalar neutron spectra in bare and graphite-reflected lithium-fluoride piles irradiated with D-T neutrons were evaluated from the pulse height distribution of a miniature NE213 neutron spectrometer with UFO data processing code, and compared with the values calculated with MORSE-CV Monte Carlo code. (author). 8 refs.; 1 fig.; 2 tabs
A comparison of different parametrized pocket formulae to calculate fusion barriers
Dutt, Ishwar
2013-04-01
Using twelve different parametrized pocket formulae, fusion barrier heights and positions are calculated and confronted with the available experimental data. All of the parametrized pocket formulae are able to reproduce fusion barrier heights and positions within ±10% and ±20%, respectively. Our detailed study shows that the parametrized pocket formulae based on models such as Bass 1980, Winther 1995, Prox 1988, and Royer 2001 reproduce the barrier heights better than the other formulae available in the literature.
Comparison of CT number calibration techniques for CBCT-based dose calculation
International Nuclear Information System (INIS)
Dunlop, Alex; McQuaid, Dualta; Nill, Simeon; Hansen, Vibeke N.; Oelfke, Uwe; Murray, Julia; Bhide, Shreerang; Harrington, Kevin; Poludniowski, Gavin; Nutting, Christopher; Newbold, Kate
2015-01-01
The aim of this work was to compare and validate various computed tomography (CT) number calibration techniques with respect to cone beam CT (CBCT) dose calculation accuracy. CBCT dose calculation accuracy was assessed for pelvic, lung, and head and neck (H and N) treatment sites for two approaches: (1) physics-based scatter correction methods (CBCTr); (2) density override approaches including assigning water density to the entire CBCT (W), assignment of either water or bone density (WB), and assignment of either water or lung density (WL). Methods for CBCT density assignment within a commercially available treatment planning system (RSauto), where CBCT voxels are binned into six density levels, were assessed and validated. Dose-difference maps and dose-volume statistics were used to compare the CBCT dose distributions with the ground truth of a planning CT acquired the same day as the CBCT. For pelvic cases, all CTN calibration methods resulted in average dose-volume deviations below 1.5%. RSauto produced larger-than-average errors for pelvic treatments of patients with large amounts of adipose tissue. For H and N cases, all CTN calibration methods resulted in average dose-volume differences below 1.0%, with CBCTr (0.5%) and RSauto (0.6%) performing best. For lung cases, the WL and RSauto methods generated dose distributions most similar to the ground truth. The RSauto density override approach is an attractive option for CTN adjustments for a variety of anatomical sites. RSauto methods were validated, resulting in dose calculations consistent with those calculated on diagnostic-quality CT images, for CBCT images of the lung, for patients receiving pelvic RT without excess adipose tissue, and for H and N cases. (orig.)
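The six-level density-override idea can be sketched as simple voxel binning. The HU thresholds and density values below are assumed placeholders for illustration; the commercial system's actual bins are not given in the abstract:

```python
import numpy as np

# Hypothetical HU bin edges and density levels (g/cm^3) -- illustrative only,
# NOT the six bins used by the commercial treatment planning system.
hu_edges = np.array([-900.0, -500.0, -100.0, 100.0, 300.0])   # bin boundaries in HU
densities = np.array([0.001, 0.20, 0.50, 1.00, 1.10, 1.60])   # one density per bin

def override_densities(cbct_hu):
    """Assign every CBCT voxel one of six discrete density levels."""
    return densities[np.digitize(cbct_hu, hu_edges)]

slice_hu = np.array([[-1000.0, -300.0], [20.0, 800.0]])  # a tiny 2x2 "slice"
print(override_densities(slice_hu))  # air, light lung, water, bone-like
```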
Bécares, V.; Pérez Martín, S.; Vázquez Antolín, Miriam; Villamarín, D.; Martín Fuertes, Francisco; González Romero, E.M.; Merino Rodríguez, Iván
2014-01-01
The calculation of the effective delayed neutron fraction, beff , with Monte Carlo codes is a complex task due to the requirement of properly considering the adjoint weighting of delayed neutrons. Nevertheless, several techniques have been proposed to circumvent this difficulty and obtain accurate Monte Carlo results for beff without the need of explicitly determining the adjoint flux. In this paper, we make a review of some of these techniques; namely we have analyzed two variants of what we...
Energy Technology Data Exchange (ETDEWEB)
Slater, CO
2005-09-08
To aid in the determinations of reactivity worths for target materials in a proposed High Flux Isotope Reactor (HFIR) target configuration containing two additional hydraulic tubes, the worths of cadmium rabbits within the current hydraulic tube were calculated using a reference model of the HFIR and the MCNP5 computer code. The worths were compared to measured worths for both static and ejection experiments. After accounting for uncertainties in the calculations and the measurements, excellent agreement between the two was obtained. Computational and measurement limitations indicate that accurate estimation of worth is only possible when the worth exceeds 10 cents. Results indicate that MCNP5 and the reactor model can be used to predict reactivity worths of various samples when the expected perturbations are greater than 10 cents. The level of agreement between calculation and experiment indicates that the accuracy of such predictions would be dependent solely on the quality of the nuclear data for the materials to be irradiated. Transients that are approximated by "piecewise static" computational models should likewise have an accuracy that is dependent solely on the quality of the nuclear data.
Righter, K.; Ghiorso, M.
2009-01-01
Calculation of oxygen fugacity in high pressure and temperature experiments in metal-silicate systems is usually approximated by the ratio of Fe in the metal and FeO in the silicate melt: ΔIW = 2·log(X_Fe/X_FeO), where IW is the iron-wustite reference oxygen buffer. Although this is a quick and easy calculation to make, it has been applied to a huge variety of metallic (Fe-Ni-S-C-O-Si) and silicate liquid (SiO2-Al2O3-TiO2-FeO-MgO-CaO-Na2O-K2O) systems. This approach has surely led to values that have little meaning, yet are applied with great confidence, for example, to a terrestrial mantle at "IW-2". Although fO2 can be circumvented in some cases by consideration of Fe-M distribution coefficients, these do not eliminate the effects of alloy or silicate liquid compositional variation, or the specific chemical effects of S in the silicate liquid, for example. In order to address the issue of what the actual value of fO2 is in any given experiment, we have calculated fO2 from the equilibrium 2Fe (metal) + SiO2 (liq) + O2 = Fe2SiO4 (liq).
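The mole-fraction approximation criticized in the abstract is a one-liner. The sketch below follows the ratio exactly as written there (ideal activities assumed); the example compositions are invented for illustration:

```python
import math

def delta_iw(x_fe, x_feo):
    """Oxygen fugacity relative to the iron-wustite buffer, using the
    mole-fraction ratio approximation from the abstract (ideal activities)."""
    return 2.0 * math.log10(x_fe / x_feo)

# Illustrative compositions: 80 mol% Fe in the metal, 8 mol% FeO in the melt.
print(round(delta_iw(0.80, 0.08), 6))  # → 2.0
```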
Comparison of Mixing Calculations for Reacting and Non-Reacting Flows in a Cylindrical Duct
Oechsle, V. L.; Mongia, H. C.; Holdeman, J. D.
1994-01-01
A production 3-D elliptic flow code has been used to calculate non-reacting and reacting flow fields in an experimental mixing section relevant to a rich burn/quick mix/lean burn (RQL) combustion system. A number of test cases have been run to assess the effects of the variation in the number of orifices, mass flow ratio, and rich-zone equivalence ratio on the flow field and mixing rates. The calculated normalized temperature profiles for the non-reacting flow field agree qualitatively well with the normalized conserved variable isopleths for the reacting flow field indicating that non-reacting mixing experiments are appropriate for screening and ranking potential rapid mixing concepts. For a given set of jet momentum-flux ratio, mass flow ratio, and density ratio (J, MR, and DR), the reacting flow calculations show a reduced level of mixing compared to the non-reacting cases. In addition, the rich-zone equivalence ratio has noticeable effect on the mixing flow characteristics for reacting flows.
Coakley, T. J.; Hsieh, T.
1985-01-01
Numerical simulation of steady and unsteady transonic diffuser flows using two different computer codes are discussed and compared with experimental data. The codes solve the Reynolds-averaged, compressible, Navier-Stokes equations using various turbulence models. One of the codes has been applied extensively to diffuser flows and uses the hybrid method of MacCormack. This code is relatively inefficient numerically. The second code, which was developed more recently, is fully implicit and is relatively efficient numerically. Simulations of steady flows using the implicit code are shown to be in good agreement with simulations using the hybrid code. Both simulations are in good agreement with experimental results. Simulations of unsteady flows using the two codes are in good qualitative agreement with each other, although the quantitative agreement is not as good as in the steady flow cases. The implicit code is shown to be eight times faster than the hybrid code for unsteady flow calculations and up to 32 times faster for steady flow calculations. Results of calculations using alternative turbulence models are also discussed.
Comparison of lysimeter based and calculated ASCE reference evapotranspiration in a subhumid climate
Nolz, Reinhard; Cepuder, Peter; Eitzinger, Josef
2016-04-01
The standardized form of the well-known FAO Penman-Monteith equation, published by the Environmental and Water Resources Institute of the American Society of Civil Engineers (ASCE-EWRI), is recommended as a standard procedure for calculating reference evapotranspiration (ETref) and subsequently plant water requirements. Applied and validated under different climatic conditions, it generally achieves good results compared to other methods. However, several studies have documented deviations between measured and calculated reference evapotranspiration depending on environmental and weather conditions. Therefore, it seems generally advisable to evaluate the model under local environmental conditions. In this study, reference evapotranspiration was determined at a subhumid site in northeastern Austria from 2005 to 2010 using a large weighing lysimeter (ETlys). The measured data were compared with ETref calculations. Daily values differed slightly over the year: ETref was generally overestimated at small values and rather underestimated when ET was large, which is also supported by other studies. In our case, advection of sensible heat proved to have an impact, but it could not explain the differences exclusively. Obviously, there were also other influences, such as seasonally varying surface resistance or albedo. Generally, the ASCE-EWRI equation for daily time steps performed best under average weather conditions. The outcomes should help to correctly interpret ETref data in the region and in similar environments and improve knowledge of the dynamics of influencing factors causing deviations.
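The ASCE-EWRI standardized daily equation for a short reference crop can be sketched as below; the weather inputs in the example are assumed values, not data from the lysimeter study:

```python
def et_ref_daily(rn, g, t, u2, es, ea, delta, gamma, cn=900.0, cd=0.34):
    """ASCE standardized reference ET (mm/day), short crop, daily time step.
    rn, g: net radiation and soil heat flux (MJ m-2 d-1); t: mean air temp (degC);
    u2: 2 m wind speed (m/s); es, ea: saturation/actual vapour pressure (kPa);
    delta, gamma: slope of vapour pressure curve and psychrometric const (kPa/degC)."""
    num = 0.408 * delta * (rn - g) + gamma * cn * u2 * (es - ea) / (t + 273.0)
    return num / (delta + gamma * (1.0 + cd * u2))

# Illustrative mid-summer day (all inputs assumed):
print(round(et_ref_daily(rn=15.0, g=0.0, t=22.0, u2=2.0,
                         es=2.6, ea=1.4, delta=0.16, gamma=0.066), 2))  # → 5.4
```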
A comparison of shielding calculation methods for multi-slice computed tomography (CT) systems
International Nuclear Information System (INIS)
Cole, J A; Platten, D J
2008-01-01
Currently in the UK, shielding calculations for computed tomography (CT) systems are based on the BIR-IPEM (British Institute of Radiology and Institute of Physics and Engineering in Medicine) working group publication from 2000. Concerns have been raised internationally regarding the accuracy of the dose plots on which this method depends and the effect that new scanner technologies may have. Additionally, more recent shielding methods have been proposed by the NCRP (National Council on Radiation Protection) in the USA. Thermoluminescent detectors (TLDs) were placed at different positions in three CT scanner rooms for several weeks before being processed. Patient workload and dose data (DLP: dose-length product; mAs: tube current-time product) were collected for this period. Individual dose data were available for more than 95% of patients scanned and the remainder were estimated. The patient workload data were used to calculate the expected scatter radiation at each TLD location by both the NCRP and BIR-IPEM methods, and the results were compared to the measured scattered radiation. Calculated scattered air kerma, and hence the minimum required lead shielding, was frequently overestimated: on average the calculated values were almost five times the measured scattered air kerma.
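Workload-based CT scatter estimates of the kind compared here take an inverse-square form, scatter kerma proportional to total DLP over distance squared. The scatter coefficient and workload figures below are assumed placeholders, not the published BIR-IPEM or NCRP factors:

```python
def scattered_air_kerma(dlp_total_mgy_cm, distance_m, kappa=3.0e-4):
    """Inverse-square scatter estimate of the form K = kappa * DLP / d^2.
    kappa (scatter fraction per cm of DLP, normalized to 1 m) is an ASSUMED
    placeholder value, not a published body/head factor. Returns mGy."""
    return kappa * dlp_total_mgy_cm / distance_m**2

# Illustrative workload: 400 scans averaging 600 mGy.cm, point of interest at 2.5 m.
print(round(scattered_air_kerma(400 * 600.0, 2.5), 2))  # → 11.52
```

Comparing such calculated values against TLD measurements at the same point is exactly the validation exercise the abstract describes.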
Ananthakrishna, G.; K, Srikanth
2018-03-01
It is well known that plastic deformation is a highly nonlinear dissipative irreversible phenomenon of considerable complexity. As a consequence, little progress has been made in modeling some well-known size-dependent properties of plastic deformation, for instance, calculating hardness as a function of indentation depth independently. Here, we devise a method of calculating hardness by calculating the residual indentation depth and then calculate the hardness as the ratio of the load to the residual imprint area. Recognizing the fact that dislocations are the basic defects controlling the plastic component of the indentation depth, we set up a system of coupled nonlinear time evolution equations for the mobile, forest, and geometrically necessary dislocation densities. Within our approach, we consider the geometrically necessary dislocations to be immobile since they contribute to additional hardness. The model includes dislocation multiplication, storage, and recovery mechanisms. The growth of the geometrically necessary dislocation density is controlled by the number of loops that can be activated under the contact area and the mean strain gradient. The equations are then coupled to the load rate equation. Our approach has the ability to adopt experimental parameters such as the indentation rates, the geometrical parameters defining the Berkovich indenter, including the nominal tip radius. The residual indentation depth is obtained by integrating the Orowan expression for the plastic strain rate, which is then used to calculate the hardness. Consistent with the experimental observations, the increasing hardness with decreasing indentation depth in our model arises from limited dislocation sources at small indentation depths and therefore avoids divergence in the limit of small depths reported in the Nix-Gao model. We demonstrate that for a range of parameter values that physically represent different materials, the model predicts the three characteristic
Energy Technology Data Exchange (ETDEWEB)
Harding, R., E-mail: ruth.harding2@wales.nhs.uk [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF, United Kingdom and Abertawe Bro Morgannwg University Health Board, Medical Physics and Clinical Engineering, Swansea SA2 8QA (United Kingdom); Trnková, P.; Lomax, A. J. [Paul Scherrer Institute, Centre for Proton Therapy, Villigen 5232 (Switzerland); Weston, S. J.; Lilley, J.; Thompson, C. M.; Cosgrove, V. P. [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF (United Kingdom); Short, S. C. [Leeds Institute of Molecular Medicine, Oncology and Clinical Research, Leeds LS9 7TF, United Kingdom and St James’s Institute of Oncology, Oncology, Leeds LS9 7TF (United Kingdom); Loughrey, C. [St James’s Institute of Oncology, Oncology, Leeds LS9 7TF (United Kingdom); Thwaites, D. I. [St James’s Institute of Oncology, Medical Physics and Engineering, Leeds LS9 7TF, United Kingdom and Institute of Medical Physics, School of Physics, University of Sydney, Sydney NSW 2006 (Australia)
2014-11-01
Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.
International Nuclear Information System (INIS)
Ollila, K.
1995-11-01
The solubility behaviour of unirradiated UO2 pellets was studied under oxic (air-saturated) and anoxic (N2) conditions in deionized water, in sodium bicarbonate solutions with varying bicarbonate content (60-600 ppm), in Allard groundwater simulating granitic fresh groundwater conditions, and in bentonite water simulating the effects of bentonite on granitic fresh groundwater (25 °C). The release of uranium was measured during static batch dissolution experiments of long duration (2-6 years). A comparison was made with the theoretical solubility data calculated with the geochemical code EQ3/6 in order to evaluate solubility (steady state) limiting factors. (orig.) (26 refs., 32 figs., 13 tabs.)
DEFF Research Database (Denmark)
Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk
2007-01-01
with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes...... BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm⁻³) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm⁻³), normal lung (ρ = 0.20 g cm⁻³) and cortical bone tissue (ρ
A comparison of measurements and calculations for the Stripa tracer experiments
International Nuclear Information System (INIS)
Hodgkinson, D.P.; Copper, N.S.
1992-03-01
This paper presents a comparison of measurements and predictions for migration of tracers from boreholes to the validation drift and to other boreholes in the Site Characterisation and Validation (SCV) block. The comparison was carried out on behalf of the Stripa task force on fracture flow modelling. The paper summarises the radar/saline tracer experiments, the tracer migration experiment observations and reviews the fracture flow and tracer transport modelling approaches and predictions made by AEA Technology, Fracflow Consultants, Golder Associates and Lawrence Berkeley Laboratory. The predictions are compared with the observed breakthrough curves on the basis of the validation process and criteria defined by the task force. The results of all four modelling groups met the validation criteria, with the predictions of the tracer breakthrough concentrations and times being within an order of magnitude of the observations. Also the AEA and Golder approaches allow the spatial distribution of tracer breakthrough into the validation drift to be predicted and these predictions also showed reasonable accuracy. The successful completion of this project demonstrates the feasibility of discrete fracture flow and tracer transport modelling. (36 refs.) (au)
Directory of Open Access Journals (Sweden)
Labarile Antonella
2016-01-01
In nuclear safety research, the quality of simulation-code results largely determines reactor design and safe operation, and the description of neutron transport in the reactor core is of particular importance. Moreover, despite the long effort that has been made, uncertainties remain in simulation results due to the neutronic data and input specifications, and a huge effort is needed to eliminate them. A realistic estimation of these uncertainties is required to establish the reliability of the results. This explains the increasing demand in recent years for calculations in the nuclear field with best-estimate codes that provide confidence bounds on simulation results. All this has led to the NEA Benchmark for Uncertainty Analysis in Modelling (UAM) for Design, Operation and Safety Analysis of LWRs. The UAM benchmark couples multi-physics and multi-scale analysis using as a basis complete sets of input specifications of boiling water reactors (BWR) and pressurized water reactors (PWR). In this study, the results of transport calculations carried out with the SCALE-6.2 package (TRITON/NEWT and TRITON/KENO modules) as well as the Monte Carlo SERPENT code are presented. Additionally, uncertainty calculations have been performed for a PWR 15 × 15 and a BWR 7 × 7 fuel element, in two different configurations (with and without control rod) and two different states, Hot Full Power (HFP) and Hot Zero Power (HZP), using the TSUNAMI module, which uses Generalized Perturbation Theory (GPT), and SAMPLER, which uses stochastic sampling of cross-section perturbations. The results obtained are validated and compared with reference results and similar studies presented in exercise I-2 (Lattice Physics) of the UAM benchmark.
International Nuclear Information System (INIS)
Orient, O.J.; Chutjian, A.
1986-01-01
Electron-attachment cross sections are calculated for the process e⁻ + SF6 → SF6⁻ in the energy range 1-200 meV. An electron scattering approximation is used in which diatomic-like potential energy curves near the equilibrium SF6 ground state are constructed from recent spectroscopic data. Excellent agreement is found over the entire energy range with experimental attachment cross sections at a temperature of 300 K for s-wave (l = 0) scattering. The same calculation, with appropriate adjustment of the thermal populations, is used to calculate attachment rate constants k(ε̄) in the range 50-600 K for both s- and p-wave scattering. Comparisons are made with four independent sets of measured rate constants in the range 200-600 K, after adding an experimental estimate of the attachment rate for the process e⁻ + SF6 → SF5⁻ + F. Good agreement is found with three sets of data, and poorer agreement with the fourth. The calculation shows that the true s-wave behavior of the rate constant, in which k(ε̄) is independent of ε̄, will be realized only at temperatures T less than 115 K, and that the observed constancy in the range 300-600 K is accidental. Theory predicts approximately a 9% increase in k(ε̄) as T is lowered from 300 to 200 K
Comparison of three methods of calculating strain in the mouse ulna in exogenous loading studies.
Norman, Stephanie C; Wagner, David W; Beaupre, Gary S; Castillo, Alesha B
2015-01-02
Axial compression of mouse limbs is commonly used to induce bone formation in a controlled, non-invasive manner. Determination of peak strains caused by loading is central to interpreting results. Load-strain calibration is typically performed using uniaxial strain gauges attached to the diaphyseal, periosteal surface of a small number of sacrificed animals. Strain is measured as the limb is loaded to a range of physiological loads known to be anabolic to bone. The load-strain relationship determined by this subgroup is then extrapolated to a larger group of experimental mice. This method of strain calculation requires the challenging process of strain gauging very small bones which is subject to variability in placement of the strain gauge. We previously developed a method to estimate animal-specific periosteal strain during axial ulnar loading using an image-based computational approach that does not require strain gauges. The purpose of this study was to compare the relationship between load-induced bone formation rates and periosteal strain at ulnar midshaft using three different methods to estimate strain: (A) Nominal strain values based solely on load-strain calibration; (B) Strains calculated from load-strain calibration, but scaled for differences in mid-shaft cross-sectional geometry among animals; and (C) An alternative image-based computational method for calculating strains based on beam theory and animal-specific bone geometry. Our results show that the alternative method (C) provides comparable correlation between strain and bone formation rates in the mouse ulna relative to the strain gauge-dependent methods (A and B), while avoiding the need to use strain gauges. Published by Elsevier Ltd.
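The beam-theory estimate in option (C) superposes axial and bending strain components at the periosteal surface. A minimal sketch is given below; all geometric and material values are assumed for illustration and are not taken from the paper:

```python
def midshaft_strain(force, offset, area, i_min, c, e_mod):
    """Periosteal strain from superposed axial and bending components (beam theory).
    force: axial load (N); offset: load eccentricity from the centroid (mm);
    area: cross-sectional area (mm^2); i_min: second moment of area (mm^4);
    c: centroid-to-surface distance (mm); e_mod: elastic modulus (MPa).
    Returns microstrain."""
    axial = force / (area * e_mod)                    # F / (E*A)
    bending = force * offset * c / (i_min * e_mod)    # M*c / (E*I), M = F*offset
    return (axial + bending) * 1e6

# Illustrative mouse-ulna-scale values (assumed, not from the study):
print(round(midshaft_strain(force=2.0, offset=1.0, area=0.4,
                            i_min=0.01, c=0.5, e_mod=20000.0)))  # → 5250
```

Because `area`, `i_min`, and `c` come from each animal's imaged cross-section, the same load maps to a different strain per animal, which is the point of the image-based approach.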
Franco-Villoria, Maria; Wright, Charlotte M; McColl, John H; Sherriff, Andrea; Pearce, Mark S
2016-01-07
To explore the usefulness of Bioelectrical Impedance Analysis (BIA) for general use by identifying best-evidenced formulae to calculate lean and fat mass, comparing these to historical gold standard data and comparing these results with machine-generated output. In addition, we explored how best to adjust lean and fat estimates for height and how these overlapped with body mass index (BMI). Cross-sectional observational study within a population-representative cohort study. Urban community, North East England. Sample of 506 mothers of children aged 7-8 years, mean age 36.3 years. Participants were measured at a home visit using a portable height measure and a leg-to-leg BIA machine (Tanita TBF-300MA). Height, weight, bioelectrical impedance (BIA). Lean and fat mass calculated using best-evidenced published formulae as well as machine-calculated lean and fat mass data. Estimates of lean mass were similar to historical results using gold standard methods. When compared with the machine-generated values, there were wide limits of agreement for fat mass and a large relative bias for lean mass that varied with size. Lean and fat residuals adjusted for height differed little from indices of lean (or fat)/height². Of 112 women with BMI >30 kg/m², 100 (91%) also had high fat, but of the 16 with low BMI (…). Bioelectrical impedance can supply a robust and useful field measure of body composition, so long as the machine-generated output is not used. Published by the BMJ Publishing Group Limited.
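Published BIA prediction equations typically take the form LM = a·height²/Z + b·weight + c. The sketch below shows this generic form only; the coefficients are invented placeholders, NOT the best-evidenced formulae the study evaluated:

```python
def lean_mass_kg(height_cm, impedance_ohm, weight_kg, a=0.5, b=0.2, c=5.0):
    """Generic BIA prediction equation of the common form
    LM = a * height^2 / Z + b * weight + c.
    a, b, c are ASSUMED placeholder coefficients for illustration."""
    return a * height_cm**2 / impedance_ohm + b * weight_kg + c

# Illustrative measurement (values assumed):
lm = lean_mass_kg(height_cm=165.0, impedance_ohm=550.0, weight_kg=70.0)
fat = 70.0 - lm  # fat mass as weight minus lean
print(round(lm, 1), round(fat, 1))
```

The height²/impedance term is the "impedance index" that carries most of the predictive power in equations of this family.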
Directory of Open Access Journals (Sweden)
Dear Keith BG
2006-09-01
Abstract Background To explain the possible effects of exposure to weather conditions on population health outcomes, weather data need to be calculated at a level in space and time that is appropriate for the health data. There are various ways of estimating exposure values from raw data collected at weather stations, but the rationale for using one technique rather than another, the significance of the difference in the values obtained, and the effect these have on a research question are factors often not explicitly considered. In this study we compare different techniques for allocating weather data observations to small geographical areas and different options for weighting averages of these observations when calculating estimates of daily precipitation and temperature for Australian Postal Areas. Options that weight observations based on distance from population centroids and population size are more computationally intensive but give estimates that conceptually are more closely related to the experience of the population. Results Options based on values derived from sites internal to postal areas, or from nearest-neighbour sites - that is, using proximity polygons around weather stations intersected with postal areas - tended to include fewer stations' observations in their estimates, and missing values were common. Options based on observations from stations within a 50 kilometre radius of centroids, with weighting of data by distance from centroids, gave more complete estimates. Using the geographic centroid of the postal area gave estimates that differed slightly from the population-weighted centroids and the population-weighted average of sub-unit estimates. Conclusion To calculate daily weather exposure values for analysis of health outcome data for small areas, the use of data from weather stations internal to the area only, or from neighbouring weather stations (allocated by the use of proximity polygons), is too limited. The most
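The radius-plus-distance-weighting option can be sketched as an inverse-distance-weighted average around a centroid. The station coordinates, values, and the distance-power choice below are illustrative assumptions, not the study's data or its exact weighting scheme:

```python
import math

def idw_estimate(centroid, stations, radius_km=50.0, power=2.0):
    """Distance-weighted average of station observations within a radius of a
    (e.g. population-weighted) centroid. stations: list of (x_km, y_km, value)."""
    num = den = 0.0
    for x, y, v in stations:
        d = math.hypot(x - centroid[0], y - centroid[1])
        if d > radius_km:
            continue                       # station outside the search radius
        w = 1.0 / max(d, 1e-6) ** power    # nearer stations weigh more
        num += w * v
        den += w
    return num / den if den else float("nan")  # missing if no station in range

# Three hypothetical stations (km offsets, deg C); the 80 km one is excluded.
obs = [(5.0, 0.0, 20.0), (0.0, 30.0, 24.0), (80.0, 0.0, 30.0)]
print(round(idw_estimate((0.0, 0.0), obs), 2))
```

The estimate sits close to the nearest station's value, which is the intended behaviour of distance weighting.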
The comparison of calculated and experimental microdosimetric distributions for carbon ions
Czech Academy of Sciences Publication Activity Database
Palajová, Zdenka; Spurný, František; Davídková, Marie
2007-01-01
Roč. 122, 1-4 (2007), s. 491-493 ISSN 0144-8420. [Symposium on Microdosimetry /14./. Venezia, 13.11.2005-18.11.2005] R&D Projects: GA ČR GA202/04/0795; GA ČR(CZ) GD202/05/H031 Institutional research plan: CEZ:AV0Z10480505 Keywords : microdosimetry distributions * carbon ions * theoretical calculation Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 0.528, year: 2007
Calculation of Reaction Free Energies in Solution: A Comparison of Current Approaches.
Besora, Maria; Vidossich, Pietro; Lledós, Agustí; Ujaque, Gregori; Maseras, Feliu
2018-02-08
The result of the application of different approaches based on the ideal gas/rigid rotor/harmonic oscillator (IGRRHO) model, commonly used in popular software packages, for the calculation of free energies in solution is compared with that of ab initio molecular dynamics for a process involving ligand exchange in palladium complexes. The IGRRHO-based approaches considered differ in most cases in the extent to which the rotational and translational contributions are included in the correction. Our study supports the use of the free energy values directly obtained from dispersion-corrected DFT functionals, without any correction or with minor corrections at most.
Comparison of Two Methods for Speeding Up Flash Calculations in Compositional Simulations
DEFF Research Database (Denmark)
Belkadi, Abdelkrim; Yan, Wei; Michelsen, Michael Locht
2011-01-01
Flash calculation is the most time-consuming part of compositional reservoir simulations and several approaches have been proposed to speed it up. Two recent approaches proposed in the literature are the shadow region method and the Compositional Space Adaptive Tabulation (CSAT) method. The shadow...... and the tolerance set for accepting the feed composition are the key parameters in this method since they will influence the simulation speed and the accuracy of simulation results. Inspired by CSAT, we proposed a Tieline Distance Based Approximation (TDBA) method to get approximate flash results in the two-phase...
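The expensive inner computation that tabulation and shadow-region methods avoid repeating is the flash itself. A minimal sketch of the core Rachford-Rice solve, with constant assumed K-values instead of EoS-derived ones, is:

```python
def rachford_rice(z, k, tol=1e-10):
    """Solve the Rachford-Rice equation for the vapour fraction V, given overall
    mole fractions z and (here, constant) equilibrium K-values.
    Bisection is used for robustness; g(V) is monotonically decreasing."""
    def g(v):
        return sum(zi * (ki - 1.0) / (1.0 + v * (ki - 1.0)) for zi, ki in zip(z, k))
    if g(0.0) <= 0.0:
        return 0.0   # all liquid
    if g(1.0) >= 0.0:
        return 1.0   # all vapour
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Ternary feed with assumed K-values (illustrative, not from an EoS):
v = rachford_rice([0.5, 0.3, 0.2], [2.5, 1.1, 0.3])
print(round(v, 4))
```

In a full flash, the K-values would be updated from fugacity coefficients and this solve repeated until convergence, which is why caching or tie-line approximations pay off.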
Voit, Florian; Schäfer, Jan; Kienle, Alwin
2009-09-01
We present a methodology to compare results of classical radiative transfer theory against exact solutions of Maxwell theory for a high number of spheres. We calculated light propagation in a cubic scattering region (20 × 20 × 20 µm³) consisting of different concentrations of polystyrene spheres in water (diameter 2 µm) by an analytical solution of Maxwell theory and by a numerical solution of radiative transfer theory. The relative deviation of differential as well as total scattering cross sections obtained by both approaches was evaluated for each sphere concentration. For the considered case, we found that deviations due to radiative transfer theory remain small, even for concentrations up to ca. 20 vol.%.
International Nuclear Information System (INIS)
Rudstam, G.
1979-01-01
The fission product library FPLIB has been used for a calculation of the decay heat effect in nuclear fuel. The results are compared with integral determinations and with results obtained using the ENDF/B-IV data base. In the case of the beta part, and also for the total decay heat, the FPLIB data seem to be superior to the ENDF/B-IV data. The experimental integral data are in many cases reproduced within the combined limits of error of the methods. (author)
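Library-based decay heat calculations of this kind use the summation method: total power is the sum over fission products of activity times energy release per decay. The sketch below shows the form only; the two nuclides and their data are invented placeholders, not FPLIB or ENDF/B-IV entries:

```python
import math

def decay_heat(t_s, inventory):
    """Summation-method decay heat at time t_s (seconds after shutdown).
    inventory: list of (N0_atoms, half_life_s, E_per_decay_MeV).
    Power = sum over nuclides of N0 * lambda * exp(-lambda*t) * E. Returns watts."""
    mev_to_j = 1.602176634e-13
    p = 0.0
    for n0, t_half, e_mev in inventory:
        lam = math.log(2.0) / t_half               # decay constant (1/s)
        p += n0 * lam * math.exp(-lam * t_s) * e_mev * mev_to_j
    return p

# Two hypothetical fission products (illustrative numbers, not library data):
inv = [(1e20, 3600.0, 1.5), (5e19, 86400.0, 0.8)]
print(decay_heat(0.0, inv), decay_heat(3600.0, inv))
```

A real summation calculation also propagates parent-daughter buildup with the Bateman equations; the independent-decay form above is the simplest case.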
Benchmarking Cloud Storage Systems
Wang, Xing
2014-01-01
With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...
Benchmarking Danish Industries
DEFF Research Database (Denmark)
Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette
2003-01-01
This report is based on the survey "Industrial Companies in Denmark - Today and Tomorrow", section IV: Supply Chain Management - Practices and Performance, question number 4.9 on performance assessment. To our knowledge, this survey is unique, as we have not been able to find results from any ...... compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless ......
Issues in benchmarking human reliability analysis methods: A literature review
International Nuclear Information System (INIS)
Boring, Ronald L.; Hendrickson, Stacey M.L.; Forester, John A.; Tran, Tuan Q.; Lois, Erasmia
2010-01-01
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Issues in benchmarking human reliability analysis methods : a literature review.
Energy Technology Data Exchange (ETDEWEB)
Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)
2008-04-01
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Evaluation of an international benchmarking initiative in nine eye hospitals.
de Korne, Dirk F; Sol, Kees J C A; van Wijngaarden, Jeroen D H; van Vliet, Ellen J; Custers, Thomas; Cubbon, Mark; Spileers, Werner; Ygge, Jan; Ang, Chong-Lye; Klazinga, Niek S
2010-01-01
Benchmarking has become very popular among managers as a way to improve quality in the private and public sectors, but little is known about its applicability in international hospital settings. The purpose of this study was to evaluate the applicability of an international benchmarking initiative in eye hospitals. To assess the applicability, an evaluation frame was constructed on the basis of a systematic literature review. The frame was applied longitudinally to a case study of nine eye hospitals that used a set of performance indicators for benchmarking. Document analysis, nine questionnaires, and 26 semistructured interviews with stakeholders in each hospital were used for qualitative analysis. The evaluation frame consisted of four areas with key conditions for benchmarking: purposes of benchmarking, performance indicators, participating organizations, and performance management systems. This study showed that the international benchmarking between eye hospitals scarcely met these conditions. The indicators used were not incorporated in a performance management system in any of the hospitals. Despite the apparent homogeneity of the participants and the absence of competition, differences in ownership, governance structure, reimbursement, and market orientation made comparisons difficult. Benchmarking, however, stimulated learning and exchange of knowledge. It encouraged interaction and thereby learning at the tactical and operational levels, which is also an incentive to attract and motivate staff. Although international hospital benchmarking seems to be a rational process of sharing performance data, this case study showed that it is highly dependent on social processes and a learning environment. It can be useful for diagnostics, helping local hospitals to catalyze performance improvements.
Doliwa, B.; Arévalo, E.; Weiland, T.
2007-10-01
The study of beam dynamics and the localization of potential sources of instabilities are important tasks in the design of modern, high-intensity particle accelerators. In the case of synchrotrons and storage rings, coupling impedance data are needed to characterize the parasitic interaction of critical components with the beam. In this article we demonstrate the application of numerical field simulations to the computation of transverse kicker coupling impedances. Based on the 3D simulation results, a parametrized model is developed to incorporate the impedance of an arbitrary pulse-forming network attached to the kicker. Detailed comparisons of numerical results with twin-wire and direct measurements are discussed at the example of the Spallation Neutron Source extraction kicker.
Directory of Open Access Journals (Sweden)
B. Doliwa
2007-10-01
Full Text Available The study of beam dynamics and the localization of potential sources of instabilities are important tasks in the design of modern, high-intensity particle accelerators. In the case of synchrotrons and storage rings, coupling impedance data are needed to characterize the parasitic interaction of critical components with the beam. In this article we demonstrate the application of numerical field simulations to the computation of transverse kicker coupling impedances. Based on the 3D simulation results, a parametrized model is developed to incorporate the impedance of an arbitrary pulse-forming network attached to the kicker. Detailed comparisons of numerical results with twin-wire and direct measurements are discussed at the example of the Spallation Neutron Source extraction kicker.
Comparison of calculated energy flux of internal tides with microstructure measurements
Directory of Open Access Journals (Sweden)
Saeed Falahat
2014-10-01
Full Text Available Vertical mixing caused by breaking of internal tides plays a major role in maintaining the deep-ocean stratification. This study compares observations of dissipation from microstructure measurements to calculations of the vertical energy flux from barotropic to internal tides, taking into account the temporal variation due to the spring-neap tidal cycle. The dissipation data originate from two surveys in the Brazil Basin Tracer Release Experiment (BBTRE), and one over the LArval Dispersal along the Deep East Pacific Rise (LADDER3), supplemented with a few stations above the North-Atlantic Ridge (GRAVILUCK) and in the western Pacific (IZU). A good correlation is found between logarithmic values of energy flux and local dissipation in BBTRE, suggesting that the theory is able to predict energy fluxes. For LADDER3, the local dissipation is much smaller than the calculated energy flux, which is very likely due to the different topographic features of BBTRE and LADDER3. The East Pacific Rise consists of a few isolated seamounts, so that most of the internal wave energy can radiate away from the generation site, whereas the Brazil Basin is characterised by extended rough bathymetry, leading to a more local dissipation. The results from all four field surveys support the general conclusion that the fraction of the internal-tide energy flux that is dissipated locally is very different in different regions.
Comparison of different approaches to the numerical calculation of the LMJ focal
Directory of Open Access Journals (Sweden)
Bourgeade A.
2013-11-01
Full Text Available The beam smoothing in the focal plane of high power lasers is of particular importance to laser-plasma interaction studies in order to minimize plasma parametric and hydrodynamic instabilities on the target. Here we investigate the focal spot structure in different geometrical configurations where standard paraxial hypotheses are no longer verified. We present numerical studies in the cases of single flat top square beam, LMJ quadruplet and complete ring of quads with large azimuth angle. Different calculations are made with Fresnel diffraction propagation model in the paraxial approximation and full vector Maxwell's equations. The first model is based on Fourier transform from near to far field method. The second model uses first spherical wave decomposition in plane waves with Fourier transform and propagates them to the focal spot. These two different approaches are compared with Miró [1] modeling results using paraxial or Feit and Fleck options. The methods presented here are generic for focal spot calculations. They can be used for other complex geometric configurations and various smoothing techniques. The results will be used as boundary conditions in plasma interaction computations.
Comparison of different approaches to the numerical calculation of the LMJ focal
Bourgeade, A.; Coïc, H.; Goossens, J.-P.
2013-11-01
The beam smoothing in the focal plane of high power lasers is of particular importance to laser-plasma interaction studies in order to minimize plasma parametric and hydrodynamic instabilities on the target. Here we investigate the focal spot structure in different geometrical configurations where standard paraxial hypotheses are no longer verified. We present numerical studies in the cases of single flat top square beam, LMJ quadruplet and complete ring of quads with large azimuth angle. Different calculations are made with Fresnel diffraction propagation model in the paraxial approximation and full vector Maxwell's equations. The first model is based on Fourier transform from near to far field method. The second model uses first spherical wave decomposition in plane waves with Fourier transform and propagates them to the focal spot. These two different approaches are compared with Miró [1] modeling results using paraxial or Feit and Fleck options. The methods presented here are generic for focal spot calculations. They can be used for other complex geometric configurations and various smoothing techniques. The results will be used as boundary conditions in plasma interaction computations.
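The near-to-far-field Fourier-transform method mentioned in both records can be sketched in the Fraunhofer limit, where the focal-spot field is proportional to the 2D Fourier transform of the near field. Grid and beam parameters below are illustrative only, not LMJ values:

```python
import numpy as np

# Far-field (focal-plane) intensity of a flat-top square beam via the
# Fourier-transform near-to-far-field method, in the Fraunhofer limit.
n = 512
dx = 1e-3                        # near-field grid step [m]
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
half = 0.1                       # half-width of the flat-top beam [m]
near = ((np.abs(X) <= half) & (np.abs(Y) <= half)).astype(float)

# fftshift/ifftshift keep the DC (on-axis) component at the grid center.
far = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(near)))
intensity = np.abs(far) ** 2
intensity /= intensity.max()

# For an all-positive aperture the on-axis point is the sinc^2 peak:
peak = np.unravel_index(np.argmax(intensity), intensity.shape)
print(int(peak[0]), int(peak[1]))  # 256 256
```

The non-paraxial plane-wave decomposition used as the second model differs by propagating each spatial-frequency component with its exact spherical-wave phase rather than the quadratic Fresnel phase.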
Jacob, D; Palacios, J J
2011-01-28
We study the performance of two different electrode models in quantum transport calculations based on density functional theory: parametrized Bethe lattices and quasi-one-dimensional wires or nanowires. A detailed account of implementation details in both the cases is given. From the systematic study of nanocontacts made of representative metallic elements, we can conclude that the parametrized electrode models represent an excellent compromise between computational cost and electronic structure definition as long as the aim is to compare with experiments where the precise atomic structure of the electrodes is not relevant or defined with precision. The results obtained using parametrized Bethe lattices are essentially similar to the ones obtained with quasi-one-dimensional electrodes for large enough cross-sections of these, adding a natural smearing to the transmission curves that mimics the true nature of polycrystalline electrodes. The latter are more demanding from the computational point of view, but present the advantage of expanding the range of applicability of transport calculations to situations where the electrodes have a well-defined atomic structure, as is the case for carbon nanotubes, graphene nanoribbons, or semiconducting nanowires. All the analysis is done with the help of codes developed by the authors which can be found in the quantum transport toolbox ALACANT and are publicly available.
Directory of Open Access Journals (Sweden)
V. A. Gribkov
2015-01-01
Full Text Available We consider a multilink pendulum system consisting of six physical pendulums. One pendulum (the carrier) has inertia parameters which significantly exceed those of the remaining (carried) ones placed on the carrier. In addition to the system under analysis, the paper presents, in particular, a design scheme for a two-stage liquid-fuel rocket using pendulums as analogues of sloshing fuel. Pendulum models also find application in problems of stabilization of space tether systems. The objective of the study is to determine the dynamic characteristics of the said six-member pendulum system, as well as to identify specific dynamic properties inherent in objects of this kind. The dynamic characteristics of the system are determined by calculation. A physical model of the pendulum allowed us to compare the calculated and experimental results. To conduct the frequency tests of the pendulum model, three pilot units were created. The first two units turned out to be inappropriate for fulfilling the experimental tasks for various reasons. The third unit enabled us to obtain the desired experimental results. The "calculation-experiment" discrepancy in the natural frequencies of the pendulum model was less than 5% for the majority of frequencies. We analyzed the dynamic features of multilink pendulum systems "carried by the carrier unit links". The analysis results are applicable to the above-noted classes of rocket and space technology objects.
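For a serial chain of pendulums, natural frequencies of the kind measured here follow from the linearized generalized eigenproblem K q = ω² M q. The sketch below uses the textbook point-mass chain, not the authors' six-pendulum model with a heavy carrier:

```python
import numpy as np

def chain_frequencies(masses, lengths, g=9.81):
    """Natural angular frequencies [rad/s] of a serial chain of point-mass
    pendulums, linearized about the hanging equilibrium.
    M[i, j] = (mass hanging at or below link max(i, j)) * l_i * l_j,
    K = diag((mass hanging at or below link i) * g * l_i)."""
    m = np.asarray(masses, float)
    l = np.asarray(lengths, float)
    n = len(m)
    tail = np.array([m[k:].sum() for k in range(n)])  # mass from link k down
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = tail[max(i, j)] * l[i] * l[j]
    K = np.diag(tail * g * l)
    w2 = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
    return np.sqrt(w2)

# Check: equal-mass, equal-length double pendulum has the classic result
# omega^2 = (g/l) * (2 -/+ sqrt(2)).
print(chain_frequencies([1.0, 1.0], [1.0, 1.0], g=1.0))
```

The same eigenproblem structure carries over to physical (distributed-mass) pendulums once the mass matrix includes the link inertias.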
SINBAD: Shielding integral benchmark archive and database
International Nuclear Information System (INIS)
Hunter, H.T.; Ingersoll, D.T.; Roussin, R.W.
1996-01-01
SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity
International Nuclear Information System (INIS)
Carrasco, P.; Jornet, N.; Duch, M.A.; Weber, L.; Ginjaume, M.; Eudaldo, T.; Jurado, D.; Ruiz, A.; Ribas, M.
2004-01-01
An extensive set of benchmark measurements of PDDs and beam profiles was performed in a heterogeneous layer phantom, including a lung-equivalent heterogeneity, by means of several detectors, and compared against the dose values predicted by different calculation algorithms in two treatment planning systems. PDDs were measured with TLDs, plane-parallel and cylindrical ionization chambers, and beam profiles with films. Additionally, Monte Carlo simulations by means of the PENELOPE code were performed. Four different field sizes (10×10, 5×5, 2×2, and 1×1 cm²) and two lung-equivalent materials (CIRS, ρew = 0.195, and St. Bartholomew Hospital, London, ρew = 0.244-0.322) were studied. The performance of four correction-based algorithms and one based on convolution-superposition was analyzed. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system, and the TMS Pencil Beam from the Helax-TMS (Nucletron) treatment planning system. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. The only studied calculation methods that correlated successfully with the measured values, with a 2% average inside all media, were the Collapsed Cone and the Monte Carlo simulation. The biggest difference between the predicted and the delivered dose on the beam axis was found for the EqTAR algorithm inside the CIRS lung-equivalent material in a 2×2 cm² 18 MV x-ray beam. In these conditions, the average and maximum differences against the TLD measurements were 32% and 39%, respectively. In the water-equivalent part of the phantom every algorithm correctly predicted the dose (within 2%) everywhere except very close to the interfaces, where differences up to 24% were found for 2×2 cm² 18 MV photon beams. Consistent values were found between the reference detector (ionization chamber in water and TLD in lung) and Monte Carlo simulations, yielding minimal differences (0
International Nuclear Information System (INIS)
Carrasco, P.; Jornet, N.; Duch, M. A.; Panettieri, V.; Weber, L.; Eudaldo, T.; Ginjaume, M.; Ribas, M.
2007-01-01
To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane-parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10×10, 5×5, and 2×2 cm²) were studied in two phantom configurations and a bone-equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2×2 cm² field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV, where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity by 3%-5%. The rebuildup at the bone-water interface and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values within
Carrasco, P; Jornet, N; Duch, M A; Panettieri, V; Weber, L; Eudaldo, T; Ginjaume, M; Ribas, M
2007-08-01
To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane-parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10 × 10, 5 × 5, and 2 × 2 cm²) were studied in two phantom configurations and a bone-equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2 × 2 cm² field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV, where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity by 3%-5%. The rebuildup at the bone-water interface and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values
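The RMS figures quoted in these abstracts are root-mean-square values of point-by-point percentage deviations of an algorithm's dose curve from the Monte Carlo reference. A minimal sketch of that metric (the dose values are invented, not study data):

```python
import numpy as np

def rms_percent_diff(algo_dose, ref_dose):
    """Root-mean-square of the point-by-point percentage deviation of an
    algorithm's depth-dose curve from a reference (e.g., Monte Carlo)."""
    algo = np.asarray(algo_dose, float)
    ref = np.asarray(ref_dose, float)
    pct = 100.0 * (algo - ref) / ref
    return float(np.sqrt(np.mean(pct ** 2)))

# Illustrative depth-dose samples only (not data from the study):
ref  = np.array([100.0, 95.0, 80.0, 60.0])
algo = np.array([ 98.0, 94.0, 76.0, 57.0])
print(round(rms_percent_diff(algo, ref), 2))  # 3.71
```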
Investigations of the BSS-6 problem from the ANL benchmark problem book
International Nuclear Information System (INIS)
Babanakov, D.M.; Suslov, I.R.
1996-01-01
Results of extended numerical investigations of solutions to the BSS-6 problems from the ANL Benchmark Problem Book are presented. The influence of the space discretization error is evaluated for different space finite-difference schemes and for all of the BSS-6 problems; asymptotical (mesh size independent) solutions to the problems are obtained. On the basis of an analytical solution technique, a comparison analysis of time calculational schemes used in the BSS-6 problems is carried out. A modification of the Newton method for ill-conditioned systems of nonlinear algebraic equations arising within the framework of the analytical solution technique is outlined. (author)
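As an illustration of how ill-conditioned Newton systems are commonly handled, the sketch below uses a generic Tikhonov-damped (Levenberg-style) Newton step; it is not the specific modification outlined by the authors:

```python
import numpy as np

def damped_newton(f, jac, x0, mu=1e-6, tol=1e-10, itmax=50):
    """Newton iteration for f(x) = 0 with Tikhonov-style damping of the
    normal equations, a common remedy when the Jacobian J is
    ill-conditioned.  mu regularizes J^T J; mu = 0 recovers Gauss-Newton."""
    x = np.asarray(x0, float)
    for _ in range(itmax):
        r = f(x)
        if np.linalg.norm(r) < tol:
            break
        J = jac(x)
        dx = np.linalg.solve(J.T @ J + mu * np.eye(len(x)), -J.T @ r)
        x = x + dx
    return x

# Toy system: x0^2 + x1^2 = 1, x0 = x1  ->  root (1/sqrt(2), 1/sqrt(2)).
def f(x):
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])

def jac(x):
    return np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])

print(damped_newton(f, jac, [1.0, 0.5]))
```

In practice mu is often adapted per iteration: raised when a step fails to reduce the residual, lowered as convergence sets in.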
Jet Substructure at the Tevatron and LHC: New results, new tools, new benchmarks
Altheimer, A; Asquith, L; Brooijmans, G; Butterworth, J; Campanelli, M; Chapleau, B; Cholakian, A E; Chou, J P; Dasgupta, M; Davison, A; Dolen, J; Ellis, S D; Essig, R; Fan, J J; Field, R; Fregoso, A; Gallicchio, J; Gershtein, Y; Gomes, A; Haas, A; Halkiadakis, E; Halyo, V; Hoeche, S; Hook, A; Hornig, A; Huang, P; Izaguirre, E; Jankowiak, M; Kribs, G; Krohn, D; Larkoski, A J; Lath, A; Lee, C; Lee, S J; Loch, P; Maksimovic, P; Martinez, M; Miller, D W; Plehn, T; Prokofiev, K; Rahmat, R; Rappoccio, S; Safonov, A; Salam, G P; Schumann, S; Schwartz, M D; Schwartzman, A; Seymour, M; Shao, J; Sinervo, P; Son, M; Soper, D E; Spannowsky, M; Stewart, I W; Strassler, M; Strauss, E; Takeuchi, M; Thaler, J; Thomas, S; Tweedie, B; Vasquez Sierra, R; Vermilion, C K; Villaplana, M; Vos, M; Wacker, J; Walker, D; Walsh, J R; Wang, L-T; Wilbur, S; Yavin, I; Zhu, W
2012-01-01
In this report we review recent theoretical progress and the latest experimental results in jet substructure from the Tevatron and the LHC. We review the status of and outlook for calculation and simulation tools for studying jet substructure. Following up on the report of the Boost 2010 workshop, we present a new set of benchmark comparisons of substructure techniques, focusing on the set of variables and grooming methods that are collectively known as "top taggers". To facilitate further exploration, we have attempted to collect, harmonise, and publish software implementations of these techniques.
Status on benchmark testing of CENDL-3
Liu Ping
2002-01-01
CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has been finished and recently distributed for benchmark analysis. The processing was carried out using the NJOY nuclear data processing code system. The calculations and analysis of benchmarks were done with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A. The calculated results were compared with the experimental results and with calculations based on ENDF/B-VI. In most thermal and fast uranium criticality benchmarks, the k_eff values calculated with CENDL-3 were in good agreement with experimental results. In the plutonium fast cores, the k_eff values were improved significantly with CENDL-3. This is due to the reevaluation of the fission spectrum and elastic angular distributions of ²³⁹Pu and ²⁴⁰Pu. CENDL-3 underestimated the k_eff values compared with other evaluated data libraries for most spherical or cylindrical assemblies of plutonium or uranium with beryllium
Comparison between theoretical calculations and experiment simulating a loss of coolant accident
International Nuclear Information System (INIS)
Sharma, R.; Sri Ram, K.; Sengupta, A.; Kaul, P.N.
1979-01-01
To simulate a loss of coolant accident in a PWR, an experimental loop was set up and the maximum critical mass flow rate G_c was measured. The experimental loop is described in detail. Experimental data were obtained for four different pressures, namely 3.52, 4.40, 6.10, and 7.80 kg/cm². Due to low heat fluxes, temperature transients were not observed and hence temperature was not measured. The measured values of G_c were compared with the theoretical ones as calculated by Moody's, Fauske's, and Henry's models. Poor agreement shows the need for improvement in both the theoretical models and the experimental techniques. (M.G.B.)
Hinuma, Yoyo; Hayashi, Hiroyuki; Kumagai, Yu; Tanaka, Isao; Oba, Fumiyasu
2017-09-01
High-throughput first-principles calculations based on density functional theory (DFT) are a powerful tool in data-oriented materials research. The choice of approximation to the exchange-correlation functional is crucial, as it strongly affects the accuracy of DFT calculations. This study compares the performance of seven approximations, six of which are based on the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) with and without Hubbard U and van der Waals corrections (PBE, PBE+U, PBE-D3, PBE-D3+U, PBEsol, and PBEsol+U), and the strongly constrained and appropriately normed (SCAN) meta-GGA, on the energetics and crystal structure of elementary substances and binary oxides. For the latter, only those with closed-shell electronic structures are considered, examples of which include Cu₂O, Ag₂O, MgO, ZnO, CdO, SnO, PbO, Al₂O₃, Ga₂O₃, In₂O₃, La₂O₃, Bi₂O₃, SiO₂, SnO₂, PbO₂, TiO₂, ZrO₂, HfO₂, V₂O₅, Nb₂O₅, Ta₂O₅, MoO₃, and WO₃. Prototype crystal structures are selected from the Inorganic Crystal Structure Database (ICSD), and cation substitution is used to make a set of existing and hypothetical oxides. Two indices are proposed to quantify the extent of lattice and internal-coordinate relaxation during a calculation. The former is based on the second invariant and determinant of the transformation matrix of basis vectors from before relaxation to after relaxation, and the latter is derived from shifts of the internal coordinates of atoms in the unit cell. PBE-D3, PBEsol, and SCAN reproduce experimental lattice parameters of elementary substances and oxides well, with few outliers. Notably, PBEsol and SCAN predict the lattice parameters of low-dimensional structures comparably well to PBE-D3, even though these two functionals do not explicitly treat van der Waals interactions. SCAN gives formation enthalpies and Gibbs free energies closest to experimental data, with mean errors (MEs) of 0.01 and -0.04 eV, respectively, and root
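The first relaxation index is built from the determinant and second invariant of the basis-vector transformation matrix. A sketch of those two ingredients (the paper's exact combination of them into an index is not reproduced here):

```python
import numpy as np

def basis_transformation(a_before, a_after):
    """Matrix T with A_after = T @ A_before (lattice basis vectors as
    rows), together with its determinant (volume ratio) and second
    invariant.  How the paper combines these into a single relaxation
    index is not reproduced; this only shows the raw quantities."""
    A0 = np.asarray(a_before, float)
    A1 = np.asarray(a_after, float)
    T = A1 @ np.linalg.inv(A0)
    det = np.linalg.det(T)
    i2 = 0.5 * (np.trace(T) ** 2 - np.trace(T @ T))  # second invariant
    return T, det, i2

# An unrelaxed cell gives T = I, det = 1, second invariant = 3:
_, det, i2 = basis_transformation(np.eye(3), np.eye(3))
print(det, i2)  # 1.0 3.0
```

Deviations of det from 1 and of the second invariant from 3 then measure how far the lattice moved during relaxation.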
Comparison of optics and electronics for the calculation of matrix-vector products
Gary, C. K.
1992-01-01
Optical processors are attractive because of their ability to perform massively parallel operations such as matrix vector products. The inherently analog nature of optical calculations requires that optical processors be based on analog computations. While the speed at which such analog operations can be performed as well as the natural parallelism of optical systems are great advantages of optical processors, the analog representation of values severely limits the achievable accuracy. Furthermore, optical processors are limited by the need to convert information to and from the intensity of light. Digitization can be used to increase the accuracy of optical matrix-vector processors, but causes a severe reduction in speed. This paper compares the throughput and power requirements of optical and electronic processors, showing that optical matrix-vector processors can provide a greater number of operations/Watt than conventional electronics.
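The accuracy limit of analog representation can be illustrated by quantizing the operands of a matrix-vector product to a fixed number of levels, loosely mimicking an intensity-coded optical processor (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(A, x, levels=256):
    """Matrix-vector product with every stored value quantized to a fixed
    number of levels, mimicking the limited dynamic range of an analog
    (e.g., light-intensity) representation.  Purely illustrative."""
    def quantize(v):
        lo, hi = v.min(), v.max()
        step = (hi - lo) / (levels - 1)
        return lo + np.round((v - lo) / step) * step
    return quantize(A) @ quantize(x)

A = rng.random((64, 64))
x = rng.random(64)
exact = A @ x
for levels in (16, 256):
    err = np.abs(analog_matvec(A, x, levels) - exact).max()
    print(levels, err)
```

Coarser quantization (fewer distinguishable analog levels) produces a larger error in the product, which is the accuracy ceiling the paper contrasts with digital electronics.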
3D field calculation of the GEM prototype magnet and comparison with measurements
Energy Technology Data Exchange (ETDEWEB)
Lari, R.J.
1983-10-28
The proposed 4 GeV Electron Microtron (GEM) is designed to fill the existing buildings left vacant by the demise of the Zero Gradient Synchrotron (ZGS) accelerator. One of the six large dipole magnets is shown, as well as the first 10 electron orbits. A 3-orbit prototype magnet has been built. The stepped edge of the magnet is to keep the beam exiting perpendicular to the pole. The end guards that wrap around the main coils are joined together by the 3 shield plates. The auxiliary coils are needed to keep the end guards and shield plates from saturating. A 0.3 cm Purcell filter air gap exists between the pole and the yoke. Can anyone question this being a truly three-dimensional magnetostatic problem? The computer program TOSCA, developed at the Rutherford Appleton Laboratory by the Computing Applications Group, was used to calculate this magnet, and the results have been compared with measurements.
Comparison of Model Calculations of Biological Damage from Exposure to Heavy Ions with Measurements
Kim, Myung-Hee Y.; Hada, Megumi; Cucinotta, Francis A.; Wu, Honglu
2014-01-01
The space environment consists of a varying field of radiation particles including high-energy ions, with spacecraft shielding material providing the major protection to astronauts from harmful exposure. Unlike low-LET gamma or X rays, the presence of shielding does not always reduce the radiation risks for energetic charged-particle exposure. Dose delivered by the charged particle increases sharply at the Bragg peak. However, the Bragg curve does not necessarily represent the biological damage along the particle path, since biological effects are influenced by the track structures of both primary and secondary particles. Therefore, the "biological Bragg curve" is dependent on the energy and the type of the primary particle and may vary for different biological end points. Measurements of the induction of micronuclei (MN) have been made across the Bragg curve in human fibroblasts exposed to energetic silicon and iron ions in vitro at two different energies, 300 MeV/nucleon and 1 GeV/nucleon. Although the data did not reveal an increased yield of MN at the location of the Bragg peak, increased inhibition of cell progression, which is related to cell death, was found at the Bragg peak location. These results are compared to calculations of biological damage using a stochastic Monte Carlo track structure model, the Galactic Cosmic Ray Event-based Risk Model (GERM) code (Cucinotta et al., 2011). The GERM code estimates the basic physical properties along the passage of heavy ions in tissue and shielding materials, by which the experimental setup can be interpreted. The code can also be used to describe the biophysical events of interest in radiobiology, cancer therapy, and space exploration. The calculation has shown that the severely damaged cells at the Bragg peak are more likely to go through reproductive death, the so-called "overkill".
A new calculation of atmospheric neutrino flux: the FLUKA approach
International Nuclear Information System (INIS)
Battistoni, G.; Bloise, C.; Cavalli, D.; Ferrari, A.; Montaruli, T.; Rancati, T.; Resconi, S.; Ronga, F.; Sala, P.R.
1999-01-01
Preliminary results from a full 3-D calculation of atmospheric neutrino fluxes using the FLUKA interaction model are presented and compared to previously existing calculations. This effort is motivated mainly by the 3-D capability and the satisfactory degree of accuracy of the hadron-nucleus models embedded in the FLUKA code. Here we show examples of benchmarking tests of the model against cosmic ray experiment results. A comparison of our calculation of the atmospheric neutrino flux with that of the Bartol group, for E_ν > 1 GeV, is presented
International Nuclear Information System (INIS)
Liew, S.L.; Ku, L.P.; Kolibal, J.G.
1985-10-01
Realistic calculations of the neutron and γ-ray fluences in the TFTR diagnostic basement have been carried out with three-dimensional Monte Carlo models. Comparisons with measurements show that the results are well within the experimental uncertainties
McCarthy, Martin; Gavanski, Gregory M; Paton, Katherine E; Holland, Simon P
2011-05-01
To evaluate and compare published methods of intraocular lens (IOL) power calculation after myopic laser refractive surgery in a large, multi-surgeon study. Retrospective case series. A total of 173 eyes of 117 patients who had uneventful LASIK (89) or photorefractive keratectomy (84) for myopia and subsequent cataract surgery. Data were collected from primary sources in patient charts. The Clinical History Method (vertex corrected to the corneal plane), the Aramberri Double-K, the Latkany Flat-K, the Feiz and Mannis, the R-Factor, the Corneal Bypass, the Masket (2006), the Haigis-L, and the Shammas.cd postrefractive adjustment methods were evaluated in conjunction with third- and fourth-generation optical vergence formulas, as appropriate. Intraocular lens power required for emmetropia was back-calculated using stable post-cataract surgery manifest refraction and implanted IOL power, and then formula accuracy was compared. Prediction error arithmetic mean ± standard deviation (SD), range (minimum and maximum), and percent within 0 to -1.0 diopters (D), ±0.5 D, ±1.0 D, and ±2.0 D relative to target refraction. The top 5 corneal power adjustment techniques and formula combinations in terms of mean prediction errors, standard deviations, and minimizing hyperopic "refractive surprises" were the Masket with the Hoffer Q formula, the Shammas.cd with the Shammas-PL formula, the Haigis-L, the Clinical History Method with the Hoffer Q, and the Latkany Flat-K with the SRK/T with mean arithmetic prediction errors and standard deviations of -0.18±0.87 D, -0.10±1.02 D, -0.26±1.13 D, -0.27±1.04 D, and -0.37±0.91 D, respectively. By using these methods, 70% to 85% of eyes could achieve visual outcomes within 1.0 D of target refraction. The Shammas and the Haigis-L methods have the advantage of not requiring potentially inaccurate historical information. Copyright © 2011 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
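The accuracy metrics reported above (arithmetic mean ± SD of the prediction error and the percentage of eyes within a given diopter band of target) are straightforward to compute once per-eye errors are known. A minimal sketch, using made-up refraction errors rather than the study's data:

```python
# Hypothetical sketch: summary statistics for IOL-power prediction errors,
# of the kind reported in post-refractive-surgery formula comparisons.
# Prediction error = achieved postoperative refraction - target (diopters).

def prediction_error_stats(errors, tol=1.0):
    """Return (mean, sample SD, fraction within +/- tol D)."""
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / (n - 1)  # sample variance
    sd = var ** 0.5
    within = sum(1 for e in errors if abs(e) <= tol) / n
    return mean, sd, within

# Illustrative (invented) errors for one formula, in diopters:
errors = [-0.18, 0.42, -0.95, 0.10, -1.30, 0.25, -0.50, 0.05]
mean, sd, frac = prediction_error_stats(errors)
```

Back-calculating the emmetropic IOL power itself requires a vergence formula (Hoffer Q, SRK/T, etc.) and is not reproduced here.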
Comparison of Model Calculations of Biological Damage from Exposure to Heavy Ions with Measurements
Kim, Myung-Hee Y.; Wu, Honglu; Hada, Megumi; Cucinotta, Francis
The space environment consists of a varying field of radiation particles including high-energy ions, with spacecraft shielding material providing the major protection to astronauts from harmful exposure. Unlike low-LET gamma or X rays, the presence of shielding does not always reduce the radiation risks for energetic charged-particle exposure. Dose delivered by the charged particle increases sharply at the Bragg peak. However, the Bragg curve does not necessarily represent the biological damage along the particle path, since biological effects are influenced by the track structures of both primary and secondary particles. Therefore, the "biological Bragg curve" is dependent on the energy and the type of the primary particle and may vary for different biological end points. Measurements of the induction of micronuclei (MN) have been made across the Bragg curve in human fibroblasts exposed to energetic silicon and iron ions in vitro at two different energies, 300 MeV/nucleon and 1 GeV/nucleon. Although the data did not reveal an increased yield of MN at the location of the Bragg peak, increased inhibition of cell progression, which is related to cell death, was found at the Bragg peak location. These results are compared to calculations of biological damage using a stochastic Monte-Carlo track structure model, the Galactic Cosmic Ray Event-based Risk Model (GERM) code (Cucinotta et al., 2011). The GERM code estimates the basic physical properties along the passage of heavy ions in tissue and shielding materials, by which the experimental set-up can be interpreted. The code can also be used to describe the biophysical events of interest in radiobiology, cancer therapy, and space exploration. The calculation has shown that the severely damaged cells at the Bragg peak are more likely to go through reproductive death, the so-called "overkill". F. A. Cucinotta, I. Plante, A. L. Ponomarev, and M. Y. Kim, Nuclear Interactions in Heavy Ion Transport and Event
Energy Technology Data Exchange (ETDEWEB)
Wu, Vincent W.C., E-mail: htvinwu@polyu.edu.hk [Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong SAR (Hong Kong); Tse, Teddy K.H.; Ho, Cola L.M.; Yeung, Eric C.Y. [Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong SAR (Hong Kong)
2013-07-01
Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) of the Eclipse treatment planning system and multigrid superposition (MGS) of the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), which was a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6%±2.1) were significantly lower than those of AAA (3.7%±2.5) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
International Nuclear Information System (INIS)
Wu, Vincent W.C.; Tse, Teddy K.H.; Ho, Cola L.M.; Yeung, Eric C.Y.
2013-01-01
Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) of the Eclipse treatment planning system and multigrid superposition (MGS) of the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), which was a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6%±2.1) were significantly lower than those of AAA (3.7%±2.5) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time
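The MAPE figure of merit used in this study is the mean of the absolute percentage deviations of an algorithm's point doses from the Monte Carlo reference. A minimal sketch with invented dose values (not from the study):

```python
# Sketch of the MAPE comparison described above: mean absolute percentage
# deviation of an algorithm's reference-point doses from the Monte Carlo
# gold standard. All numbers are illustrative.

def mape(reference, test_values):
    """Mean absolute percentage error of test_values relative to reference."""
    devs = [abs(t - r) / r * 100.0 for r, t in zip(reference, test_values)]
    return sum(devs) / len(devs)

mc  = [2.00, 1.80, 0.90, 1.50]   # MC reference doses (Gy), hypothetical
aaa = [2.06, 1.86, 0.86, 1.56]   # model-based algorithm doses, hypothetical
result = mape(mc, aaa)
```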
Development of common user data model for APOLLO3 and MARBLE and application to benchmark problems
International Nuclear Information System (INIS)
Yokoyama, Kenji
2009-07-01
A Common User Data Model, CUDM, has been developed for the purpose of benchmark calculations between the APOLLO3 and MARBLE code systems. The current version of CUDM was designed for core-calculation benchmark problems with three-dimensional Cartesian (3-D XYZ) geometry. CUDM is able to manage all input/output data such as 3-D XYZ geometry, effective macroscopic cross sections, effective multiplication factor and neutron flux. In addition, visualization tools for geometry and neutron flux were included. CUDM was designed with object-oriented techniques and implemented in the Python programming language. Based on CUDM, a prototype system for benchmark calculations, CUDM-benchmark, was also developed. The CUDM-benchmark supports input/output data conversion for the IDT solver in APOLLO3, and the TRITAC and SNT solvers in MARBLE. In order to evaluate the pertinence of CUDM, the CUDM-benchmark was applied to benchmark problems proposed by T. Takeda, G. Chiba and I. Zmijarevic. It was verified that the CUDM-benchmark successfully reproduced the results calculated with reference input data files, and provided consistent results among all the solvers by using a common input data set defined by CUDM. In addition, a detailed benchmark calculation for the Chiba benchmark was performed using the CUDM-benchmark. The Chiba benchmark is a neutron transport benchmark problem for a fast criticality assembly without homogenization. This benchmark problem consists of 4 core configurations which have different sodium void regions, and each core configuration is defined by more than 5,000 fuel/material cells. In this application, it was found that the results of the IDT and SNT solvers agreed well with the reference results of a Monte-Carlo code. In addition, model effects such as the quadrature set effect, Sn order effect and mesh size effect were systematically evaluated and summarized in this report. (author)
Directory of Open Access Journals (Sweden)
Faruk Kaya
2016-05-01
Full Text Available AIM: To compare the intraocular lens (IOL) power calculations and refractive outcomes obtained with a new optical biometer and a standard ultrasonic biometer in phacoemulsification surgery. METHODS: Thirty-seven eyes of 37 cataract patients who underwent phacoemulsification with IOL implantation were included in this prospective comparative study. The same operator performed biometer measurements in eyes with cataract using a new optical biometer (Aladdin) and a standard ultrasonic biometer (Sonomed AB 5500). Biometric parameters, namely axial length (AL), keratometric (K) readings, anterior chamber depth (ACD) and IOL power, obtained by the two devices were recorded. Postoperative actual refractive errors and the errors predicted by the two devices according to the SRK/T formula were analyzed. The mean estimation error (EE), mean absolute estimation error (AEE) and the biometric parameters obtained by the two biometers were compared. RESULTS: The AL measured by the Aladdin (23.45±0.73 mm) was significantly longer than the AL by the ultrasonic biometer (23.2±0.75 mm) (P=0.01). The mean EE and AEE values obtained by the Aladdin were significantly smaller than the values by the ultrasonic biometer (P=0.0006 and 0.03, respectively). A higher percentage of eyes within ±0.5 and ±1.00 D of target refraction was also found using the Aladdin (67% and 97%). CONCLUSION: The Aladdin optical biometer showed better accuracy and yielded better refractive outcomes compared with the ultrasonic biometer.
International Nuclear Information System (INIS)
Almeida, Laila G.; Amaral, Leonardo L.; Oliveira, Harley F.; Maia, Ana F.
2012-01-01
The virtual wedge is a useful tool in radiation treatment planning since it has a series of advantages over the hard wedge. Quality control tests ensure the correct performance of plans produced by treatment planning systems (TPS). This study aimed to compare doses calculated by the TPS with doses measured by an ionization chamber (IC) and an ionization-chamber array in 6 MV virtual-wedge photon beams. Measurements were carried out on a Primus linear accelerator with a solid-water phantom and the dosimeter positioned at 10 cm depth with the gantry at 0°, for several field sizes and virtual-wedge angles. Measurements on the central axis used an IC as dosimeter, and off-axis measurements used the IC array. The simulation in CMS-XiO used CT images of the phantom in the same configuration as the irradiation. The maximum and minimum percentage differences between the doses computed by the TPS and the ionization-chamber measurements on the central axis were 1.43% and -0.10%, respectively, with an average percentage difference of 0.08% and a confidence limit of Δ=1.72%. In the off-axis region, the average percentage difference was 0.04%, with a maximum of 1.9%, a minimum of 0% and a confidence limit of Δ=1.91%. All dose percentage differences were below 2% and the confidence limits were below 3%, in accordance with the recommendations of the Technical Report Series - TRS-430. (author)
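The confidence limit Δ quoted in such TPS quality-control studies can be reproduced assuming the common TRS-430-style convention Δ = |mean deviation| + 1.96·SD of the percentage differences. This convention is an assumption here; the report's exact definition may differ. A sketch with invented data:

```python
# Hedged sketch of a TRS-430-style confidence limit, assuming the common
# definition Delta = |mean| + 1.96 * sample SD of the percentage dose
# differences between TPS and measurement. Values are illustrative.

def confidence_limit(diffs_percent):
    n = len(diffs_percent)
    mean = sum(diffs_percent) / n
    sd = (sum((d - mean) ** 2 for d in diffs_percent) / (n - 1)) ** 0.5
    return abs(mean) + 1.96 * sd

# Illustrative percentage differences between TPS and measurement:
diffs = [0.08, 1.43, -0.10, 0.50, -0.25]
delta = confidence_limit(diffs)
```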
Energy Technology Data Exchange (ETDEWEB)
Soederberg, M.
1990-01-01
The GAROS system for general analysis of rotating aeroelastic structures is used to analyse the behaviour of the WTS 3 Maglarp wind turbine, situated in Maglarp, Sweden. The GAROS method is based on a branch-mode technique and modal reduction. Stability analyses of the total tower-rotor system as well as numerical time-integration analyses are performed within the GAROS system. The FE-model made by Anders Henoch, AIMS, for FFA is used as a base for this investigation. Some modifications concerning the center of gravity of the rotor and the teeter bearing/hinge have been made in the finite element model. In this report, simulated values of loads in the blades are compared to measured data. The unstable yaw behaviour of the WTS 3 was also found in the behaviour of the model. The amount of yaw-angle rotation in the model was also found to agree with measured values when induced velocities were accounted for in the aerodynamic force calculations. The analyses in general show good agreement between simulated and measured values. This paper was presented at the European Wind Energy Conference in Glasgow, 10-13 July, 1989.
A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation.
Wang, Hongxun; Zhang, Weifang; Sun, Fuqiang; Zhang, Wei
2017-05-18
The relationships between the fatigue crack growth rate (da/dN) and stress intensity factor range (ΔK) are not always linear, even in the Paris region. The stress-ratio effects on fatigue crack growth rate are diverse in different materials. However, most existing fatigue crack growth models cannot handle these nonlinearities appropriately. The machine learning method provides a flexible approach to the modeling of fatigue crack growth because of its excellent nonlinear approximation and multivariable learning ability. In this paper, a fatigue crack growth calculation method is proposed based on three different machine learning algorithms (MLAs): extreme learning machine (ELM), radial basis function network (RBFN) and genetic-algorithm-optimized back-propagation network (GABP). The MLA-based method is validated using testing data of different materials. The three MLAs are compared with each other as well as with the classical two-parameter model (K* approach). The results show that the predictions of the MLAs are superior to those of the K* approach in accuracy and effectiveness, and the ELM-based algorithm shows overall the best agreement with the experimental data of the three MLAs, owing to its global optimization and extrapolation ability.
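For context, the simplest classical baseline such machine-learning models are measured against is a straight-line fit in log-log space, the Paris law da/dN = C·(ΔK)^m. A self-contained sketch on synthetic, noise-free data; the constants are illustrative and not taken from the cited experiments:

```python
import math

# Hedged illustration of the classical Paris-law baseline: fit
# da/dN = C * (dK)^m by least squares on log-transformed data.
# Data below are synthetic, generated from known constants.

def fit_paris(dK, dadN):
    """Least-squares fit of log(da/dN) = log C + m*log(dK); returns (C, m)."""
    x = [math.log(k) for k in dK]
    y = [math.log(r) for r in dadN]
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    m = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / \
        sum((xi - xb) ** 2 for xi in x)
    C = math.exp(yb - m * xb)
    return C, m

# Synthetic data from C = 1e-11, m = 3 (units: MPa*sqrt(m), m/cycle):
dK = [10, 15, 20, 30, 40]
dadN = [1e-11 * k ** 3 for k in dK]
C, m = fit_paris(dK, dadN)
```

The nonlinearities and stress-ratio effects the abstract describes are exactly what this two-parameter form cannot capture, which motivates the MLA approach.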
Gribovszki, Zoltán
2017-11-01
Methods that use diurnal groundwater-level fluctuations are commonly used for shallow water-table environments to estimate evapotranspiration (ET) and recharge. The key element needed to obtain reliable estimates is the specific yield (Sy), a soil-water storage parameter that depends on unsaturated soil-moisture and water-table fluxes, among others. Soil-moisture profile measurement down to the water table, along with water-table-depth measurements, can provide a good opportunity to calculate Sy values even on a sub-daily scale. These values were compared with Sy estimates derived by traditional techniques, and it was found that slug-test-based Sy values gave the most similar results in a sandy soil environment. Therefore, slug-test methods, which are relatively cheap and require little time, were most suited to estimate Sy using diurnal fluctuations. The reason for this is that the timeframe of the slug-test measurement is very similar to the dynamic of the diurnal signal. The dynamic characteristic of Sy was also analyzed on a sub-daily scale (depending mostly on the speed of drainage from the soil profile) and a remarkable difference was found in Sy with respect to the rate of change of the water table. When comparing constant and sub-daily (dynamic) Sy values for ET estimation, the sub-daily Sy application yielded higher correlation, but only a slightly smaller deviation from the control ET method, compared with the usage of constant Sy.
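The diurnal-fluctuation methods that these Sy estimates feed into are usually variants of the White (1932) formula, ET = Sy·(24·r + s), where r is the night-time water-table recovery rate and s the net daily decline; sign conventions vary between authors, so this is a hedged sketch with invented values:

```python
# Sketch of the classical White (1932) estimate underlying diurnal
# water-table-fluctuation ET methods: ET = Sy * (24*r + s), with r the
# hourly night-time rise rate and s the net daily water-table fall
# (counted positive). All values below are hypothetical.

def white_et(sy, night_rise_rate, net_daily_change):
    """Daily ET in the same length unit as the water-level inputs."""
    return sy * (24.0 * night_rise_rate + net_daily_change)

# e.g. Sy = 0.05, night rise 0.4 mm/h, net daily fall 2 mm:
et = white_et(0.05, 0.4, 2.0)
```

The abstract's point is that the result scales linearly with Sy, so the choice of Sy estimation technique (slug test vs. others) propagates directly into the ET estimate.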
Comparison of organ dosimetry methods and effective dose calculation methods for paediatric CT.
Brady, Z; Cain, T M; Johnston, P N
2012-06-01
Computed tomography (CT) is the single biggest ionising radiation risk from anthropogenic exposure. Reducing unnecessary carcinogenic risks from this source requires the determination of organ and tissue absorbed doses to estimate detrimental stochastic effects. In addition, effective dose can be used to assess comparative risk between exposure situations and facilitate dose reduction through optimisation. Children are at the highest risk from radiation induced carcinogenesis and therefore dosimetry for paediatric CT recipients is essential in addressing the ionising radiation health risks of CT scanning. However, there is no well-defined method in the clinical environment for routinely and reliably performing paediatric CT organ dosimetry and there are numerous methods utilised for estimating paediatric CT effective dose. Therefore, in this study, eleven computational methods for organ dosimetry and/or effective dose calculation were investigated and compared with absorbed doses measured using thermoluminescent dosemeters placed in a physical anthropomorphic phantom representing a 10 year old child. Three common clinical paediatric CT protocols including brain, chest and abdomen/pelvis examinations were evaluated. Overall, computed absorbed doses to organs and tissues fully and directly irradiated demonstrated better agreement (within approximately 50 %) with the measured absorbed doses than absorbed doses to distributed organs or to those located on the periphery of the scan volume, which showed up to a 15-fold dose variation. The disparities predominantly arose from differences in the phantoms used. While the ability to estimate CT dose is essential for risk assessment and radiation protection, identifying a simple, practical dosimetry method remains challenging.
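Once organ equivalent doses are available, effective dose is the tissue-weighted sum E = Σ_T w_T·H_T. A minimal sketch using a few ICRP 103 tissue weighting factors and hypothetical organ doses; this is a partial sum only, since a full calculation covers all listed tissues plus the remainder:

```python
# Minimal sketch of the effective-dose calculation referred to above:
# E = sum over tissues of w_T * H_T. A few ICRP 103 tissue weighting
# factors are used; organ doses are hypothetical.

W_T = {"lung": 0.12, "stomach": 0.12, "breast": 0.12,
       "gonads": 0.08, "thyroid": 0.04, "liver": 0.04}

def effective_dose(organ_doses_mSv):
    """Weighted sum of equivalent doses over the tissues present (partial)."""
    return sum(W_T[t] * h for t, h in organ_doses_mSv.items())

doses = {"lung": 10.0, "stomach": 2.0, "thyroid": 5.0}  # mSv, illustrative
e = effective_dose(doses)
```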
A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations
Energy Technology Data Exchange (ETDEWEB)
Haeck, Wim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, Donald Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); White, Morgan Curtis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Saller, Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-12-12
Verification and validation of our solutions for calculating the neutron reactivity for nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g. the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers and the general conclusion is that when the problems are well-posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data that are not subject to this effect. The issue of self-shielding occurs because of the discretisation of data used by the deterministic solutions. Multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances in cross sections can occur that change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect that the average cross section in groups with strong resonances can be strongly affected as neutrons within that material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
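The self-shielding effect described above can be illustrated numerically: if the within-group flux is depressed roughly as 1/σ_t (the narrow-resonance approximation), the flux-weighted group cross section falls far below the naive average over the resonance. A toy sketch with invented pointwise data:

```python
# Toy illustration of multigroup self-shielding: the flux dips where the
# cross section peaks (NR approximation phi ~ 1/sigma_t), so the
# flux-weighted group-average cross section is far smaller than the
# unweighted average. Pointwise values are hypothetical.

sigma = [10, 10, 1000, 10, 10]      # barns; one strong resonance point
phi = [1.0 / s for s in sigma]      # NR-approximation flux weighting

unweighted = sum(sigma) / len(sigma)
weighted = sum(s * f for s, f in zip(sigma, phi)) / sum(phi)
# weighted << unweighted: the resonance is "self-shielded"
```

This is the discretisation effect that stresses deterministic solvers, while continuous-energy Monte Carlo codes avoid it entirely.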
International Nuclear Information System (INIS)
Franchi, M; Ricci, L
2014-01-01
The embedding of time series provides a valuable, and sometimes indispensable, tool for analyzing the dynamical properties of a chaotic system. To this purpose, the choice of the embedding dimension and lag is decisive. The scientific literature describes several methods for selecting the most appropriate parameter pairs. Unfortunately, no conclusive criterion to decide which method – and thus which embedding pair – is the best has so far been devised. A widely employed quantity to compare different methods is the maximum Lyapunov exponent (MLE) because, for chaotic systems that have explicit analytic representations, the MLE can be numerically evaluated independently of the embedding dimension and lag. Within this framework, we investigated the dependence of the calculated MLE on the embedding dimension and lag for three dynamical systems that are also widely used as reference systems, namely the Lorenz, Rössler and Mackey-Glass attractors. By also taking into account the statistical fluctuations of the calculated MLE, we propose a new method to assess which systems provide suitable test benches for the comparison of different embedding methods via MLE calculation. For example, we found that, despite its popularity in this scientific context, the Rössler attractor is not a reliable workbench to test the validity of an embedding method
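The reason analytically defined systems make good references is that their MLE can be computed directly from the derivative along an orbit, with no embedding at all. A sketch for the logistic map, which is not one of the three attractors studied above but is the simplest case with a known exact value (ln 2 at r = 4):

```python
import math

# Direct MLE computation for a map with an explicit analytic form:
# for the logistic map x -> r*x*(1-x), the MLE is the orbit average of
# log|f'(x)| = log|r*(1-2x)|. At r = 4 the exact value is ln 2.

def logistic_mle(r=4.0, x0=0.2, n=100000, burn=1000):
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        # tiny additive guard avoids log(0) if an iterate lands on 0.5
        acc += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)
        x = r * x * (1.0 - x)
    return acc / n

mle = logistic_mle()   # expected near ln 2 for r = 4
```

Embedding-based estimators (e.g. Rosenstein- or Wolf-type algorithms) are then judged by how well they recover such reference values from scalar time series.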
Yang, Zi-Yi; Tsai, Pi-En; Lee, Shao-Chun; Liu, Yen-Chiang; Chen, Chin-Cheng; Sato, Tatsuhiko; Sheu, Rong-Jiun
2017-09-01
The dose distributions from proton pencil beam scanning were calculated with FLUKA, GEANT4, MCNP, and PHITS in order to investigate their applicability to proton radiotherapy. The first case studied was the integrated depth dose curves (IDDCs) from a 100- and a 226-MeV proton pencil beam impinging on a water phantom. The calculated IDDCs agree with each other as long as each code employs 75 eV for the ionization potential of water. The second case considered a condition similar to the first, but with proton energies in a Gaussian distribution. The comparison to the measurement indicates that the inter-code differences may be due not only to different stopping powers but also to the nuclear physics models. How the physics parameter settings affect the computation time was also discussed. In the third case, the applicability of each code to pencil beam scanning was confirmed by delivering a uniform volumetric dose distribution based on the treatment plan; the results showed general agreement among the codes, the treatment plan, and the measurement, except for some deviations in the penumbra region. This study has demonstrated that the selected codes are all capable of performing dose calculations for therapeutic scanning proton beams with proper physics settings.
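As a rough cross-check on the two beam energies above, proton range in water follows the empirical Bragg-Kleeman rule R ≈ α·E^p. The constants below are commonly quoted fitted values, not numbers from the paper:

```python
# Back-of-envelope Bragg-Kleeman range check for protons in water:
# R = alpha * E**p, with alpha ~ 0.0022 cm/MeV^p and p ~ 1.77
# (approximate literature fits; an assumption, not from the study).

def csda_range_cm(energy_MeV, alpha=0.0022, p=1.77):
    return alpha * energy_MeV ** p

r100 = csda_range_cm(100.0)   # roughly 7-8 cm of water
r226 = csda_range_cm(226.0)   # roughly 30 cm of water
```

These ranges set where the Bragg peaks of the two studied beams sit, which is the region where the codes' stopping-power settings matter most.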
Directory of Open Access Journals (Sweden)
Yang Zi-Yi
2017-01-01
Full Text Available The dose distributions from proton pencil beam scanning were calculated with FLUKA, GEANT4, MCNP, and PHITS in order to investigate their applicability to proton radiotherapy. The first case studied was the integrated depth dose curves (IDDCs) from a 100- and a 226-MeV proton pencil beam impinging on a water phantom. The calculated IDDCs agree with each other as long as each code employs 75 eV for the ionization potential of water. The second case considered a condition similar to the first, but with proton energies in a Gaussian distribution. The comparison to the measurement indicates that the inter-code differences may be due not only to different stopping powers but also to the nuclear physics models. How the physics parameter settings affect the computation time was also discussed. In the third case, the applicability of each code to pencil beam scanning was confirmed by delivering a uniform volumetric dose distribution based on the treatment plan; the results showed general agreement among the codes, the treatment plan, and the measurement, except for some deviations in the penumbra region. This study has demonstrated that the selected codes are all capable of performing dose calculations for therapeutic scanning proton beams with proper physics settings.
Directory of Open Access Journals (Sweden)
Samin Poudel
2017-11-01
Full Text Available The refractive index (RI is an important parameter in describing the radiative impacts of aerosols. It is important to constrain the RI of aerosol components, since there is still significant uncertainty regarding the RI of biomass burning aerosols. Experimentally measured extinction cross-sections, scattering cross-sections, and single scattering albedos for white pine biomass burning (BB aerosols under two different burning and sampling conditions were modeled using T-matrix theory. The refractive indices were extracted from these calculations. Experimental measurements were conducted using a cavity ring-down spectrometer to measure the extinction, and a nephelometer to measure the scattering of size-selected aerosols. BB aerosols were obtained by burning white pine using (1 an open fire in a burn drum, where the aerosols were collected in distilled water using an impinger, and then re-aerosolized after several days, and (2 a tube furnace to directly introduce the BB aerosols into an indoor smog chamber, where BB aerosols were then sampled directly. In both cases, filter samples were also collected, and electron microscopy images were used to obtain the morphology and size information used in the T-matrix calculations. The effective radius of the particles collected on filter media from the open fire was approximately 245 nm, whereas it was approximately 76 nm for particles from the tube furnace burns. For samples collected in distilled water, the real part of the RI increased with increasing particle size, and the imaginary part decreased. The imaginary part of the RI was also significantly larger than the reported values for fresh BB aerosol samples. For the particles generated in the tube furnace, the real part of the RI decreased with particle size, and the imaginary part was much smaller and nearly constant. The RI is sensitive to particle size and sampling method, but there was no wavelength dependence over the range considered (500
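The single-scattering albedo reported alongside the extinction and scattering measurements is simply their ratio, with absorption following by difference. A trivial sketch with hypothetical cross-sections:

```python
# Sketch of the single-scattering-albedo bookkeeping used in such studies:
# omega = C_sca / C_ext, and C_abs = C_ext - C_sca. Values are hypothetical,
# not measurements from the cited work.

def single_scattering_albedo(c_sca, c_ext):
    return c_sca / c_ext

c_ext, c_sca = 5.0e-14, 4.2e-14            # m^2, illustrative
omega = single_scattering_albedo(c_sca, c_ext)
c_abs = c_ext - c_sca
```

The imaginary part of the refractive index is then constrained by matching such measured cross-sections with T-matrix (or Mie) calculations over candidate RI values.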
Energy Technology Data Exchange (ETDEWEB)
Poletiko, C.; Hueber, C. [Inst. de Protection et de Surete Nucleaire, C.E. Cadarache, St. Paul-lez-Durance (France); Fabre, B. [CISI, C.E. Cadarache, St. Paul-lez-Durance (France)
1996-12-01
In the case of a severe nuclear accident, radioactive material may be released into the environment. Among the fission products involved are the very volatile iodine isotopes. However, the chemical forms are not well known due to the presence of different species in the containment with which iodine may rapidly react to form aerosols, molecular iodine, hydroiodic acid and iodo-organics. Tentative explanations of the different mechanisms were obtained through bench-scale tests. A series of tests was performed at AEA Harwell (GB) to study parameters such as pH, dose rate, concentration, gas flow rate and temperature in relation to molecular iodine production, under dynamic conditions. Another set of tests was performed at AECL Whiteshell (CA) to study the behaviour of painted coupons, standing in the gas phase, the liquid phase or both, with iodine compounds under radiation. The purpose of our paper is to synthesize the data and compare the results to IODE code calculations. Some parameters of the code were studied to best fit the experimental results. A law concerning the reverse reaction of iodide radiolytic oxidation has been proposed as a function of pH, concentrations and gas flow rate. This law does not apply for dose-rate variations. For the study of painted coupons, it was pointed out that molecular iodine tends to be adsorbed or chemically absorbed on the surface in the gas phase, but the mechanism should be more sophisticated in the aqueous phase. The iodo-organics present in the liquid phase tend to be partly or totally destroyed by oxidation under radiation (depending upon the dose delivered). These points are discussed. (author) 18 figs., 3 tabs., 15 refs.
International Nuclear Information System (INIS)
Hodge, W.L. Jr.
1976-01-01
A multielectron transition is an atomic transition in which two or three electrons change their states and a single photon is emitted. Although the mechanism was postulated in the 1930s and observed in optical spectra, little research has been done since then. Experiments using heavy ion accelerators have measured satellite lines lower in energy than the Kα1,2 energy and higher in energy than the Kβ satellite structure. These transitions are multielectron transitions. Experimental spectra of x-ray transitions induced by heavy ion bombardment are presented, and the experimental energies are compared to Hartree-Fock transition energies. The transitions observed lower in energy than the Kα line are two electron--one photon radiative Auger and three electron--one photon radiative electron rearrangement transitions. Experiments at other laboratories have measured satellite lines higher in energy than the Kβ satellite structure. Relativistic Dirac-Fock transition energies will be compared to the experimental energies and the transitions will be shown to be two electron--one photon x-ray transitions. Heavy ion bombardment creates multiple inner-shell vacancies so numerous that the satellite lines can be more intense than the diagram lines. Theoretical transition energies from five different self-consistent-field atomic physics computer programs will be compared to the Kα satellite and Kα hypersatellite transitions of calcium. Transition energies from Desclaux's relativistic Dirac-Fock program will be compared to the diagram lines of uranium and to other theoretical K x-ray transition energies of Z = 120. A discussion of how to calculate the term energies of a given configuration using the Slater F and G integrals is included
Energy Technology Data Exchange (ETDEWEB)
Frost, G. J.; Fried, Alan; Lee, Y.- N.; Wert, B.; Henry, B.; Drummond, J. R.; Evans, M. J.; Fehsenfeld, Fred C.; Goldan, P. D.; Holloway, J. S.; Hubler, Gerhard F.; Jakoubek, R.; Jobson, B Tom T.; Knapp, K.; Kuster, W. C.; Roberts, J.; Rudolph, Jochen; Ryerson, T. B.; Stohl, A.; Stroud, C.; Sueper, D. T.; Trainer, Michael; Williams, J.
2002-04-18
Formaldehyde (CH2O) measurements from two independent instruments are compared with photochemical box model calculations. The measurements were made on the National Oceanic and Atmospheric Administration P-3 aircraft as part of the 1997 North Atlantic Regional Experiment (NARE 97). The data set considered here consists of air masses sampled between 0 and 8 km over the North Atlantic Ocean which do not show recent influence from emissions or transport. These air masses therefore should be in photochemical steady state with respect to CH2O when constrained by the other P-3 measurements, and methane oxidation was expected to be the predominant source of CH2O in these air masses. For this data set both instruments measured identical CH2O concentrations to within 40 parts per trillion by volume (pptv) on average over the 0–800 pptv range, although differences larger than the combined 2σ total uncertainty estimates were observed between the two instruments in 11% of the data. Both instruments produced higher CH2O concentrations than the model in more than 90% of this data set, with a median measured-minus-modeled [CH2O] difference of 0.13 or 0.18 ppbv (depending on the instrument), or about a factor of 2. Such large differences cannot be accounted for by varying model input parameters within their respective uncertainty ranges. After examining the possible reasons for the model-measurement discrepancy, we conclude that there are probably one or more additional unknown sources of CH2O in the North Atlantic troposphere.
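The photochemical steady-state argument above can be sketched numerically. Assuming, purely for illustration (the rate constants and concentrations below are round textbook-style numbers, not values from the study), that methane oxidation with unit CH2O yield is the only source and that photolysis plus reaction with OH are the only sinks:

```python
# Hypothetical steady-state CH2O estimate: production from CH4 + OH balanced
# against photolysis and CH2O + OH loss. All numbers are illustrative.
k_ch4_oh  = 6.3e-15   # cm^3 s^-1, CH4 + OH rate constant (approximate)
n_ch4     = 4.5e13    # cm^-3, ~1.8 ppmv CH4 at mid-tropospheric air density
n_oh      = 1.0e6     # cm^-3, typical daytime OH (assumed)
j_ch2o    = 8.0e-5    # s^-1, combined CH2O photolysis frequency (assumed)
k_ch2o_oh = 8.5e-12   # cm^3 s^-1, CH2O + OH rate constant (approximate)
n_air     = 2.5e19    # cm^-3, air number density

production = k_ch4_oh * n_ch4 * n_oh      # cm^-3 s^-1, unit CH2O yield assumed
loss_freq  = j_ch2o + k_ch2o_oh * n_oh    # s^-1
ch2o_pptv  = production / loss_freq / n_air * 1e12
print(f"steady-state CH2O ~ {ch2o_pptv:.0f} pptv")
```

With these round numbers the balance lands near 130 pptv, i.e. within the 0–800 pptv range the instruments observed, which is the sense in which the box model is "constrained" by the other measurements.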
Benchmarking foreign electronics technologies
Energy Technology Data Exchange (ETDEWEB)
Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.
1994-12-01
This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.
Calculation of neutron spectra produced in neutron generator target: Code testing.
Gaganov, V V
2018-03-01
DT-neutron spectra calculated using the SRIANG code were benchmarked against the results obtained by widely used Monte Carlo codes: PROFIL, SHORIN, TARGET, ENEA-JSI, MCUNED, DDT and NEUSDESC. The comparison of the spectra obtained by the different codes confirmed the correctness of the SRIANG calculations. Cross-checking of the compared spectra revealed some systematic features and possible errors in the analysed codes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Comparison of Two Prostate Cancer Risk Calculators that Include the Prostate Health Index.
Roobol, Monique J; Vedder, Moniek M; Nieboer, Daan; Houlgatte, Alain; Vincendeau, Sébastien; Lazzeri, Massimo; Guazzoni, Giorgio; Stephan, Carsten; Semjonow, Axel; Haese, Alexander; Graefen, Markus; Steyerberg, Ewout W
2015-09-01
Risk prediction models for prostate cancer (PCa) have become important tools in reducing unnecessary prostate biopsies. The Prostate Health Index (PHI) may increase the predictive accuracy of such models. To compare two PCa risk calculators (RCs) that include PHI, we evaluated the predictive performance of a previously developed PHI-based nomogram and updated versions of the European Randomized Study of Screening for Prostate Cancer (ERSPC) RCs based on digital rectal examination (DRE): RC3 (no prior biopsy) and RC4 (prior biopsy). For the ERSPC updates, the original RCs were recalibrated and PHI was added as a predictor. The PHI-updated ERSPC RCs were compared with the Lughezzani nomogram in 1185 men from four European sites. Outcomes were biopsy-detectable PCa and potentially advanced or aggressive PCa, defined as clinical stage >T2b and/or a Gleason score ≥7 (clinically relevant PCa). The PHI-updated ERSPC models had a combined area under the receiver operating characteristic curve (AUC) of 0.72 for all PCa and 0.68 for clinically relevant PCa. For the Lughezzani PHI-based nomogram, AUCs were 0.75 for all PCa and 0.69 for clinically relevant PCa. For men without a prior biopsy, PHI-updated RC3 resulted in AUCs of 0.73 for PCa and 0.66 for clinically relevant PCa. Decision curves confirmed these patterns, although the number of clinically relevant cancers was low. Differences between RCs that include PHI are small. Addition of PHI to an RC leads to further reductions in the rate of unnecessary biopsies when compared to a strategy based on prostate-specific antigen measurement. Risk prediction models for prostate cancer have become important tools in reducing unnecessary prostate biopsies. We compared two risk prediction models for prostate cancer that include the Prostate Health Index. We found that these models are equivalent to each other, and both perform better than the prostate-specific antigen test alone in predicting cancer.
H.B. Robinson-2 pressure vessel benchmark
Energy Technology Data Exchange (ETDEWEB)
Remec, I.; Kam, F.B.K.
1998-02-01
The H. B. Robinson Unit 2 Pressure Vessel Benchmark (HBR-2 benchmark) is described and analyzed in this report. Analysis of the HBR-2 benchmark can be used as partial fulfillment of the requirements for the qualification of the methodology for calculating neutron fluence in pressure vessels, as required by the U.S. Nuclear Regulatory Commission Regulatory Guide DG-1053, "Calculational and Dosimetry Methods for Determining Pressure Vessel Neutron Fluence". Section 1 of this report describes the HBR-2 benchmark and provides all the dimensions, material compositions, and neutron source data necessary for the analysis. The measured quantities, to be compared with the calculated values, are the specific activities at the end of fuel cycle 9. The characteristic feature of the HBR-2 benchmark is that it provides measurements on both sides of the pressure vessel: in the surveillance capsule attached to the thermal shield and in the reactor cavity. In section 2, the analysis of the HBR-2 benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed with three multigroup libraries based on ENDF/B-VI: BUGLE-93, SAILOR-95 and BUGLE-96. The average ratio of the calculated-to-measured specific activities (C/M) for the six dosimeters in the surveillance capsule was 0.90 ± 0.04 for all three libraries. The average C/Ms for the cavity dosimeters (without the neptunium dosimeter) were 0.89 ± 0.10, 0.91 ± 0.10, and 0.90 ± 0.09 for the BUGLE-93, SAILOR-95 and BUGLE-96 libraries, respectively. It is expected that agreement of the calculations with the measurements, similar to the agreement obtained in this research, should typically be observed when the discrete-ordinates method and ENDF/B-VI libraries are used for the HBR-2 benchmark analysis.
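The C/M summary statistics quoted above are means with standard deviations over the dosimeter set. As a minimal sketch (the six activity ratios below are hypothetical placeholders, not the report's data), they can be reproduced as:

```python
import math

def cm_summary(cm_ratios):
    """Mean and sample standard deviation of calculated-to-measured ratios."""
    n = len(cm_ratios)
    mean = sum(cm_ratios) / n
    var = sum((r - mean) ** 2 for r in cm_ratios) / (n - 1)
    return mean, math.sqrt(var)

# Hypothetical C/M values for six surveillance-capsule dosimeters
ratios = [0.87, 0.92, 0.94, 0.85, 0.91, 0.93]
mean, sd = cm_summary(ratios)
print(f"C/M = {mean:.2f} +/- {sd:.2f}")  # → C/M = 0.90 +/- 0.04
```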
Murchie, Brent; Tandon, Kanwarpreet; Hakim, Seifeldin; Shah, Kinchit; O'Rourke, Colin; Castro, Fernando J
2017-04-01
Colorectal cancer (CRC) screening guidelines likely over-generalize CRC risk: 35% of Americans are not up to date with screening, and the incidence of CRC in younger patients is growing. We developed a practical prediction model for high-risk colon adenomas in an average-risk population, using an expanded definition of high-risk polyps (≥3 nonadvanced adenomas) to identify patients at higher-than-average risk. We also compared results with previously created calculators. Patients aged 40 to 59 years undergoing first-time average-risk screening or diagnostic colonoscopies were evaluated. Risk calculators for advanced adenomas and high-risk adenomas were created based on age, body mass index, sex, race, and smoking history. Previously established calculators with similar risk factors were selected for comparison of the concordance statistic (c-statistic) and external validation. A total of 5063 patients were included. Advanced adenomas and high-risk adenomas were seen in 5.7% and 7.4% of the patient population, respectively. The c-statistic for our calculator was 0.639 for the prediction of advanced adenomas and 0.650 for high-risk adenomas. When applied to our population, all previous models had lower c-statistics, although one performed similarly. Our model compares favorably to previously established prediction models. Age and body mass index were used as continuous variables, likely improving the c-statistic. The model also reports absolute predictive probabilities of advanced and high-risk polyps, allowing for more individualized risk assessment of CRC.
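The c-statistic reported above is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A minimal sketch of its computation (toy data, not the study's cohort):

```python
def c_statistic(outcomes, risks):
    """AUC / concordance: fraction of case-vs-non-case pairs ranked
    correctly by predicted risk, with ties counting 0.5."""
    cases = [r for y, r in zip(outcomes, risks) if y == 1]
    controls = [r for y, r in zip(outcomes, risks) if y == 0]
    wins = sum(1.0 if c > k else 0.5 if c == k else 0.0
               for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

# Toy example: two adenoma cases, two controls
print(c_statistic([0, 0, 1, 1], [0.10, 0.40, 0.35, 0.80]))  # → 0.75
```

A value of 0.5 means the calculator ranks no better than chance, 1.0 means perfect discrimination, which puts the reported 0.639–0.650 range in context.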
Bernardini, M; Smadja, C; Faraggi, M; Orio, S; Petitguillaume, A; Desbrée, A; Ghazzar, N
2014-11-01
Different methods to calculate (90)Y resin microsphere activity for Selective Internal Radiation Therapy (SIRT) were compared. Such a comparison is not yet available and is needed in clinical practice to optimize patient-specific treatment planning. 32 (99m)Tc-macroaggregate (MAA) evaluations were performed, followed by 26 treatments. Four methods to calculate (90)Y activity were applied retrospectively: three based on Body Surface Area (BSA) and one based on the MIRD formalism, the partition model (PM). Relationships between calculated activities, lung breakthrough (LB), the activity concentration ratio between lesions and healthy liver (T/N) and tumour involvement were investigated; lobar and whole-liver treatments were analysed separately. Without attenuation correction, LB was overestimated by 65%. In all cases, the estimated lung doses remained below 30 Gy; thus, the maximal injectable activity (MIA) is not limited by lung irradiation. Moreover, LB was not significantly related to T/N, tumour involvement or radiochemical purity (RP). Differences in calculated activity between the four methods were extremely large; in particular, they were greater between BSA-based and PM activities for lobar treatments (from -85% to 417%) than for whole-liver treatments (from -49% to 61%). Two values of the T/N ratio were identified as thresholds: for BSA-based methods, healthy liver doses are much higher than 30 Gy when T/N 4. As PM accounts for the uptake ratio between normal and tumour liver, this method should be employed over BSA-based methods. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
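The partition model favoured above splits the injected activity between tumour and normal liver according to the measured T/N uptake ratio, subject to a normal-liver dose limit. A sketch under common simplifying assumptions (all beta energy absorbed locally, the standard 90Y dose factor of about 49.67 Gy·kg/GBq; the function and parameter names are ours, not the paper's):

```python
def partition_model_activity(d_liver_max_gy, m_tumour_kg, m_liver_kg,
                             t_n, lung_shunt):
    """Prescribed 90Y activity (GBq) under a simplified MIRD partition model.

    Assumes all beta energy is absorbed locally, so
    dose (Gy) ~= 49.67 * activity (GBq) / mass (kg).
    t_n is the tumour-to-normal-liver activity concentration ratio;
    lung_shunt is the fraction of activity shunted to the lungs.
    """
    DOSE_FACTOR = 49.67  # Gy * kg / GBq for 90Y (local absorption)
    # Normal-liver activity allowed by the dose limit
    a_liver = d_liver_max_gy * m_liver_kg / DOSE_FACTOR
    # Tumour concentration is t_n times the normal-liver concentration
    a_tumour = t_n * (a_liver / m_liver_kg) * m_tumour_kg
    # Total injected activity, inflated for the lung shunt fraction
    return (a_liver + a_tumour) / (1.0 - lung_shunt)

# Illustrative case: 30 Gy liver limit, 0.3 kg tumour, 1.5 kg liver, T/N = 4, 5% shunt
print(round(partition_model_activity(30.0, 0.3, 1.5, 4.0, 0.05), 2))  # → 1.72
```

The dependence on T/N in the last two arguments is exactly why PM and the purely anthropometric BSA methods diverge so strongly for lobar treatments.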
NASA Software Engineering Benchmarking Study
Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.
2013-01-01
was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, comprises items that can be implemented to improve NASA's software engineering practices and to help address many of the items on NASA's list of top software engineering issues.
1. Develop and implement standard contract language for software procurements.
2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements.
3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort.
4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA.
5
Energy Technology Data Exchange (ETDEWEB)
Lievin, A.
1987-02-01
The article compares the calculation methods of Janssen and Reimbert for the pressures exerted on silo walls by ensiled pulverulent materials. The methods give very similar results at great depths, but very different results at shallow depths. (MOS).
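Janssen's classical silo equation, one of the two methods compared, balances the weight of a horizontal slice of material against wall friction, so the pressure saturates exponentially with depth; this is why the two methods converge at great depths. A sketch (parameter values are illustrative, not from the article):

```python
import math

def janssen_vertical_pressure(z, rho, r_h, mu, k, g=9.81):
    """Janssen (1895) vertical pressure (Pa) at depth z in an ensiled material.

    rho: bulk density (kg/m^3); r_h: hydraulic radius = area/perimeter (m);
    mu: wall friction coefficient; k: lateral-to-vertical pressure ratio.
    """
    p_inf = rho * g * r_h / (mu * k)        # asymptotic deep-silo pressure
    return p_inf * (1.0 - math.exp(-mu * k * z / r_h))

rho, r_h, mu, k = 800.0, 1.0, 0.4, 0.5      # illustrative grain-like values
print(janssen_vertical_pressure(0.5, rho, r_h, mu, k))   # shallow: near rho*g*z
print(janssen_vertical_pressure(50.0, rho, r_h, mu, k))  # deep: near p_inf
```

At shallow depths the exponential is nearly linear and the result tracks the hydrostatic value rho*g*z; it is precisely in this shallow regime that Reimbert's method departs from Janssen's.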
Energy Technology Data Exchange (ETDEWEB)
Grant, Carlos; Marconi, Javier; Serra, Oscar [Comision Nacional de Energia Atomica, Buenos Aires (Argentina)]. E-mail: grant@cnea.gov.ar; Mollerach, Ricardo; Fink, Jose [Nucleoelectrica Argentina S.A., Buenos Aires (Argentina)]. E-mail: RMollerach@na-sa.com.ar; JFink@na-sa.com.ar
2005-07-01
Nowadays, the increased calculation capacity of modern computers allows us to evaluate the 2D and 3D flux and power distributions of a nuclear reactor in a reasonable amount of time using a Monte Carlo method. This method gives results that can be considered the most reliable evaluation of flux and power distribution with a great amount of detail, which is why they can serve as benchmark cases for the validation of other methods. For this purpose, idealized models of the ATUCHA I reactor were calculated using Monte Carlo (code MCNP5). 2D and 3D cases with and without control rods and with channels without fuel elements were analyzed. All of them were modelled using a finite element code (DELFIN) and a finite difference code (PUMA). In both cases two energy groups were used. (author)
Directory of Open Access Journals (Sweden)
A. V. Pavlov
1997-08-01
This study compares the OV1-10 satellite measurements of the integral airglow intensities at 630 nm in the SAR arc regions observed in the northern and southern hemispheres as a conjugate phenomenon, with model results obtained using the time-dependent one-dimensional mathematical model of the Earth's ionosphere and plasmasphere (the IZMIRAN model) during the geomagnetic storm of 15–17 February 1967. The major enhancements to the IZMIRAN model developed in this study are the inclusion of He+ ions (three major ions: O+, H+, and He+, and three ion temperatures), updated photochemistry and energy balance equations for ions and electrons, the diffusion of NO+ and O2+ ions and of O(1D), and revised electron cooling rates arising from collisions with unexcited N2 and O2 molecules and with N2 molecules at the first vibrational level. The updated model includes the option to use either a Boltzmann or a non-Boltzmann distribution of vibrationally excited molecular nitrogen. Deviations from the Boltzmann distribution for the first five vibrational levels of N2 were calculated. The calculated distribution is highly non-Boltzmann at vibrational levels v > 2 and leads to a decrease in the calculated electron density and integral intensity at 630 nm in the northern and southern hemispheres in comparison with the values calculated using the Boltzmann vibrational distribution of N2. It is found that the intensity at 630 nm is very sensitive to the oxygen number densities. Good agreement between the modelled and measured intensities is obtained provided that, at all altitudes of the southern hemisphere, a reduction of about a factor of 1.35 in the MSIS-86 atomic oxygen densities is included in the IZMIRAN model with the non-Boltzmann vibrational distribution of N2. Including O(1D) diffusion decreases the calculated integral intensity by 4–6% in the northern hemisphere and 7–13% in
Energy Technology Data Exchange (ETDEWEB)
J. Shi; H. Chen; S. Zheng; D. Li; R.A. Rimmer; H. Wang
2006-06-26
Accurate prediction of the RF coupling between an RF cavity and the ports attached to it has been an important subject of study for RF coupler and higher-order-mode (HOM) damping design. We report recent progress and a method for simulating the RF coupling between waveguide ports and RF cavities using CST Microwave Studio in the time domain (Transient Solver). Comparisons of the measured and calculated couplings are presented. The simulated couplings and frequencies agree with the measurements within ~10% and ~0.1%, respectively. We have simulated couplings with external Qs ranging from ~100 to ~100,000, and confirmed them with measurements. The method should also work well for higher Qs, and can easily be applied in RF power coupler design and HOM damping for normal-conducting and superconducting cavities.
Staff Association
2017-01-01
On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...
EPA's Benchmark Dose Modeling Software
The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods in EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...
Benchmarking and Sustainable Transport Policy
DEFF Research Database (Denmark)
Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy
2004-01-01
In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for sustainable transport, in order to learn from the best. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, sustainable transport … is generally not advised. Several other ways in which benchmarking and policy can support one another are identified in the analysis. This leads to a range of recommended initiatives to exploit the benefits of benchmarking in transport while avoiding some of the lurking pitfalls and dead ends.
Present Status and Extensions of the Monte Carlo Performance Benchmark
Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.
2014-06-01
The NEA Monte Carlo Performance benchmark started in 2011 with the aim of monitoring, over the years, the ability to perform a full-size Monte Carlo reactor core calculation with detailed power production for each fuel pin, with axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common-type compute nodes. On true supercomputers, however, the speedup of parallel calculations continues to increase up to large numbers of processor cores. More experience is needed with calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: evaluating fission-source convergence for a system with a high dominance ratio, coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition are discussed.
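The 100-billion-history figure reflects the usual 1/√N convergence of Monte Carlo tallies: halving the statistical error costs four times the histories. A sketch (the reference run numbers are illustrative, not taken from the benchmark contributions):

```python
def histories_needed(n_ref, rel_err_ref, rel_err_target):
    """Histories needed to reach a target relative error, given a reference run.

    Monte Carlo tally errors scale as 1/sqrt(N), so
    N = N_ref * (err_ref / err_target)**2.
    """
    return n_ref * (rel_err_ref / rel_err_target) ** 2

# If 1e9 histories gave ~10% error on the smallest pin-power zones (assumed),
# reaching 1% would need a hundred times more histories:
print(f"{histories_needed(1e9, 0.10, 0.01):.0e}")  # → 1e+11
```

The quadratic cost of extra precision is what makes parallel scaling, rather than single-node speed, the limiting factor discussed above.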
Benchmarking of the SOURCES3A code for PuBe sources
Energy Technology Data Exchange (ETDEWEB)
Estes, G.P. [Los Alamos National Lab., NM (United States)
1998-12-31
The SOURCES3A code calculates neutron production rates and spectra from (α,n) reactions, spontaneous fission decays, and delayed neutron emission in homogeneous media, interface problems, and alpha-particle beams. SOURCES3A promises to be a very useful tool for generating the characteristics of unique sources for use in neutron transport calculations. A number of comparisons of SOURCES3A results with experiments for various source types were presented in Ref. 1. Here, additional spectral benchmarks against both experiments and other calculations for homogeneous PuBe13 sources are provided. Preliminary results for neutron production rates from Pu-Be metal interfaces are also given. Note that SOURCES3A assumes that alpha-particle ranges are small relative to the dimensions of the materials.
Integral benchmark for the attenuation of 14 MeV neutrons passing through lead layers
International Nuclear Information System (INIS)
Antonov, S.; Vojkov, G.; Ilieva, K.; Jordanova, J.
1987-01-01
The attenuation of the total 14 MeV neutron flux passing through an iron collimator and lead slabs located directly behind it, with thicknesses of 2.5, 5.0, 7.5, 10.0 and 12.5 cm, has been studied. A comparison is made between the calculated data and results from a benchmark experiment. The experiment is analysed with a one-dimensional model based on the ANISN code and a three-dimensional model based on the MORSE code (Monte Carlo method). The energy dependences of the cross sections and neutron fluxes are represented by a 25-group approximation based on the SUPERTOG program and the ENDL and ENDF/B-4 files. The mean deviation of the MORSE-calculated data from the experimental ones is 6.4% for ENDL data and 8.5% for ENDF data. The ANISN-calculated data show practically full coincidence with the experimental ones for both ENDL and ENDF.
Ricci, Marilena; Lofrumento, Cristiana; Becucci, Maurizio; Castellucci, Emilio M
2018-01-05
Using time-dependent density functional theory with the B3LYP functional and LANL2DZ/6-31+g(d,p) basis sets, static and pre-resonance Raman spectra of the indigo-Ag2 complex have been calculated. Structure optimization, excitation energies and pre-resonance Raman spectra of the indigo molecule have been obtained at the same level of theory. The available experimental Raman spectra at 1064, 785 and 514 nm and the SERS spectra at 785 and 514 nm are well reproduced by the calculations. The experimental SERS spectra are compared with the calculated pre-resonance Raman spectra obtained for the indigo-Ag2 complex. The Raman activities calculated under the infinite-lifetime approximation show a strong dependence on the proximity to the energy, and on the oscillator strength, of the excitation electronic transition. The comparison of the integrated enhancement factors (EFs) for the indigo and indigo-Ag2 calculated Raman spectra gives some hints as to the enhancement mechanisms acting at the different excitation wavelengths. Whereas for excitation at 785 nm the enhancement mechanism for the Raman spectrum of the metal complex appears to be the chemical one, the strong increase (ten times) in the integrated EF of the Raman spectra of the complex for 514 nm excitation suggests the onset of other enhancement mechanisms. Assuming that intra-cluster transitions with high oscillator strength can be thought of as mimicking surface plasmon excitations, we suggest the onset of the electromagnetic mechanism (EM) as the origin of the Raman spectrum enhancement. Nevertheless, other enhancement effects cannot be ruled out, as a new molecular transition gains strength in the proximity of the excitation wavelength, as a consequence of the symmetry lowering of the molecule in the complex. A large variation across vibrational modes, by a factor of at least 10^4, was found for the EFs. This large variation in the EFs can indicate that B-term Herzberg-Teller scattering
Caon, Martin
2013-09-01
The ADELAIDE voxel model of paediatric anatomy was used with the EGSnrc Monte Carlo code to compare effective doses from computed tomography (CT) calculated with both the ICRP103 and ICRP60 definitions, which differ in their tissue weighting factors and in the tissues included. The new tissue weighting factors resulted in a lower effective dose for pelvis CT (than if calculated using ICRP60 tissue weighting factors), by 6.5%, but higher effective doses for all other examinations. The ICRP103-calculated effective dose was higher for CT abdomen + pelvis (by 4.6%), CT abdomen (by 9.5%), CT chest + abdomen + pelvis (by 6%), CT chest + abdomen (by 9.6%), CT chest (by 10.1%) and cardiac CT (by 11.5%). These values, along with published values of effective dose from CT calculated for both sets of tissue weighting factors, were used to determine single values for the ratio of ICRP103- to ICRP60-calculated effective doses for seven CT examinations. The following ICRP103:ICRP60 values are suggested for converting ICRP60-calculated effective dose to ICRP103-calculated effective dose: pelvis CT, 0.75; abdomen CT, abdomen + pelvis CT and chest + abdomen + pelvis CT, 1.00; chest + abdomen CT and chest CT, 1.15; cardiac CT, 1.25.
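The suggested conversion ratios lend themselves to a small lookup table; a sketch using the values quoted in the abstract (the dictionary keys are our own shorthand):

```python
# ICRP103:ICRP60 effective-dose ratios for CT examinations (values from the
# abstract above; key names are our shorthand for the examination types)
ICRP103_OVER_ICRP60 = {
    "pelvis": 0.75,
    "abdomen": 1.00,
    "abdomen+pelvis": 1.00,
    "chest+abdomen+pelvis": 1.00,
    "chest+abdomen": 1.15,
    "chest": 1.15,
    "cardiac": 1.25,
}

def to_icrp103(e_icrp60_msv, exam):
    """Convert an ICRP60-based CT effective dose (mSv) to an ICRP103 estimate."""
    return e_icrp60_msv * ICRP103_OVER_ICRP60[exam]

print(to_icrp103(5.0, "pelvis"))  # → 3.75
```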
Results of the benchmark for blade structural models, part A
DEFF Research Database (Denmark)
Lekou, D.J.; Chortis, D.; Belen Fariñas, A.
2013-01-01
A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor”, Task 2.2 “Lightweight structural design”. The present document describes the results of the comparison simulation runs that were performed by the partners involved within Task 2.2 of the InnWind.Eu project. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade...
International Nuclear Information System (INIS)
Greenwood, L.R.; Kneff, D.W.; Skowronski, R.P.; Mann, F.M.
1983-01-01
Fusion reactors will produce high levels of helium in surrounding materials, with a helium (appm)-to-displacement ratio of about 10-to-1 in stainless steel. This high ratio can be obtained in mixed-spectrum reactors, which are used for fusion materials testing, due to the unusually high thermal neutron cross sections for the sequential reactions 58Ni(n,γ)59Ni(n,α)56Fe. The high-energy (~340 keV) 56Fe recoils also add significantly to the displacement damage, at the rate of one DPA per 567 appm helium. Until now, the calculation of helium production in nickel has been done in a semi-empirical manner due to the lack of evaluated cross sections for 59Ni. However, this approach cannot be readily transferred between reactors, since the contributions from epithermal neutrons differ between neutron spectra. A new evaluation of the 59Ni cross sections has recently been completed, permitting us to calculate all of the required reaction rates for any given neutron spectrum. Radiometric dosimetry and helium measurements have recently been completed for several different mixed-spectrum reactors. Precise comparisons of the helium production cross sections and measurements can thus be made in well-characterized neutron spectra. Data are presented for several recent fusion materials irradiations in the Oak Ridge Research Reactor and the High Flux Isotope Reactor at Oak Ridge National Laboratory, and for the Experimental Breeder Reactor II at Argonne National Laboratory. Procedures are recommended for calculating helium production in nickel-bearing materials in any neutron spectrum.
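The two-step 58Ni → 59Ni → helium chain described above can be integrated directly once 59Ni cross sections are available. A sketch for a constant thermal flux (the cross sections below are rough single-group stand-ins for illustration, not the evaluation's data):

```python
# Approximate one-group thermal cross sections (cm^2); illustrative only
SIG_58_CAP = 4.6e-24    # 58Ni(n,gamma)59Ni capture
SIG_59_ABS = 90e-24     # total 59Ni absorption (removes 59Ni)
SIG_59_A   = 12e-24     # 59Ni(n,alpha) helium production

def helium_appm(flux, time_s):
    """Helium (appm relative to initial 58Ni) from the two-step chain,
    integrated with a simple explicit Euler scheme."""
    n58, n59, he = 1.0, 0.0, 0.0
    steps = 10000
    dt = time_s / steps
    for _ in range(steps):
        capture = SIG_58_CAP * flux * n58       # 59Ni production rate
        n59 += (capture - SIG_59_ABS * flux * n59) * dt
        he += SIG_59_A * flux * n59 * dt
        n58 -= capture * dt
    return he * 1e6

# Because 59Ni must build up first, helium grows roughly quadratically
# with fluence at first, unlike ordinary single-step (n,alpha) production.
print(helium_appm(1e14, 1e7), helium_appm(1e14, 2e7))
```

This quadratic fluence dependence is the signature of the sequential reaction, and is why the helium-to-DPA ratio in nickel-bearing steels depends so strongly on the irradiation history.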
Virtual machine performance benchmarking.
Langer, Steve G; French, Todd
2011-10-01
The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever-tightening reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High-performance requirements need to be carefully considered if the software is to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
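One of the metrics listed, local memory bandwidth, can be approximated even from a high-level language; a toy sketch of the physical-versus-virtual comparison (a real study would use a dedicated tool such as STREAM, and the buffer size and repeat count here are arbitrary):

```python
import time

def copy_bandwidth_mb_s(size_mb=64, repeats=5):
    """Rough memory-copy bandwidth: best of several timed full-buffer copies.

    Counts size_mb read plus size_mb written per copy. Running the same
    script on the bare-metal host and inside the VM gives a paired comparison.
    """
    buf = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        _ = bytes(buf)                 # forces a full read + write of the buffer
        best = min(best, time.perf_counter() - t0)
    return 2 * size_mb / best

print(f"{copy_bandwidth_mb_s():.0f} MB/s")
```

Taking the best of several repeats reduces the influence of scheduling jitter, which is typically worse inside a VM and is itself part of what such a benchmark reveals.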